Yann Neuhaus

dbi services technical blog

DBSAT, a tool for the security of your Oracle databases

Fri, 2017-06-16 10:22
What is DBSAT?

It is a free Oracle tool that you can download from My Oracle Support under the reference Doc ID 2138254.1.
Its purpose is to assess the security of your Oracle databases by analyzing the configuration and the security policies in place, in order to uncover security risks.

How does it work?

First, you collect the information from your database; then you generate a report.
The tool provides three types of report:

  • Text report
  • Spreadsheet report
  • HTML report

In a few words:

  • Quick and easy to implement and use
  • Detailed reports
  • Detects real security issues
  • No additional cost if you have an Oracle support contract
  • Findings are highlighted with different colors (blue, green, yellow, red)

The steps required to put it in place

  • Installation of the tool (DBSAT)
  • Data collection (DBSAT Collector)
  • Risk report (DBSAT Reports)
Installing the tool

The tool is written in Python and requires version 2.6 or higher (check your version with: python -V).

The installation directory can be wherever you like, since the installation is nothing more than unzipping an archive, but we advise you to extract it into the Oracle user’s home directory (/Home/Oracle/DBSAT).

The DBSAT Collector must be run as an OS user with read permissions on the files and directories under ORACLE_HOME, in order to collect and process the data.

If you use a Database Vault environment, you will need to use a non-SYS user with the DV_SECANALYST role.

Privileges and roles required for the user running the DBSAT Collector (a grant script is sketched after the list):

    • CREATE SESSION
    • SELECT on SYS.REGISTRY$HISTORY
    • Role SELECT_CATALOG_ROLE
    • Role DV_SECANALYST (if Database Vault is enabled)
    • Role AUDIT_VIEWER (12c only)
    • Role CAPTURE_ADMIN (12c only)
    • SELECT on SYS.DBA_USERS_WITH_DEFPWD (11g and 12c)
    • SELECT on AUDSYS.AUD$UNIFIED (12c only)
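
A hedged sketch of creating a dedicated collection user with the grants listed above (the user name and password are assumptions, and only the grants matching your database version should be applied):

sqlplus -s / as sysdba <<'EOF'
-- hypothetical dedicated user for running the DBSAT Collector
create user dbsat_user identified by "MyStrongPwd#1";
grant create session to dbsat_user;
grant select on sys.registry$history to dbsat_user;
grant select_catalog_role to dbsat_user;
grant select on sys.dba_users_with_defpwd to dbsat_user;
-- 12c only
grant audit_viewer to dbsat_user;
grant capture_admin to dbsat_user;
grant select on audsys.aud$unified to dbsat_user;
-- only if Database Vault is enabled (must be granted by a Database Vault owner)
-- grant dv_secanalyst to dbsat_user;
EOF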

You will find more information in the documentation at the following address:
https://docs.oracle.com/cd/E76178_01/SATUG/toc.htm

Collecting the data

The data collection is mandatory; it is required to generate the reports (text, HTML or spreadsheet).
Very simple to use and secure: dbsat collect /file_name
[Screenshot 2017-06-09 17:00]

The risk report

The report can be generated in different ways, for example by excluding one or more sections with the -x option (a sketch is shown after the section list below).
dbsat report [-a] [-n] [-x section] pathname

Options:
[Screenshot 2017-06-12 17:52]

The available sections

USER   : User accounts
PRIV   : Privileges and roles
AUTH   : Authorization controls
CRYPT  : Data encryption
ACCESS : Access control
AUDIT  : Auditing
CONF   : Database configuration
NET    : Network configuration
OS     : Operating system
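
Putting the two steps together, a minimal sketch could look like this (assuming DBSAT was unzipped into the current directory and that we connect locally as SYSDBA; the file names are just examples):

# Step 1: collect the data
./dbsat collect / dbit122_data

# Step 2: generate the report files from the collected data,
#         excluding the OS section with -x as an example
./dbsat report -x OS dbit122_data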

[Screenshot 2017-06-09 17:31]
Once unzipped, we have our three types of file: text, spreadsheet and HTML.

[Screenshot 2017-06-09 17:35]

Reading the report

Report preview.
If you use PDBs, you will need to collect the data for each PDB individually (a possible loop is sketched below).
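
A hedged sketch of such a per-PDB collection loop (the PDB names, the connection user and the EZConnect strings are assumptions, for illustration only):

# run one collection per PDB, connecting to each PDB service individually
for pdb in PDB1 PDB2 PDB3; do
  ./dbsat collect dbsat_user@//localhost:1521/${pdb} dbsat_${pdb}
done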

[Screenshot 2017-06-12 17:11]

[Screenshot 2017-06-12 17:14]

The different report statuses

You can use these statuses as a guideline for implementing the recommendations. They can be used to prioritize and schedule the changes according to the risk level: a severe risk may require immediate corrective action, while other risks could be addressed during a planned downtime or combined with other maintenance activities.

Pass       : No error found
Evaluate   : Requires manual analysis
Risk       : Low
Risk       : Medium / significant
Risk       : High

Conclusion

Try it without moderation: it will give you a global view of the security setup of your databases. Once the issues are identified, all that remains is to fix them.

 


OUD 11.1.2.3 – How to create an OUD Start/Stop/Status script on Oracle Linux 6

Fri, 2017-06-16 08:39

One of the questions that pops up immediately after you have successfully installed your OUD is how to integrate it into the automatic startup routines of the OS.

My example here shows how to do it on Oracle Linux 6. On Oracle Linux 7 it looks a little different. Fortunately, Oracle delivers a script called “create-rc-script”, which can be found in your asinst home directory. It lets you specify the user name under which the OUD will run, the Java home and a few more options. The whole documentation can be found under the following link.

https://docs.oracle.com/cd/E52734_01/oud/OUDAG/appendix_cli.htm#OUDAG01144

Running “--help” gives you all the options.

$ cat /etc/oracle-release
Oracle Linux Server release 6.9

$ ./create-rc-script -V
Oracle Unified Directory 11.1.2.3.170418
Build 20170206221556Z

$ ./create-rc-script --help
Create an RC script that may be used to start, stop, and restart the Directory
Server on UNIX-based systems

Usage:  create-rc-script  {options}
        where {options} include:

-f, --outputFile {path}
    The path to the output file to create
-u, --userName {userName}
    The name of the user account under which the server should run
-j, --javaHome {path}
    The path to the Java installation that should be used to run the server
-J, --javaArgs {args}
    A set of arguments that should be passed to the JVM when running the server

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care that you start the “create-rc-script” script from your asinst_1 home, and not from the Oracle_OUD1 home. The reason is that “create-rc-script” sets the working directory to your current directory (“WORKING_DIR=`pwd`”), and if it is not started from the correct directory, you might end up with a non-working start/stop script.

So, to do it correctly, switch to your OUD asinst home first and run it from there. I am using only a few options here: the Java home, the user name under which the OUD will run, and the output file.

$ cd /u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ pwd
/u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ ./create-rc-script --userName oracle --javaHome /u01/app/oracle/product/middleware/jdk --outputFile /home/oracle/bin/oud

The output generated by the script will be the start/stop script.

$ pwd
/home/oracle/bin

$ ls -l
total 4
-rwxr-xr-x. 1 oracle oinstall 862 Jun 16 13:35 oud

$ cat oud
#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

The generated start/stop script looks quite complete. The only thing missing is the “status” section which is quite useful from my point of view. To add the status section, we can use the “status” script, which is also part of the OUD installation.

$ ./status --help
This utility can be used to display basic server information

Usage:  status {options}
        where {options} include:


LDAP Connection Options

-D, --bindDN {bindDN}
    DN to use to bind to the server
    Default value: cn=Directory Manager
-j, --bindPasswordFile {bindPasswordFile}
    Bind password file
-o, --saslOption {name=value}
    SASL bind options
-X, --trustAll
    Trust all server SSL certificates
-P, --trustStorePath {trustStorePath}
    Certificate trust store path
-U, --trustStorePasswordFile {path}
    Certificate trust store PIN file
-K, --keyStorePath {keyStorePath}
    Certificate key store path
-u, --keyStorePasswordFile {keyStorePasswordFile}
    Certificate key store PIN file
-N, --certNickname {nickname}
    Nickname of certificate for SSL client authentication
--connectTimeout {timeout}
    Maximum length of time (in milliseconds) that can be taken to establish a
    connection.  Use '0' to specify no time out
    Default value: 30000

Utility Input/Output Options

-n, --no-prompt
    Use non-interactive mode.  If data in the command is missing, the user is
    not prompted and the tool will fail
-s, --script-friendly
    Use script-friendly mode
--propertiesFilePath {propertiesFilePath}
    Path to the file containing default property values used for command line
    arguments
--noPropertiesFile
    No properties file will be used to get default command line argument values
-r, --refresh {period}
    When this argument is specified, the status command will display its
    contents periodically.  Used to specify the period (in seconds) between two
    displays of the status

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care: by default, the status utility is interactive and asks you for the user bind DN and the password. So, the interactive version is not usable in our start/stop script.

$ ./status

>>>> Specify Oracle Unified Directory LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         6

          --- Server Details ---
Host Name:                dbidg01
Administrative Users:     cn=Directory Manager
Installation Path:        /u01/app/oracle/product/middleware/Oracle_OUD1
Instance Path:            /u01/app/oracle/product/middleware/asinst_1/OUD
Version:                  Oracle Unified Directory 11.1.2.3.170418
Java Version:             1.7.0_141
Administration Connector: Port 4444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol               : State
-------------:------------------------:---------
--           : LDIF                   : Disabled
8899         : Replication (secure)   : Enabled
0.0.0.0:161  : SNMP                   : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS                  : Enabled
0.0.0.0:1689 : JMX                    : Disabled
...
...

And we need to do some adjustments, like in the following example.

./status --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24

OK. To complete the script, we can add the status section to the script.

$ cat oud

#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
status)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/status" --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart | status }"
  exit 1
  ;;
esac

Last but not least, we need to move it with the root user to the /etc/init.d directory and add it via chkconfig.

# mv /home/oracle/bin/oud /etc/init.d/
# chkconfig --add oud

# chkconfig --list | grep oud
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off

That’s all. The OUD part is done now. But what about the ODSM? We want the WebLogic domain to start up automatically as well. To do so, we need another script.

$ cat /home/oracle/bin/weblogic

#!/bin/sh
#
#
# chkconfig: 345 90 30
# description: WebLogic 10.3.6 startup script
#

# Specify the path to the Java installation to use
JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export JAVA_HOME

BASE_DOMAIN="/u01/app/oracle/product/middleware/user_projects/domains/base_domain"
export BASE_DOMAIN

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
stop)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  exit ${?}
  ;;
restart)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

Now it’s time to add the WebLogic script to the startup routines as well.

# mv /home/oracle/bin/weblogic /etc/init.d/
# chkconfig --add weblogic
# chkconfig --list | grep weblogic
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

After everything is set up, it is time to test it. ;-)

# chkconfig --list | egrep '(weblogic|oud)'
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

# init 6

Now just check if everything came up correctly.
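
A quick way to do that, right after the reboot, might look like this (a minimal sketch; the service names match the init scripts created above, and the process pattern for WebLogic is an assumption):

# our oud init script provides a status action
service oud status

# the weblogic init script has no status action, so check the process instead
ps -ef | grep -i '[w]eblogic.Server'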

Conclusion

The OUD comes with a quite useful script, “create-rc-script”. However, in case you use the ODSM and you want the OUD status section as well, some adjustments have to be made.

 


12c NSSn process for Data Guard SYNC transport

Thu, 2017-06-15 10:15

In a previous post https://blog.dbi-services.com/dataguard-wait-events-have-changed-in-12c/ I mentioned the new processes NSA for ASYNC transport and NSS for SYNC transport. I’m answering a bit late to a comment about the number of processes: yes there is one NSSn process per LOG_ARCHIVE_DEST_n destination in SYNC and the numbers match.

Here is my configuration with two physical standby:
DGMGRL> show configuration
 
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)

Both are in SYNC:
DGMGRL> show database orclb logxptmode;
LogXptMode = 'sync'
DGMGRL> show database orclc logxptmode;
LogXptMode = 'sync'

I can see two NSS processes:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 4952 1 0 16:05 ? 00:00:00 ora_nss3_ORCLA
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

Here are the two log archive dest:
SQL> select name,value from v$parameter where regexp_like(name,'^log_archive_dest_[23]$');
NAME VALUE
---- -----
log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)

I set the 3rd one in ASYNC:
DGMGRL> edit database orclc set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS3 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

I set the 2nd destination to ASYNC:
DGMGRL> edit database orclb set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS2 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"

Now starting the 3rd destination first:
DGMGRL> edit database orclc set property logxptmode=SYNC;
Property "logxptmode" updated

I can see that nss3 has started as it is the log_archive_dest_3 which is in SYNC now:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

Then starting the second one:
DGMGRL> edit database orclb set property logxptmode=SYNC;
Property "logxptmode" updated

Here are the two processes:

DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA
oracle 5393 1 0 16:20 ? 00:00:00 ora_nss2_ORCLA

So if you see some SYNC Remote Write events in ASH, look at the program name to know which destination it is.
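
For example, a quick ASH query like the following might help (a minimal sketch; it assumes you are licensed to query Active Session History and that the event name matches your version):

sqlplus -s / as sysdba <<'EOF'
-- which NSSn process, and therefore which destination, waited on SYNC Remote Write?
select program, event, count(*)
from   v$active_session_history
where  event = 'SYNC Remote Write'
group  by program, event;
EOF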

 


A first look at EDB Postgres Enterprise Manager 7 beta – Connecting a PostgreSQL instance

Thu, 2017-06-15 10:14

In the last post we did a click/click/click setup of the PEM server. What we want to do now is to attach a PostgreSQL instance to the PEM server to be able to monitor and administer it. For that we need to install the PEM agent on a host where we have a PostgreSQL instance running (192.168.22.249 in my case, which runs PostgreSQL 10 Beta1). Let’s go…

As usual, when you want to have the systemd services generated automatically you should run the installation as root:

[root@pemclient postgres]# ls
pem_agent-7.0.0-beta1-1-linux-x64.run
[root@pemclient postgres]# chmod +x pem_agent-7.0.0-beta1-1-linux-x64.run 
[root@pemclient postgres]# ./pem_agent-7.0.0-beta1-1-linux-x64.run 

The installation itself is not a big deal, just follow the screens:

[Screenshots pem_agent1 – pem_agent9: installer screens]

Once done we have a new systemd service:

[root@pemclient postgres]# systemctl list-unit-files | grep pem
pemagent.service                              enabled 

… and the processes that make up the PEM agent:

[root@pemclient postgres]# ps -ef | grep pem
root      3454     1  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg
root      3455  3454  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg --pid 3454
root      3515  2741  0 16:43 pts/0    00:00:00 grep --color=auto pem

Heading back to the PEM web interface, the new agent is visible immediately:
pem_agent10

So, let’s add the instance:
[Screenshots pem_agent11, pem_agent12, pem_agent17, pem_agent14, pem_agent15: adding the instance]

Of course, we need to allow connections to our PostgreSQL instance from the PEM server. Adding this to the pg_hba.conf and reloading the instance fixes the issue:

host    all             all             192.168.22.248/32       md5
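
To make the new entry active, the configuration has to be reloaded. A minimal sketch of that step (the data directory location referenced by $PGDATA is an assumption for this setup):

# append the entry and reload the configuration (no restart required)
echo "host    all    all    192.168.22.248/32    md5" >> $PGDATA/pg_hba.conf
psql -c "select pg_reload_conf();"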

Once done:
pem_agent16

… and the instance is there.

In the next post we’ll set up some monitoring for our newly added PostgreSQL instance.

 


A first look at EDB Postgres Enterprise Manager 7 beta

Thu, 2017-06-15 08:09

In case you missed it: EnterpriseDB has released the beta of Postgres Enterprise Manager 7. If the installation is as easy as for the current version, it should just be a matter of clicking next. Let’s see.

Because the installer will create the systemd services, the installation should be done as root:

[root@edbpem tmp]$ ls -l
total 289076
-rw-r--r--. 1 root root 296009946 Jun  1 18:58 pem_server-7.0.0-beta1-1-linux-x64.run
[root@edbpem tmp]$ chmod +x pem_server-7.0.0-beta1-1-linux-x64.run 
[root@edbpem tmp]$ ./pem_server-7.0.0-beta1-1-linux-x64.run 

From here on, everything is graphical and straightforward:

[Screenshots pem7_1 – pem7_32: installer screens]

What you get from a process perspective is this:

[root@edbpem tmp]$ ps -ef | grep pem
postgres 13462     1  0 19:17 ?        00:00:00 /u01/app/postgres/product/96/db_2/bin/postgres -D /u02/pgdata/pem
root     13869     1  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg
root     13870 13869  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg --pid 13869
postgres 13883 13462  1 19:18 ?        00:00:02 postgres: agent1 pem 127.0.0.1(53232) idle
postgres 13885 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53234) idle
postgres 13886 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53236) idle
postgres 13887 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53238) idle
postgres 13888 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53240) idle
pem      13938 13937  0 19:18 ?        00:00:00 EDBPEM                                                              -k start -f /u01/app/postgres/product/EnterpriseDB-ApacheHTTPD/apache/conf/httpd.conf
root     14301 11016  0 19:20 pts/0    00:00:00 grep --color=auto pem

Two new systemd services have been created, so PEM should start up and shut down automatically when the server reboots:

[root@edbpem tmp]# systemctl list-unit-files | egrep "pem|postgres"
pemagent.service                              enabled 
postgresql-9.6.service                        enabled 
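
Both services can also be controlled manually; a minimal sketch (the service names are taken from the listing above):

# check and control the PEM agent
systemctl status pemagent
systemctl restart pemagent

# the repository database runs as a regular PostgreSQL service
systemctl status postgresql-9.6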

Let’s connect to PEM: https://192.168.22.248:8443/pem

pem7_33

If you have an EDB subscription, now is the time to enter the product key:

pem7_34

What you can see immediately is that the look and feel has changed to that of pgAdmin 4 (a fat client for PEM, as in the current version, is not available anymore):

pem7_35
pem7_36
pem7_37

In a next post we’ll add a PostgreSQL instance and start to monitor it.

 


Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) – Follow Up

Wed, 2017-06-14 02:14

My blog about listener rotation caused some discussion, which is great. :-) It sounds like easy stuff, but in the case of listener logging it isn’t.

https://blog.dbi-services.com/oracle-12-2-how-to-rotate-the-12-2-listener-log-diag_adr_enabled_listener-off/

Many ways exist to rotate the listener log, but I was trying to point out some issues, because there are a few, e.g.

  1. What happens when the log file reaches 4G?
  2. What are the performance implications?
  3. What happens if I move the listener.log while the listener is running (Open file descriptor)?
  4. And how to rotate the listener log with minimal impact?

The first two points have been discussed already in the previous post, so I’m not going deeper into those. Let’s take a look at the other ones and start with the file descriptor issue. In the output below, you can see that the listener has an open file descriptor “3w” which is pointing to “/u01/app/oracle/network/log/listener.log”. It can quite easily be identified by using the lsof utility (list open files).

oracle@dbidg01:/u01/app/oracle/ [DBIT122] ps -ef | grep tnslsnr | grep -v grep
oracle    4686     1  0 07:59 ?        00:00:00 /u01/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr LISTENER -inherit

oracle@dbidg01:/u01/app/oracle/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    55239 141607653 /u01/app/oracle/network/log/listener.log

So, what happens if I move the listener log while the listener is running?

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] mv listener.log listener.log.1
oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122]

And to have some data, I’m doing some sqlplus sessions here to generate more log entries …

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] ls -l listener.log
ls: cannot access listener.log: No such file or directory

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    56143 141607653 /u01/app/oracle/network/log/listener.log.1

As you can see, a new listener.log is not automatically created, and the file descriptor “3w” now points to /u01/app/oracle/network/log/listener.log.1.

That’s why it is not a good idea to move the listener log without stopping the listener logging first. And please don’t try to send a hangup signal to the listener: the listener does not understand the hangup signal (kill -HUP) and you would kill it immediately. ;-)

OK. Let’s get back to the question of how to rotate the listener log with minimal impact.

If you have activated listener logging, then there is a reason for doing so and you don’t want to lose data from that log file. If losing any listener log data is not acceptable, you have to stop the listener, rotate the log and start the listener; there is no way around that (a minimal sketch of that sequence is shown below). Of course, it has the disadvantage that new sessions cannot be established during that time.
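
The stop/rotate/start sequence could look like this (a minimal sketch; the listener name and log path are the ones used in this post):

# full-safety variant: short listener outage, but no log entries are lost
lsnrctl stop LISTENER
mv /u01/app/oracle/network/log/listener.log /u01/app/oracle/network/log/listener.log.$(date +%s)
lsnrctl start LISTENER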

However, if you want the listener to be up and running, and rotate the listener log with an absolute minimum of log data loss (we are talking about milliseconds here), then I would use the approach which I have already described in my previous post.

I have quickly written a listener rotate script which demonstrates how it can look. I know that the script is not bulletproof, and a lot of extra checks and tweaks could be built in, but you will get the idea. The script takes two parameters: the first one is the listener name, and the second one is the number of days the listener logs should be kept on disk. Everything older than that will be removed.

oracle@dbidg01:/home/oracle/ [DBIT122] ./rotate_listener.sh LISTENER 7
INFO: Check if Listener LISTENER is running
INFO: Start Listener Rotating for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Tuning Listener Logging for LISTENER OFF
INFO: Current Listener Logging for LISTENER is: OFF
INFO: Rotating Listener Log /u01/app/oracle/network/log/listener.log to /u01/app/oracle/network/log/listener.log.1497363899
INFO: Turning on Listener Logging for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Rotated successfully Listener Log for LISTENER
INFO: Starting cleanup of old listener log files
INFO: Will delete the following log files
/u01/app/oracle/network/log/listener.log.1497354123
/u01/app/oracle/network/log/listener.log.1497354122
/u01/app/oracle/network/log/listener.log.1497354121
INFO: Finished cleanup

And here are the contents of the script.

oracle@dbidg01:/home/oracle/ [DBIT122] cat rotate_listener.sh
#!/bin/bash
#
#-- Listener Name => parameter 1
#-- Delete listener log files older than number of days => parameter 2
#

#-- set -x

ListenerName=$1
NumberOfDays=$2
#-- %s  seconds since 1970-01-01 00:00:00 UTC
Date=`date +%s`

#-- Check if variable $1 and $2 are empty
if [ -z ${ListenerName} ]; then
        echo ""
        echo "INFO: Please specify the listener name"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

if [ -z ${NumberOfDays} ]; then
        echo ""
        echo "INFO: Please specify the number of days"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

echo "INFO: Check if Listener ${ListenerName} is running"

ps -ef | grep "tnslsnr ${ListenerName} -inherit" | grep -v grep >/dev/null 2>&1
if [ $? != 0 ]; then
        echo "INFO: Listener ${ListenerName} is not running ... will exit here"
        exit
fi

#-- Set the listener log file
ListenerLogFile=`lsnrctl status ${ListenerName} | grep "Listener Log File" | awk '{ print $4 }'`


echo "INFO: Start Listener Rotating for ${ListenerName}"

#-- Check listener log status
ListenerLogStatus=`lsnrctl <<-EOF  | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
show log_status
EOF`

if [ ${ListenerLogStatus} = "ON" ]; then
echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Tuning Listener Logging for ${ListenerName} OFF"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status off
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
 if [ ${ListenerLogStatus} = "OFF" ]; then
   echo "INFO: Rotating Listener Log ${ListenerLogFile} to ${ListenerLogFile}.${Date}"
        mv ${ListenerLogFile} ${ListenerLogFile}.${Date}
    echo "INFO: Turning on Listener Logging for ${ListenerName}"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status on
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Rotated successfully Listener Log for ${ListenerName}"
 fi
fi

echo "INFO: Starting cleanup of old listener log files"
echo "INFO: Will delete the following log files"
ListenerLogDirectory=`dirname $ListenerLogFile`
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print | xargs rm -f

echo "INFO: Finished cleanup"

#-- EOF
Conclusion

If you run that rotate listener log script at a time (e.g. in the middle of the night) when you expect minimal activity on the DB, you can minimize the chance of losing listener log entries even further. It could, for example, be scheduled via cron, as sketched below.
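
A possible crontab entry for the oracle user (the script location and the 7-day retention are assumptions):

# crontab -e: rotate the listener log every night at 03:00, keep 7 days of rotated logs
0 3 * * * /home/oracle/rotate_listener.sh LISTENER 7 >> /home/oracle/rotate_listener.cron.log 2>&1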

 


Oracle 12.2 – How to run a consistent full database export with dbms_datapump and parallel degree of 8

Tue, 2017-06-13 03:58

Nowadays, since the cloud is becoming more and more important, the PL/SQL APIs become more and more important too. Fortunately, Oracle has quite a lot of them. E.g. how do you run a Data Pump export if you have no ssh connectivity to the server? You could use the old exp tool, which is still available even with Oracle 12.2, or you can use DBMS_DATAPUMP. The Data Pump API is quite well documented in the following books:

Database Utilities
https://docs.oracle.com/database/122/SUTIL/using-ORACLE_DATAPUMP-api.htm#SUTIL600

Database PL/SQL Packages and Types Reference
https://docs.oracle.com/database/122/ARPLS/DBMS_DATAPUMP.htm#ARPLS66050

But you might find some useful stuff in $ORACLE_HOME/rdbms/admin/dbmsdp.sql as well.

In this little how-to, I would like to show how to create a consistent full database export (parallel 8) with
the Data Pump API.

There are only a few steps involved to get the job done.

1. Create a directory and grant the necessary privileges to user HR
2. Grant the DATAPUMP_EXP_FULL_DATABASE role to user HR
3. Execute the Data Pump job with DBMS_DATAPUMP
4. Monitor the Data Pump job
5. Optionally, do some cleanup

 

1.) Let’s start with the directory.

SQL> CREATE OR REPLACE DIRECTORY DATAPUMP_DIR AS '/u01/app/oracle/admin/DBIT122/dpdump';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY DATAPUMP_DIR TO HR;

Grant succeeded.

2.) Now we grant the DATAPUMP_EXP_FULL_DATABASE role to the HR user

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO HR;

Grant succeeded.

Please be aware that the DATAPUMP_EXP_FULL_DATABASE role affects only export operations. It allows the user HR to run these operations:

  • Perform the operation outside of the scope of their schema
  • Monitor jobs that were initiated by another user
  • Export objects (for example, TABLESPACE definitions) that unprivileged users cannot reference

Without this role, you might run into the following error when doing a full export:

ERROR at line 1:
ORA-31631: privileges are required
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6844
ORA-06512: at line 6

 

3.) Now it’s time to run the Data Pump job. Be aware that for a consistent export, you need to specify FLASHBACK_TIME or FLASHBACK_SCN. In my case, I use FLASHBACK_TIME and set it to the current SYSTIMESTAMP.

Ok. Let’s give it a try.

SQL> connect hr/hr
Connected.

SQL> @exp_datapump.sql
SQL> declare
  2        l_datapump_handle    NUMBER;  -- Data Pump job handle
  3        l_datapump_dir       VARCHAR2(20) := 'DATAPUMP_DIR';  -- Data Pump Directory
  4        l_status             varchar2(200); -- Data Pump Status
  5    BEGIN
  6        l_datapump_handle := dbms_datapump.open(operation => 'EXPORT',  -- operation = EXPORT, IMPORT, SQL_FILE
  7                                                job_mode =>'FULL',  -- job_mode = FULL, SCHEMA, TABLE, TABLESPACE, TRANSPORTABLE
  8                                                job_name => 'DBIT122 EXPORT JOB RUN 003',  -- job_name = NULL (default) or: job name (max 30 chars)
  9                                                version => '12'); -- version = COMPATIBLE (default), LATEST (dbversion), a value (11.0.0 or 12)
 10
 11            dbms_datapump.add_file(handle    => l_datapump_handle,
 12                               filename  => 'exp_DBIT122_%U.dmp',
 13                               directory => l_datapump_dir);
 14
 15        dbms_datapump.add_file(handle    => l_datapump_handle,
 16                               filename  => 'exp_DBIT122.log' ,
 17                               directory => l_datapump_dir ,
 18                               filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);
 19
 20        dbms_datapump.set_parameter(l_datapump_handle,'CLIENT_COMMAND','Full Consistent Data Pump Export of DBIT122 with PARALLEL 8');
 21
 22            dbms_datapump.set_parameter(l_datapump_handle,'FLASHBACK_TIME','SYSTIMESTAMP');
 23
 24        dbms_datapump.set_parallel(l_datapump_handle,8);
 25
 26        dbms_datapump.start_job(handle => l_datapump_handle);
 27
 28        dbms_datapump.wait_for_job(handle => l_datapump_handle,
 29                                   job_state => l_status );
 30
 31        dbms_output.put_line( l_status );
 32
 33        end;
 34  /

PL/SQL procedure successfully completed.

SQL>

4.) In another window, you might want to monitor the status of your export job.

SQL> r
  1  select owner_name, job_name, rtrim(operation) "OPERATION",
  2         rtrim(job_mode) "JOB_MODE", state, attached_sessions
  3    from dba_datapump_jobs
  4   where job_name not like 'BIN$%'
  5*  order by 1,2

OWNER_NAME JOB_NAME                         OPERATION    JOB_MODE     STATE        ATTACHED_SESSIONS
---------- -------------------------------- ------------ ------------ ------------ -----------------
HR         DBIT122 EXPORT JOB RUN 003       EXPORT       FULL         EXECUTING                    1

Cool. If the job finished successfully, you will see 8 dump files, because we specified exp_DBIT122_%U.dmp as the file name, and one log file.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/dpdump/ [DBIT122] ls -l
total 4752
-rw-r----- 1 oracle oinstall  630784 Jun 13 10:29 exp_DBIT122_01.dmp
-rw-r----- 1 oracle oinstall 3321856 Jun 13 10:29 exp_DBIT122_02.dmp
-rw-r----- 1 oracle oinstall  180224 Jun 13 10:29 exp_DBIT122_03.dmp
-rw-r----- 1 oracle oinstall   57344 Jun 13 10:29 exp_DBIT122_04.dmp
-rw-r----- 1 oracle oinstall  430080 Jun 13 10:28 exp_DBIT122_05.dmp
-rw-r----- 1 oracle oinstall   20480 Jun 13 10:29 exp_DBIT122_06.dmp
-rw-r----- 1 oracle oinstall   28672 Jun 13 10:29 exp_DBIT122_07.dmp
-rw-r----- 1 oracle oinstall  176128 Jun 13 10:28 exp_DBIT122_08.dmp
-rw-r--r-- 1 oracle oinstall   11966 Jun 13 10:29 exp_DBIT122.log

5.) Finally, you might want to do some cleanup, in case you don’t need the dump files and the log files anymore. Or you start your export job with the REUSE_DUMPFILES=YES option, which overwrites the destination dump files if they exist. In case you want to do the cleanup manually, you can use the UTL_FILE package (a small loop that does the same thing is sketched after the calls below).

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122.log' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_01.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_02.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_03.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_04.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_05.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_06.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_07.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_08.dmp' );

PL/SQL procedure successfully completed.

SQL>
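
Instead of removing each file individually, a small sketch like the following could loop over the generated files (this is just an illustration; the file name pattern matches the %U substitution used above):

sqlplus -s hr/hr <<'EOF'
begin
  -- remove the log file and the 8 dump files created by the %U substitution
  utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122.log');
  for i in 1..8 loop
    utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122_' || lpad(i, 2, '0') || '.dmp');
  end loop;
end;
/
EOF
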
Conclusion

The PL/SQL APIs become more and more important, especially in cloud environments. From my point of view, it makes quite a lot of sense to take a closer look at one or the other. DBMS_DATAPUMP in particular is an important one for moving data around.

 

 

 


12cR2 PDB refresh as a poor-man standby?

Mon, 2017-06-12 15:04
Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and then can synchronize a copy of the datafiles. I do not recommend to use it for disaster recovery in any production environment yet. Even if I’m using only supported features, those features were not designed for this usage, and are quite new and not stable yet. Disaster Recovery must use safe and proven technologies and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:
CapturePoorManSBY

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one pluggable database, PRDPDB:

21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.
 
21:36:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO

I need a user there to be able to remote clone from it:

21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:

21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has the database created on /acfs, which is an ACFS filesystem. We will see the reason later, when we need to create a PDB snapshot copy. Any filesystem where you can use a PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command and can be run in 12cR2 with the source PRDPDB still opened read write:


21:36:46 SQL> create pluggable database SBYPDB from PRDPDB@CDB1A
21:36:46 2 file_name_convert=('/u01/oradata/CDB1A/PRDPDB','/acfs/oradata/CDB1/SBYPDB')
21:36:46 3 refresh mode every 1 minutes;
 
Pluggable database created.

The REFRESH MODE is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every minute, which means that I expect at most a one-minute gap between PRDPDB and SBYPDB data, plus the additional time to apply that minute of redo, of course. The refresh settings can be checked from the standby CDB, as sketched below.
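
A minimal sketch of such a check (DBA_PDBS exposes the refresh mode and interval in 12.2; the connection details are the ones used in this post):

sqlplus -s sys/oracle@//192.168.78.106:1522/CDB1 as sysdba <<'EOF'
-- refresh mode and interval (in minutes) of the refreshable clone
select pdb_name, refresh_mode, refresh_interval
from   dba_pdbs
where  pdb_name = 'SBYPDB';
EOF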

Activity on the source

I will simulate a crash of the primary server and a failover to the standby while transactions are running. I’ll run this activity on the SCOTT.EMP table:

21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.
 
21:39:04 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 17-nov-1981 00:00:00 5000

I’m now updating the date and incrementing the number each second.

21:39:09 SQL> exec for i in 1..150 loop update emp set hiredate=sysdate, sal=sal+1; dbms_lock.sleep(1); commit; end loop
 
PL/SQL procedure successfully completed.

Here is the latest data on the primary server:

21:41:39 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

Crash the primary

The primary server is not supposed to be accessible in case of Disaster Recovery, so I’m crashing it:

21:41:39 SQL> disconnect
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
21:41:39 SQL> connect / as sysdba
Connected.
21:41:39 SQL> shutdown abort
ORACLE instance shut down.

Activate the standby

The datafiles are up to date, with a maximum one-minute gap, and all I want is to open it and have the application re-connect to it. However, a refreshable clone can only be opened read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:

21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:45 SQL> alter pluggable database SBYPDB refresh mode none;
alter pluggable database SBYPDB refresh mode none
*
ERROR at line 1:
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested
in connect descriptor
ORA-17629: Cannot connect to the remote database server

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will just be able to end refresh mode without connecting to source, accepting to lose the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and I can only open read-only:
21:41:45 SQL> alter pluggable database SBYPDB open read-only;
Pluggable database altered.
 
21:41:47 SQL> alter session set container=SBYPDB;
Session altered.
21:41:47 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:41:47 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea to get the PDB read-write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I’ve set up my standby CDB on ACFS. Here I’m cloning the SBYPDB to the same name as the primary: PRDPDB

21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.
 
21:41:47 SQL> create pluggable database PRDPDB from SBYPDB file_name_convert=('SBYPDB','PRDPDB') snapshot copy;
Pluggable database created.
 
21:42:03 SQL> alter pluggable database PRDPDB open;
Pluggable database altered.

I have now my new PRDPDB opened read write with the latest data that was refreshed:

21:42:26 SQL> alter session set container=PRDPDB;
Session altered.
 
21:42:26 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:42:26 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

I’m running on a snapshot here. I can stay like that, or plan to move it out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is the online pluggable database relocate. Anyway, running the database in a snapshot is sufficient to run production after a disaster recovery, and I can remove SBYPDB so that there is no need to copy the ACFS extents on future writes.

Keep the snapshot

At that point, you should tell me that I cannot snapshot copy a PDB within the same CDB here because I’m in Standard Edition. And that’s right: you can create only one PDB there and you are supposed to get a ‘feature not enabled’ error. But I was able to do it here in my lab, with a small trick to invert the CON_ID sequence:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO
4 SBYPDB MOUNTED

A remote snapshot clone should be possible as well. But there’s another licensing issue here: using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires a different snapshot technology than the one I’m using here in my lab.

If you don’t fear violating the single-tenant rules, you may prefer to keep SBYPDB for a while. Imagine that you are able to restart the crashed server for a few minutes; then you can do a last refresh of SBYPDB to have a look at the transactions that were lost in the one-minute window.

I re-start the crashed CDB:

21:42:26 SQL> connect / as sysdba
Connected to an idle instance.
21:42:27 SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 356519592 bytes
Database Buffers 486539264 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.

and now, on my standby server, I can finally stop the refresh mode:

21:42:51 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:42:52 SQL> alter pluggable database SBYPDB close;
Pluggable database altered.
 
21:42:52 SQL> alter session set container=SBYPDB;
Session altered.
 
21:42:52 SQL> alter pluggable database SBYPDB refresh mode none;
Pluggable database altered.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:

21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.
 
21:43:24 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).

Unplug/Plug

I was not very satisfied by the snapshot clone because of the limitations in Standard Edition, which is where this solution may be interesting. I have the datafiles but cannot open SBYPDB read-write. I tried to unplug it, but cannot because of the refresh mode:

SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';
 
Error starting at line : 1 in command -
alter pluggable database SBYPDB unplug into '/tmp/tmp.xml'
Error report -
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/acfs/oradata/CDB1/SBYPDB/undotbs01.dbf'
01113. 00000 - "file %s needs media recovery"
*Cause: An attempt was made to online or open a database with a file that
is in need of media recovery.
*Action: First apply media recovery to the file.

I know that I don’t need more recovery. So let’s unplug it in another way:

SQL> alter pluggable database SBYPDB open read only;
Pluggable database SBYPDB altered.
 
SQL> exec dbms_pdb.describe('/tmp/tmp.xml','SBYPDB');
PL/SQL procedure successfully completed.

Then drop it but keep the datafiles:

SQL> alter pluggable database SBYPDB close;
Pluggable database SBYPDB altered.
 
SQL> drop pluggable database SBYPDB;
Pluggable database SBYPDB dropped.

And plug it back:

SQL> create pluggable database SBYPDB using '/tmp/tmp.xml';
Pluggable database SBYPDB created.
 
SQL> alter pluggable database SBYPDB open;
Pluggable database SBYPDB altered.

Here it is. This takes a bit longer than the snapshot solution but still ready to activate the ‘standby’ PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all editions, which is very good. Here, with ALTER PLUGGABLE DATABASE … REFRESH, we have log shipping and apply, for free in Standard Edition, at PDB level. And I’ve tested two ways to open this standby PDB in case of disaster recovery. I’m using only supported features here, but be careful that those features were not designed for this goal. The normal operations on a refreshable clone require that the remote CDB is accessible. But there are workarounds here, because you can describe/drop/plug or snapshot clone from a PDB that you can open read-only.

 


Install Apache Kafka on Linux

Fri, 2017-06-09 06:37


What is Apache Kafka?

No, Kafka is not only the famous author (en.wikipedia.org/wiki/Franz_Kafka): it’s an open-source distributed pub-sub messaging system with powerful capabilities like scalability and fault tolerance. It’s also a (near real-time) stream processing platform for streaming data sources. The design of Apache Kafka is strongly influenced by commit logs. Apache Kafka was originally developed by LinkedIn and was subsequently open-sourced in early 2011.

The installation is pretty simple but needs to be done rigorously.

Binaries installation

    • Prerequisites
      Get a Linux server (I have chosen CentOS 7.3.1611); it can run on a small configuration (1 GB of memory minimum).
      Connect as a sudo user or root
    • Update your system and reboot
      [root@osboxes ~]# yum update
      Loaded plugins: fastestmirror, langpacks
      Loading mirror speeds from cached hostfile
       * base: mirror.switch.ch
       * epel: mirror.uni-trier.de
       * extras: mirror.switch.ch
       * updates: mirror.switch.ch
      No packages marked for update
  • Install the latest openjdk and set your environment
    [root@osboxes ~]# yum install java-1.8.0-openjdk
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
     * base: mirror.switch.ch
     * epel: mirror.imt-systems.com
     * extras: mirror.switch.ch
     * updates: mirror.switch.ch
    Package 1:java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64 already installed and latest version
    Nothing to do
    
    #Check it:
    [root@osboxes ~]# java -version
    openjdk version "1.8.0_131"
    OpenJDK Runtime Environment (build 1.8.0_131-b12)
    OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
    
    
    #Update your bash_profile:
    export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
    export JRE_HOME=/usr/lib/jvm/jre
    # and source your profile:
    [root@osboxes ~]# . ./.bash_profile
    [root@osboxes ~]# echo $JAVA_HOME
    /usr/lib/jvm/jre-1.8.0-openjdk
    [root@osboxes ~]# echo $JRE_HOME
    /usr/lib/jvm/jre
    
  • The Confluent Platform is an open source platform that contains all the components you need
    to create a scalable data platform built around Apache Kafka.
    Confluent Open Source is freely downloadable.
    Install the public key from Confluent

    rpm --import http://packages.confluent.io/rpm/3.2/archive.key
  • Add the confluent.repo to your /etc/yum.repos.d with this content
    [Confluent.dist]
    name=Confluent repository (dist)
    baseurl=http://packages.confluent.io/rpm/3.2/7
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
    
    [Confluent]
    name=Confluent repository
    baseurl=http://packages.confluent.io/rpm/3.2
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
  • Clean your yum caches
    yum clean all
  • And finally install the open source version of Confluent
    yum install confluent-platform-oss-2.11
    Transaction Summary
    ============================================================================================================================================================================
    Install  1 Package (+11 Dependent packages)
    
    Total download size: 391 M
    Installed size: 446 M
    Is this ok [y/d/N]: y
    Downloading packages:
    (1/12): confluent-common-3.2.1-1.noarch.rpm                                                                                                          | 2.0 MB  00:00:06
    (2/12): confluent-camus-3.2.1-1.noarch.rpm                                                                                                           |  20 MB  00:00:28
    (3/12): confluent-kafka-connect-elasticsearch-3.2.1-1.noarch.rpm                                                                                     | 4.3 MB  00:00:06
    (4/12): confluent-kafka-2.11-0.10.2.1-1.noarch.rpm                                                                                                   |  38 MB  00:00:28
    (5/12): confluent-kafka-connect-jdbc-3.2.1-1.noarch.rpm                                                                                              | 6.0 MB  00:00:07
    (6/12): confluent-kafka-connect-hdfs-3.2.1-1.noarch.rpm                                                                                              |  91 MB  00:01:17
    (7/12): confluent-kafka-connect-s3-3.2.1-1.noarch.rpm                                                                                                |  92 MB  00:01:18
    (8/12): confluent-kafka-rest-3.2.1-1.noarch.rpm                                                                                                      |  16 MB  00:00:16
    (9/12): confluent-platform-oss-2.11-3.2.1-1.noarch.rpm                                                                                               | 6.7 kB  00:00:00
    (10/12): confluent-rest-utils-3.2.1-1.noarch.rpm                                                                                                     | 7.1 MB  00:00:06
    (11/12): confluent-schema-registry-3.2.1-1.noarch.rpm                                                                                                |  27 MB  00:00:23
    (12/12): confluent-kafka-connect-storage-common-3.2.1-1.noarch.rpm                                                                                   |  89 MB  00:01:08
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                                                       2.2 MB/s | 391 MB  00:03:00
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : confluent-common-3.2.1-1.noarch                                                                                                                         1/12
      Installing : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   2/12
      Installing : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Installing : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     4/12
      Installing : confluent-schema-registry-3.2.1-1.noarch                                                                                                                5/12
      Installing : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               6/12
      Installing : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    7/12
      Installing : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Installing : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                             9/12
      Installing : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Installing : confluent-camus-3.2.1-1.noarch                                                                                                                         11/12
      Installing : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                             12/12
      Verifying  : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   1/12
      Verifying  : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                              2/12
      Verifying  : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Verifying  : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    4/12
      Verifying  : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               5/12
      Verifying  : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     6/12
      Verifying  : confluent-camus-3.2.1-1.noarch                                                                                                                          7/12
      Verifying  : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Verifying  : confluent-schema-registry-3.2.1-1.noarch                                                                                                                9/12
      Verifying  : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Verifying  : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                            11/12
      Verifying  : confluent-common-3.2.1-1.noarch                                                                                                                        12/12
    
    Installed:
      confluent-platform-oss-2.11.noarch 0:3.2.1-1
    
    Dependency Installed:
      confluent-camus.noarch 0:3.2.1-1                           confluent-common.noarch 0:3.2.1-1                           confluent-kafka-2.11.noarch 0:0.10.2.1-1
      confluent-kafka-connect-elasticsearch.noarch 0:3.2.1-1     confluent-kafka-connect-hdfs.noarch 0:3.2.1-1               confluent-kafka-connect-jdbc.noarch 0:3.2.1-1
      confluent-kafka-connect-s3.noarch 0:3.2.1-1                confluent-kafka-connect-storage-common.noarch 0:3.2.1-1     confluent-kafka-rest.noarch 0:3.2.1-1
      confluent-rest-utils.noarch 0:3.2.1-1                      confluent-schema-registry.noarch 0:3.2.1-1
    
    Complete!

OK, the binaries are installed now. The next operation will be to configure and launch Zookeeper and Kafka itself!

    • First, take a look at the Zookeeper configuration:
      [root@osboxes kafka]# cat /etc/kafka/zookeeper.properties
      # Licensed to the Apache Software Foundation (ASF) under one or more
      # contributor license agreements.  See the NOTICE file distributed with
      # this work for additional information regarding copyright ownership.
      # The ASF licenses this file to You under the Apache License, Version 2.0
      # (the "License"); you may not use this file except in compliance with
      # the License.  You may obtain a copy of the License at
      #
      #    http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      # the directory where the snapshot is stored.
      dataDir=/var/lib/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
    • Don’t change the configuration file (the default values are okay to start with) and launch Zookeeper
      /usr/bin/zookeeper-server-start /etc/kafka/zookeeper.properties
      ...
      [2017-06-08 14:05:02,051] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
    • Keep the session with Zookeeper and open a new terminal for the Kafka part
      /usr/bin/kafka-server-start /etc/kafka/server.properties
      ...
      [2017-06-08 14:11:31,333] INFO Kafka version : 0.10.2.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,334] INFO Kafka commitId : 80ff5014b9e74a45 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,335] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
      [2017-06-08 14:11:31,350] INFO Waiting 10062 ms for the monitored broker to finish starting up... (io.confluent.support.
      metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Monitored broker is now ready (io.confluent.support.metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Starting metrics collection from monitored broker... (io.confluent.support.metrics.Metric
      sReporter)
    • Like Zookeeper, leave the Kafka terminal open and launch a new session for the topic creation.

 Topic creation

  • Messages in Kafka are categorized into topics; a topic is comparable to a database table or a directory in a file system.
    First, we are going to create a new topic.

    [root@osboxes ~]# /usr/bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic dbi
    Created topic "dbi".
  • Check if the topic has actually been created
    [root@osboxes ~]# /usr/bin/kafka-topics --list --zookeeper localhost:2181
    dbi
  • Nice, we can now produce some messages using the topic “dbi”

    [root@osboxes ~]# kafka-console-producer --broker-list localhost:9092 --topic dbi
    be passionate
    be successful
    be responsible
    be sharing
  • Open a new terminal and act as a consumer using the console consumer
    /usr/bin/kafka-console-consumer --zookeeper localhost:2181 --topic dbi --from-beginning
    be passionate
    be successful
    be responsible
    be sharing
  • Et voilà! The messages produced with the producer now appear in the consumer window. You can type a new message in the producer console and it will be displayed immediately in the other terminal. If you want to stop all the consoles, you can press Ctrl-C.

Now the most difficult part is still to be done: configuring Kafka with multiple producers and consumers within a complex broker topology, as sketched below.

cluster_architecture
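
As a teaser, here is a minimal sketch of what the configuration of a second broker on the same host could look like; the file name and values are hypothetical, only standard Kafka broker properties are used:

    # /etc/kafka/server-1.properties (hypothetical second broker on the same host)
    # broker.id must be unique within the cluster
    broker.id=1
    # a dedicated port and log directory are needed when several brokers share one machine
    listeners=PLAINTEXT://:9093
    log.dirs=/var/lib/kafka-logs-1
    # all brokers register in the same Zookeeper ensemble
    zookeeper.connect=localhost:2181

You would then start it in its own terminal with /usr/bin/kafka-server-start /etc/kafka/server-1.properties, exactly like the first broker.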

 

Cet article Install Apache Kafka on Linux est apparu en premier sur Blog dbi services.

SQL Server 2017 high availability and Resumable Online Indexes

Mon, 2017-06-05 13:19

Ensuring data availability is an important part of the high availability design. SQL Server AlwaysOn features, including SQL Server FCIs and availability groups, address some aspects of the problem, but we may also rely on the online operation features to maximize data availability. Indeed, for some highly critical workloads, offline operations during the maintenance phase are not permitted and may even call the entire architecture into question.

But have you ever faced an outage of your primary replica leading to a failover to the secondary replica during the database maintenance window? Yes, your architecture is designed to behave this way for sure, but let’s say you were in the middle of an index rebuild operation on a big table and everything is now rolled back. In this case, you will probably have to wait for the next maintenance window to initiate another rebuild operation, and this time from the start …

Fortunately, the new Resumable Online Indexes feature of SQL Server 2017 comes to the rescue and seems promising for addressing such situations. Some limitations are still around and are well documented in the Microsoft documentation, but I guess Microsoft will work hard to remove them in the future.

Let’s play a little bit with the Resumable Online Indexes on my AG environment that includes two replicas with cluster_type = NONE meaning we don’t rely on an underlying orchestrator to handle automatic failover of the AG resources. In this context, it will be sufficient to demonstrate how Resumable Online Indexes work.

blog 123 - 01 - resumable onlline index - scenario_

My initial configuration is as follows:

blog 123 - 02 - resumable onlline index - ag win

I used the AdventureWorks2016 database with a bigTransactionHistory table generated from the Adam Machanic script.

select 
	o.name as table_name,
	i.name as index_name,
	p.index_id,
	au.type_desc,
	au.used_pages,
	p.rows
from 
	sys.allocation_units as au
join 
	sys.partitions as p on au.container_id = p.hobt_id
join 
	sys.objects as o on o.object_id = p.object_id
left join 
	sys.indexes as i on i.object_id = p.object_id
						and i.index_id = p.index_id
where p.object_id = object_id('bigTransactionHistory');

 

blog 123 - 03 - resumable onlline index - bigTransactionHistory

Let’s then initiate a rebuild of the pk_bigTransactionHistory index by using the new RESUMABLE option as follows:

ALTER INDEX pk_bigTransactionHistory 
ON bigTransactionHistory 
REBUILD
WITH (ONLINE = ON, RESUMABLE = ON);
GO

 

By using this option, I made the rebuild phase “resumable” and I’m now able to perform additional actions like pausing or aborting the index rebuild operation. I may also use the MAX_DURATION parameter to set the duration (in minutes) after which the operation is paused. We may easily identify resumable sessions running on the SQL Server instance by using the sys.dm_exec_requests DMV and the new is_resumable column.

blog 123 - 04 - resumable onlline index - sys_dm_exec_requests_is_resumable
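
For illustration, here is a minimal sketch of how the running rebuild could be paused or aborted from a shell with sqlcmd; the server name is a placeholder and the authentication options are omitted:

# pause the resumable rebuild; it keeps its progress and can be resumed later
sqlcmd -S localhost -d AdventureWorks2016 -Q "ALTER INDEX pk_bigTransactionHistory ON bigTransactionHistory PAUSE;"
# or abort it completely; the progress is lost and the operation cannot be resumed
sqlcmd -S localhost -d AdventureWorks2016 -Q "ALTER INDEX pk_bigTransactionHistory ON bigTransactionHistory ABORT;"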

Let’s initiate a manual failover while the index is being rebuilt by SQL Server on the primary replica. During the outage of the former primary replica, I faced the following error message:

blog 123 - 05 - resumable onlline index - error during failover

A typical error message for the situation … but the most interesting part comes now. If we go through the new sys.index_resumable_operations DMV, we may notice that our index rebuild operation is paused.

SELECT 
	iro.last_max_dop_used,
	iro.sql_text,
	iro.start_time,
	iro.last_pause_time,
	iro.total_execution_time,
	iro.page_count, 
	iro.percent_complete
FROM  
	sys.index_resumable_operations as iro
JOIN
	sys.objects AS o ON o.object_id = iro.object_id

 

blog 123 - 06 - resumable onlline index - resume op dmv

From the above output, we may extract interesting column values:

  • percent_complete: the completion percentage at the moment the operation was paused
  • page_count: the size of the index structure being built at the moment the operation was paused
  • last_pause_time: self-explanatory
  • last_max_dop_used: the MAXDOP value used during the last rebuild run. It is interesting to notice that we may change this value for the same operation between pause/resume cycles.

Before resuming the index operation, let’s have a brief look at the transaction log space used. I asked myself some questions about the transaction behavior when the index operation is paused: does a pause state have an impact on the transaction log? Do the VLFs touched by this operation remain active until the index is completely rebuilt? Let’s answer these questions with the following test.

A first look at the transaction log space used shows that the transaction log contains records that must be backed up.

use AdventureWorks2016;

select 
	total_log_size_in_bytes / 1024 / 1024 / 1024 as total_log_size_GB,
	used_log_space_in_bytes / 1024 / 1024  as used_log_space_MB,
	log_space_in_bytes_since_last_backup / 1024 / 1024 as log_space_MB_since_last_backup,
	used_log_space_in_percent
from sys.dm_db_log_space_usage;
go

 

blog 123 - 07 - resumable onlline index - tlog

According to the previous result, let’s perform a log backup …

backup log AdventureWorks2016
to disk = 'NUL' 
with stats = 10;
go

blog 123 - 08 - resumable onlline index - tlog

Well, pausing an index operation does not keep an active transaction open that would impact transaction log retention, and after a few thoughts it seems obvious that it has to work this way.

What about storage? The resumable option is available only with online index operations, meaning SQL Server must maintain the corresponding hidden structures for as long as it takes to rebuild the underlying index. According to the BOL:

No extra resources are required for resumable index rebuild except for: additional space required to keep the index being built, including the time when the index is paused.

Obviously, maintaining such a structure may have a huge impact on your workload depending on your context. Out of curiosity, I went through the system views to see those hidden structures as follows:

select 
	o.name as table_name,
	i.name as index_name,
	p.index_id,
	au.type_desc,
	SUM(au.used_pages) as total_used_pages,
	SUM(p.rows) as total_rows,
	COUNT(*) as nb_partitions
from 
	sys.allocation_units as au
join 
	sys.partitions as p on au.container_id = p.hobt_id
join 
	sys.objects as o on o.object_id = p.object_id
left join 
	sys.indexes as i on i.object_id = p.object_id
						and i.index_id = p.index_id
where 
	p.object_id = object_id('bigTransactionHistory')
group by 
	o.name, i.name, p.index_id, au.type_desc
order by 
	o.name, p.index_id;
go

 

blog 123 - 09 - resumable onlline index - hidden index structure

Regarding the second record, a simple calculation (9261 * 100. / 190150) confirms the structure is only 4.8 percent of the size of the underlying clustered index. We may retrieve this result from the first output of the new sys.index_resumable_operations system view. For the third one, my guess is that it corresponds to the temporary mapping index used by SQL Server to determine which records to delete in the new indexes being built when rows in the underlying table are updated or deleted.

Here is an execution plan sample of an update query against the bigTransactionHistory table. We may notice DELETE / UPDATE operations on a “hidden” nonclustered index related to this special index, highlighted in red.

blog 123 - 10 - resumable onlline index - execution plan

Let’s go ahead and resume the pending index operation by using the new RESUME option. We may also add the MAX_DURATION option to guarantee the rebuild operation goes back to a paused state when it reaches the maintenance window limit, so that it cannot hurt the daily business workload in terms of resources. This is an option we will definitely consider adding to the next version of our DMK maintenance tool for SQL Server.

ALTER INDEX pk_bigTransactionHistory 
ON bigTransactionHistory RESUME WITH (MAX_DURATION = 120 MINUTES);
GO

In this blog, we have only scratched the surface of this new maintenance option and its capabilities. Other articles will probably come in the future.

Stay tuned!

 

 

Cet article SQL Server 2017 high availability and Resumable Online Indexes est apparu en premier sur Blog dbi services.

Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF)

Mon, 2017-06-05 06:49

The listener log file contains a lot of very useful information, like the program which was used for the connection, the IP address the connection is coming from, the OS user which was used on the client, and much more.

05-JUN-2017 12:36:19 * service_update * DBIT122 * 0
05-JUN-2017 12:36:19 * (CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(CID=(PROGRAM=sqlplus@dbidg01)(HOST=dbidg01)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.201)(PORT=42619)) * establish * DBIT122_SITE1_DGMGRL * 0
05-JUN-2017 12:36:19 * (CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(CID=(PROGRAM=sqlplus@dbidg01)(HOST=dbidg01)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.201)(PORT=42620)) * establish * DBIT122_SITE1_DGMGRL * 0

However, it does not only contain successful connections; it also shows when connections have been rejected because of TCP.VALIDNODE_CHECKING or any type of TNS error. So, why not use it for auditing? And what about the performance overhead of listener logging?

Let’s start with the performance overhead. I am running 4 types of tests here.

1. Performance with no listener logging (100 connections)
2. Performance with a small listener.log (100 connections)
3. Performance with big listener.log, close to 4G (100 connections)
4. Performance with a full listener.log, exactly 4G (100 connections)

 

Test 1: Performance with no listener logging (100 connections)
oracle@dbidg01:/home/oracle/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.360s
user    0m1.065s
sys     0m0.495s
Test 2: Performance with a small listener.log (100 connections)
oracle@dbidg01:/home/oracle/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.401s
user    0m1.049s
sys     0m0.511s
Test 3: Performance with big listener.log, close to 4G (100 connections)
oracle@dbidg01:/home/oracle/tmp/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
SQLPlus count 3
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.766s
user    0m1.110s
sys     0m0.522s
Test 4: Performance with a full listener.log, exactly 4G (100 connections)
oracle@dbidg01:/home/oracle/tmp/ [DBIT122] time ./performance_listener.sh
SQLPlus count 1
SQLPlus count 2
SQLPlus count 3
...
...
SQLPlus count 99
SQLPlus count 100

real    0m3.430s
user    0m1.068s
sys     0m0.501s

As you can see in the results, sqlplus connections without listener logging are the fastest, and the bigger the file gets, the slower the connections are. But wait a second. What’s going on with test 4? As soon as the listener log is full, connections are faster again. The reason test 4 is faster again is that the listener is no longer logging. As soon as it reaches the 4G limit, which is still the case with Oracle 12.2, the listener does not crash as it did in some older versions, but the log entries go to /dev/null. So, I lose all my auditing information for the connections made after the 4G limit was reached.

How do we overcome this issue? The answer is to rotate the listener log. But how do we do it? We can’t simply move the old listener log away and create a new empty one, because the file descriptor is still open and we would create a huge mess.

We could stop the listener, rotate the listener log and start the listener again. A little better, but while the listener is stopped, no new connections are possible, which is also not a good idea.

From my point of view, the best solution is to stop listener logging online as soon as it hits 1G, rotate the listener log and start listener logging again, like in the following example:

lsnrctl <<-EOF
set current_listener $ListenerName
set log_status off
EOF

mv ${ListenerLogFile} ${ListenerLogFile}.${Date}

lsnrctl <<-EOF
set current_listener $ListenerName
set log_status on
EOF

Now you have the advantage that the log rotation is an online operation and you don’t create any mess with open file descriptors. And if you rotate the listener log before it reaches 1G, you are also less likely to run into performance issues.
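
To put it all together, here is a minimal sketch of a rotation script with a size check; the listener name, the log location and the 1G threshold are assumptions to adapt to your environment:

#!/bin/bash
# Minimal rotation sketch - all values are examples
ListenerName=LISTENER
ListenerLogFile=$ORACLE_HOME/network/log/listener.log     # typical location when ADR is disabled
Date=$(date +%Y%m%d_%H%M%S)
MaxSize=$((1024*1024*1024))                                # rotate at ~1G, well below the 4G limit

# nothing to do as long as the log is below the threshold
[ "$(stat -c %s "$ListenerLogFile")" -lt "$MaxSize" ] && exit 0

# switch logging off online
lsnrctl <<-EOF
set current_listener $ListenerName
set log_status off
EOF

# rotate the file while no file descriptor is open on it
mv ${ListenerLogFile} ${ListenerLogFile}.${Date}

# switch logging back on (the listener creates a new empty log file)
lsnrctl <<-EOF
set current_listener $ListenerName
set log_status on
EOF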

Conclusion

Take care of your listener log so that it does not hit the 4G file size limit; otherwise you might lose very important information, which will simply not be logged anymore. And do the listener log rotation correctly. ;-)

 

Cet article Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) est apparu en premier sur Blog dbi services.

12cR2 auditing all users with a role granted

Sun, 2017-06-04 11:03

12.1 introduced Unified Auditing where you define policies and then enable them. As with the traditional audit, you enable them for all users or for specific users. The unified auditing adds a syntax to audit all users except some listed ones. 12.2 adds a syntax to audit a group of users, defined by the role granted. This is the best way to enable a policy for a group of users, including those created later.

I create a simple policy, to audit logon and DBA role usage:

SQL> create audit policy DEMO_POLICY actions logon, roles DBA;
Audit POLICY created.

I create a new DBA user, USER1

SQL> create user USER1 identified by covfefe quota unlimited on USERS;
User USER1 created.
SQL> grant DBA to USER1;
Grant succeeded.

I want to enable the policy for this user because I want to audit all DBAs

SQL> audit policy DEMO_POLICY by USER1;
Audit succeeded.

I remove Audit records for this demo

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified,use_last_arch_timestamp=>false);
PL/SQL procedure successfully completed.

Let’s connect with this user and see what is audited:

SQL> connect USER1/covfefe@//localhost/PDB1
Connected.
 
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM

The logon and the select on a dictionary table (possible here thanks to the DBA role) have been audited because the policy is enabled for this user.

We have a new DBA and we create a new user for him:

SQL> create user USER2 identified by covfefe quota unlimited on USERS;
User USER2 created.
SQL> grant DBA to USER2;
Grant succeeded.

He connects and checks what is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM

Nothing is audited for this user. The DBA role usage is audited, but only for USER1.

Of course, we can add an audit statement for each user created for a DBA:

SQL> audit policy DEMO_POLICY by USER2;
Audit succeeded.

Then his new activity is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.52.338928000 PM

But for security reasons, we would like to be sure that any new user having the DBA role granted is audited.
Let’s try something else

SQL> noaudit policy DEMO_POLICY by USER1,USER2;
Noaudit succeeded.

We can simply audit all users:

SQL> audit policy DEMO_POLICY;
Audit succeeded.

But this is too much. Some applications constantly logon and logoff and we don’t want to have that in the audit trail.

SQL> noaudit policy DEMO_POLICY;
Noaudit succeeded.

We can still enable the policy for all users, and exempt those users we don’t want:

SQL> audit policy DEMO_POLICY except DEMO;
Audit succeeded.

Here is what is enabled, and this will audit all new users:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO DEMO_POLICY EXCEPT EXCEPT USER DEMO USER YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

But once again, this is not what we want.

SQL> noaudit policy DEMO_POLICY by DEMO;
Noaudit succeeded.
 
SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

Audit all users to whom roles are granted directly

In 12cR2 we have the possibility to do exactly what we want: audit all users having the DBA role granted:

SQL> audit policy DEMO_POLICY by users with granted roles DBA;
Audit succeeded.

This enables the audit for all users for whom the DBA role has been directly granted:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO_POLICY INVALID BY GRANTED ROLE DBA ROLE YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

The important thing is that a newly created user will be audited as long as he has the DBA role directly granted:

SQL> create user USER3 identified by covfefe quota unlimited on USERS;
User USER3 created.
SQL> grant DBA to USER3;
Grant succeeded.
 
SQL> connect USER3/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.17.915217000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.17.988151000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.117258000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.322716000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.345351000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.415117000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.439656000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.455274000 PM
Standard oracle VM104 pts/0 USER3 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.507496000 PM

This policy applies to all users having the DBA role, and gives the possibility to audit more than their DBA role usage: here I audit all logons from users having the DBA role.

So what?

We don’t use roles only to group privileges to grant. A role is usually granted to define groups of users: DBAs, application users, read-only application users, etc. Unified Auditing can define complex policies, combining the audit of actions, privileges, and roles. The 12.2 syntax allows enabling a policy for a specific group of users.

 

Cet article 12cR2 auditing all users with a role granted est apparu en premier sur Blog dbi services.

Prochaine édition des 24 HOP 2017 francophone

Tue, 2017-05-30 06:28

The next French-speaking edition of the 24 Hours of PASS 2017 will take place on June 29-30.

As a reminder, the format is simple: 24 free webinars in French, spread over 2 days from 07:00 to 18:00 GMT. The only requirement is to register for the sessions you will attend. Registration also gives you access to the video recording if you want to watch it again later.

This year there will again be something for every taste: monitoring, performance, Azure, BI, Big Data and machine learning, data modeling, high availability, open source, and the new features of the next SQL Server version!

For my part, I will have the privilege of presenting a session about the new high availability options with SQL Server in a mixed world (Windows and Linux) and in a “full Linux” world.

 

24HOP%20Website%20Banner%20French

Looking forward to seeing you there!

 

 

Cet article Prochaine édition des 24 HOP 2017 francophone est apparu en premier sur Blog dbi services.

New release model for JUL17 (or Oracle 17.3): RU and RUR

Tue, 2017-05-30 04:58
Updated June 5th

When reading about the new version numbering in SQL Developer, I took the risk of changing the title and guessing the future version number for Oracle Database: Oracle 17.3 for July (Q3) 2017. Of course, this is just a guess.

Updated June 10th

Confirming the previous guess, we start to see some bugs planned to be fixed in version 18.1, which is probably the January 2018 Release Update.
Capture18.1

News from DOAGDB17 – May 30th

Oracle Database software comes in versions: 10g in 2004, 11g in 2007, 12c in 2013
In between, there are releases: 10gR2 in 2005, 11gR2 in 2009, 12cR2 in 2017
In between, there are Patch Sets: the latest 11gR2 is 11.2.0.4 (2 years after 11.2.0.3) and 12cR1 is 12.1.0.2 (one year after 12.1.0.1)
Those are full installs: a full Oracle Home. It can be done in place or into a new Oracle Home, but it is a full install either way. There are a lot of changes in the system dictionary and you run catupgrd.sql to update it. It takes from 30 minutes to 1 hour on average, depending on the components and the system.

Their primary goal is to release features. You should test them carefully. For example, the In-Memory option came in the first Patch Set of 12cR1

This will change in 2017 with annual feature releases. Well, this looks like a rename of Patch Set.

All releases are supported for several years, with fixes (patches) provided for encountered issues: security, wrong results, hangs, crashes, etc. Rather than installing one-off patches, and requesting merges for them, Oracle supplies bundle patches: merged together, tested as a whole, cumulative, with a quarterly release frequency. Depending on the content and the platform, they are called Bundle Patches (on Windows), CPU (only security fixes), SPU (Security Patch Update, the new name for the CPU), PSU (Patch Set Update), Proactive Bundle Patch (a bit more than in the PSU)…
The names have changed, and so has the version numbering, which now includes the release date.
You apply patches to the Oracle Home with the OPatch utility and to the database dictionary with the new datapatch utility.
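
As an illustration, applying such a quarterly patch typically looks like the following sketch; the patch number and staging path are placeholders, and the authoritative steps are always in the patch README:

# stop the databases and the listener running from this home, then apply the patch to the Oracle Home
cd /u01/stage/12345678                  # hypothetical unzipped patch directory
$ORACLE_HOME/OPatch/opatch apply
# restart the databases, then apply the SQL changes to each database dictionary
$ORACLE_HOME/OPatch/datapatch -verbose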

Their primary goal is stability: fixing issues with a low risk of regression. The recommended minimum is the PSU; more fixes for known bugs are in the Proactive Bundle Patch.

This will change in 2017 with quarterly Release Updates. Well, this looks like a rename of PSU to RUR (Release Update Revision) and a rename of ProactiveBP as RU (Release Update).

The goal is to reduce the need to apply one-off patches on top of PSUs.

Here is all I know about it, as presented by Andy Mendelsohn at the DOAG Datenbank 2017 keynote:

IMG_3976

It is not common to have new announcements in the last quarter of the fiscal year. Thanks to DOAG for this keynote.

 

Cet article New release model for JUL17 (or Oracle 17.3): RU and RUR est apparu en premier sur Blog dbi services.

Oracle 12cR2: exchange partition deferred invalidation

Mon, 2017-05-29 14:36

In a previous post I introduced the new 12cR2 feature where some DDL operations can use the same rolling invalidation as what is done with dbms_stats. On tables, DDL deferred invalidation is available only for operations on partitions. Here is how it works for partition exchange.

Here is my session environment:

SQL> whenever sqlerror exit failure
SQL> alter session set nls_date_format='hh24:mi:ss';
Session altered.
SQL> alter session set session_cached_cursors=0;
Session altered.
SQL> alter session set optimizer_dynamic_sampling=0;
Session altered.
SQL> alter system set "_optimizer_invalidation_period"=5;
System SET altered.
SQL> show user
USER is "DEMO"

I create a partitioned table with one local index

SQL> create table DEMO (n number, p number) partition by list(p) (partition P1 values(1), partition P2 values(2));
Table DEMO created.
SQL> create index DEMO on DEMO(n) local;
Index DEMO created.

I create the table with same structure for exchange

SQL> create table DEMOX for exchange with table DEMO;
Table DEMOX created.
SQL> create index DEMOX on DEMOX(n);
Index DEMOX created.

The CREATE TABLE FOR EXCHANGE statement does not create the indexes, but for rolling invalidation we need them. Without the same indexes, immediate invalidation occurs.

In order to observe invalidation, I run queries on the partitioned table, some involving the partition I’ll exchange and some not. I also run a query on the table used for the exchange.

SQL> SELECT * FROM DEMO partition (P1);
no rows selected
SQL> SELECT * FROM DEMO partition (P2);
no rows selected
SQL> SELECT * FROM DEMO;
no rows selected
SQL> SELECT * FROM DEMOX;
no rows selected

Here are the cursors and some execution plans:

SQL> select sql_id,sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;
 
SQL_ID SQL_TEXT CHILD_NUMBER INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------ -------- ------------ ------------- ----- ----------- ---------- --------------- -------------- ---------------- ------------------
dd3ajp6k49u1d SELECT * FROM DEMO 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
1ft329rx910sa SELECT * FROM DEMO partition (P1) 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
9pp3h276waqvm SELECT * FROM DEMO partition (P2) 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
by2m6mh16tpsz SELECT * FROM DEMOX 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
 
SQL> select * from table(dbms_xplan.display_cursor('1ft329rx910sa',0,'basic +partition'));
 
PLAN_TABLE_OUTPUT
-----------------
EXPLAINED SQL STATEMENT:
------------------------
SELECT * FROM DEMO partition (P1)
 
Plan hash value: 3520634703
 
------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION LIST SINGLE| | 1 | 1 |
| 2 | TABLE ACCESS FULL | DEMO | 1 | 1 |
------------------------------------------------------
 
SQL> select * from table(dbms_xplan.display_cursor('dd3ajp6k49u1d',0,'basic +partition'));
 
PLAN_TABLE_OUTPUT
-----------------
EXPLAINED SQL STATEMENT:
------------------------
SELECT * FROM DEMO
 
Plan hash value: 1180220521
 
---------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
---------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION LIST ALL| | 1 | 2 |
| 2 | TABLE ACCESS FULL| DEMO | 1 | 2 |
---------------------------------------------------

I exchange the partition P1 with the table DEMOX. I include indexes and add the DEFERRED INVALIDATION clause


SQL> alter table DEMO exchange partition P1 with table DEMOX including indexes without validation deferred invalidation;
Table DEMO altered.

If I did the same without the DEFERRED INVALIDATION clause, or without including the indexes, or with different indexes, then I would see all cursors invalidated. Here, only the select on the DEMOX table is invalidated:


SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;
 
SQL_TEXT CHILD_NUMBER INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
-------- ------------ ------------- ----- ----------- ---------- --------------- -------------- ---------------- ------------------
SELECT * FROM DEMO 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
SELECT * FROM DEMO partition (P1) 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
SELECT * FROM DEMO partition (P2) 0 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N
SELECT * FROM DEMOX 0 1 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:12 N

I expected to see the IS_ROLLING_INVALID flag changed to ‘Y’ as we can observe with other operations. I have opened an SR for that.

Rolling invalidation sets a timestamp at next execution:


SQL> SELECT * FROM DEMO partition (P1);
no rows selected
SQL> SELECT * FROM DEMO partition (P2);
no rows selected
SQL> SELECT * FROM DEMO;
no rows selected
SQL> SELECT * FROM DEMOX;
no rows selected
 
 
SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;
 
SQL_TEXT CHILD_NUMBER INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
-------- ------------ ------------- ----- ----------- ---------- --------------- -------------- ---------------- ------------------
SELECT * FROM DEMO 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMO partition (P1) 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMO partition (P2) 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMOX 0 1 2 1 1 2017-05-26/10:06:12 2017-05-26/10:06:14 10:06:14 N

I expected to see IS_ROLLING_INVALID going from ‘Y’ to ‘X’ here when the random time is set for invalidation.

By default, the random time is set within a 5-hour window, but I changed “_optimizer_invalidation_period” to 5 seconds instead, and I wait for this time window to pass to be sure that invalidation occurs. Then I run my queries again.


SQL> host sleep 5
 
SQL> SELECT * FROM DEMO partition (P1);
no rows selected
SQL> SELECT * FROM DEMO partition (P2);
no rows selected
SQL> SELECT * FROM DEMO;
no rows selected
SQL> SELECT * FROM DEMOX;
no rows selected
 

Here are the new child cursors created for the ones that were marked for rolling invalidation. The IS_ROLLING_INVALID column did not display anything, but it seems that it works as expected:


SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;
 
SQL_TEXT CHILD_NUMBER INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
-------- ------------ ------------- ----- ----------- ---------- --------------- -------------- ---------------- ------------------
SELECT * FROM DEMO 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMO 1 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:19 10:06:19 N
SELECT * FROM DEMO partition (P1) 1 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:19 10:06:19 N
SELECT * FROM DEMO partition (P1) 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMO partition (P2) 1 0 1 1 1 2017-05-26/10:06:12 2017-05-26/10:06:19 10:06:19 N
SELECT * FROM DEMO partition (P2) 0 0 1 2 2 2017-05-26/10:06:12 2017-05-26/10:06:12 10:06:14 N
SELECT * FROM DEMOX 0 1 2 2 2 2017-05-26/10:06:12 2017-05-26/10:06:14 10:06:19 N

Here is the confirmation that those 3 cursors were not shared because they have passed the rolling invalidation window:


SQL> select sql_id,child_number,reason from v$sql_shared_cursor join v$sql using(sql_id, child_number) where sql_text like 'S%DEMO%';
 
SQL_ID CHILD_NUMBER REASON
------ ------------ ------
1ft329rx910sa 0 033Rolling Invalidate Window Exceeded(3)2x414957859751495785979
1ft329rx910sa 1
by2m6mh16tpsz 0
dd3ajp6k49u1d 0 033Rolling Invalidate Window Exceeded(3)2x414957859771495785979
dd3ajp6k49u1d 1
9pp3h276waqvm 0 033Rolling Invalidate Window Exceeded(3)2x414957859781495785979
9pp3h276waqvm 1

So what?

The first observation is that deferred invalidation works with partition exchange, despite the fact that the V$SQL.IS_ROLLING_INVALID flag is not updated. I was surprised to see that rolling invalidation occurs even for the cursors accessing the partition which was exchanged. However, rolling invalidation occurs only if the indexes are the same. If we do not exchange the indexes, then all cursors are invalidated immediately. This probably means that the parsed cursor is still valid to run after the exchange, as the indexes are guaranteed to have the same structure, type, compression, …
This is a very nice feature when exchange partition is used to load new data into a fact table: you load into a staging table and then exchange it with the latest partition, as sketched below. The new values are exposed immediately and this new feature avoids a hard parse peak.
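
Here is a minimal sketch of that loading pattern, run from the shell; the table, partition and index names are hypothetical and the staging table is assumed to have the same structure and indexes as the fact table:

sqlplus -s demo/demo@//localhost/PDB1 <<'EOF'
-- load the new data into the staging table (direct-path insert, then commit)
insert /*+ append */ into SALES_STAGE select * from SALES_EXTERNAL;
commit;
-- publish the data by exchanging the staging table with the latest partition,
-- without invalidating the existing cursors immediately
alter table SALES exchange partition P_LATEST with table SALES_STAGE
  including indexes without validation deferred invalidation;
EOF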

 

Cet article Oracle 12cR2: exchange partition deferred invalidation est apparu en premier sur Blog dbi services.

Which privilege for CREATE PLUGGABLE DATABASE from DB LINK?

Mon, 2017-05-29 13:59

When cloning a PDB from a remote CDB you need to define a database link to be used in the CREATE PLUGGABLE DATABASE … FROM …@… command. The documentation is not completely clear about the privileges required on the source for the user defined in the database link, so here are the different possibilities.

Remote clone

Here is what the documentation says:
CapturePDBPrivsClone

So you can connect to the CDB or to the PDB.

In order to connect to the CDB you need a common user with the CREATE SESSION system privilege:

SQL> create user C##DBA identified by oracle;
User C##DBA created.
SQL> grant create session to C##DBA container=current;
Grant succeeded.

No need for CONTAINER=ALL here because you connect only to the CDB$ROOT.

Then you need the CREATE PLUGGABLE DATABASE system privilege on the PDB. You can grant it from the CDB$ROOT with the CONTAINER=ALL but it is sufficient to grant it locally on the source PDB:

SQL> alter session set container=PDB1;
Session altered.
SQL> grant create pluggable database to C##DBA container=current;
Grant succeeded.

Note that, although not documented, the SYSOPER administrative privilege can replace CREATE PLUGGABLE DATABASE, so we can run the following instead of the previous one:

SQL> alter session set container=PDB1;
Session altered.
SQL> grant sysoper to C##DBA container=current;
Grant succeeded.

Both ways are usable for cloning: on the destination, you create a database link to this common user and run the CREATE PLUGGABLE DATABASE:

SQL> create database link CDB1A connect to C##DBA identified by oracle using '//localhost/CDB1A';
Database link CDB1A created.
SQL> create pluggable database PDB1CLONE from PDB1@CDB1A file_name_convert=('CDB1A/PDB1','CDB2A/PDB1CLONE');
Pluggable database PDB1CLONE created.
SQL> alter pluggable database PDB1CLONE open;
Pluggable database PDB1CLONE altered.

This was using a common user but you can also define the user locally on the source PDB:

SQL> alter session set container=PDB1;
Session altered.
SQL> create user PDBDBA identified by oracle;
User PDBDBA created.
SQL> grant create session to PDBDBA container=current;
Grant succeeded.
SQL> grant create pluggable database to PDBDBA container=current;
Grant succeeded.

There again you have the alternative to use SYSOPER instead of CREATE PLUGGABLE DATABASE:

SQL> alter session set container=PDB1;
Session altered.
SQL> create user PDBDBA identified by oracle;
User PDBDBA created.
SQL> grant create session to PDBDBA container=current;
Grant succeeded.
SQL> grant sysoper to PDBDBA container=current;
Grant succeeded.

With one of those, you can clone from the target with a database link connecting to the local user only:

SQL> create database link CDB1A connect to PDBDBA identified by oracle using '//localhost/PDB1';
Database link CDB1A created.
SQL> create pluggable database PDB1CLONE from PDB1@CDB1A file_name_convert=('CDB1A/PDB1','CDB2A/PDB1CLONE');
Pluggable database PDB1CLONE created.
SQL> alter pluggable database PDB1CLONE open;
Pluggable database PDB1CLONE altered.

So which alternative should you use? The choice of the common or local user is up to you. I would probably use a common user for system administration tasks, and cloning is one of them. But if you are in a PDBaaS environment where you are the PDB administrator, then you can clone your PDB to another CDB that you manage. This can mean cloning a PDB from the Cloud to a CDB on your laptop.

PDB Relocate

Things are different with the RELOCATE option, where you drop the source PDB and redirect the connections to the new one. This is definitely a system administration task to be done at CDB level, and it requires a common user. Trying it from a database link connecting to a local user will raise the following error:

ORA-17628: Oracle error 65338 returned by remote Oracle server
 
65338, 00000, "unable to create pluggable database"
// *Cause: An attempt was made to relocate a pluggable database using a
// database link to the source pluggable database.
// *Action: Use a database link that points to the source multitenant container
// database root and retry the operation.

Here is what the documentation says:
CapturePDBPrivsRelocate

So, we need to have a common user on the source CDB, with CREATE SESSION privilege, and it makes sense to use an administrative privilege:

SQL> create user C##DBA identified by oracle;
User C##DBA created.
SQL> grant create session to C##DBA container=current;
Grant succeeded.
SQL> alter session set container=PDB1;
Session altered.
SQL> grant sysoper to C##DBA container=current;
Grant succeeded.

The documentation mentions that you can use either SYSDBA or SYSOPER, but from my tests (and Deiby Gómez’s ones) only SYSOPER works without raising an ‘insufficient privileges’ error. The documentation also mentions that CREATE PLUGGABLE DATABASE is necessary. Actually, it is not. And, with a relocate, it cannot be an alternative to SYSOPER. The user must be a common user, the CREATE SESSION must be granted commonly, but the SYSOPER can be granted locally for the PDB we relocate.
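
For illustration, here is a minimal sketch of the relocate itself, run on the destination CDB with a database link that points to the source CDB$ROOT as a common user with SYSOPER; the FILE_NAME_CONVERT values are assumptions:

sqlplus / as sysdba <<'EOF'
create pluggable database PDB1 from PDB1@CDB1A relocate
  file_name_convert=('CDB1A/PDB1','CDB2A/PDB1');
-- opening the relocated PDB completes the operation and redirects the connections
alter pluggable database PDB1 open;
EOF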

In summary

To clone a remote PDB you can use a common or local user, with SYSOPER or CREATE PLUGGABLE DATABASE privilege. To relocate a PDB you need a common user with SYSOPER.

 

Cet article Which privilege for CREATE PLUGGABLE DATABASE from DB LINK? est apparu en premier sur Blog dbi services.

OUD – Oracle Unified Directory 11.1.2.3, How to change the Java version of OUD 11.1.2.3 with Weblogic and ODSM

Mon, 2017-05-29 08:50

In this blog, I would like to show how to change the Java version used by OUD and WebLogic, in case you use the ODSM. OUD (Oracle Unified Directory) is written purely in Java, unlike OID for example, and so it relies heavily on your installed Java version. From a security point of view, updating the Java version is a task that you might do frequently, and you want to do it as easily as possible, e.g. by just changing a symbolic link. In my case, I would like to update the Java version from 1.7.0_131 to 1.7.0_141. The first time you do the change, there are a few manual steps to do, but the next Java updates will be much easier, by just changing a symbolic link.

So … the first question that might pop up is: where can I get Java 7 downloads? You can find them in the Java SE 7 Archive Downloads; however, Java 1.7 is freely available only up to version 1.7.0_80, and for OUD you have to use Java 1.7 because it is the only supported version. Here is the link to the Java 1.7 archive downloads.

http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html

Hint: In case you use SSLv3, the following announcement might be of interest for you. Starting with the January 20, 2015 Critical Patch Update releases (JDK 7u75 and above), the Java Runtime Environment has SSLv3 disabled by default. For more information, check out the following web page.

http://www.oracle.com/technetwork/java/javase/documentation/cve-2014-3566-2342133.html

OK. So, where do I get the latest Java version, which is 1.7.0_141 at the moment? Since July 2015, updates for Java 7 are no longer available to the public. Oracle offers updates to Java 7 only for customers who have purchased Java support or have Oracle products that require Java 7. That means you have to go to MOS and search for the following note:

Information Center: Installation & Configuration for Oracle Java SE (Doc ID 1412103.2)

There you will find a link which points you to the JDK/JRE 7 Update 141 and “Patch 13079846: Oracle JDK 1.7.0″.

Java download

OK. Now that we have the latest 1.7 version, we can start with the upgrade procedure, which consists of 8 steps.

1. Check the current version
2. Install the new Java version
3. Stop WebLogic and the OUD
4. Adjust the symlink
5. Update OUD configuration files (java.properties)
6. Update WebLogic configuration files
7. Start WebLogic and the OUD
8. Check the new version

1. Check the current version of your Java installation, WebLogic Server and OUD.

The Java version can be checked via “java -version”

[dbafmw@dbidg01 ~]$ echo $JAVA_HOME
/u01/app/oracle/product/Middleware/11.1.2.3/jdk1.7.0_131	
[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/jdk1.7.0_131/bin/java -version
java version "1.7.0_131"
Java(TM) SE Runtime Environment (build 1.7.0_131-b12)
Java HotSpot(TM) 64-Bit Server VM (build 24.131-b12, mixed mode)

To check the Java vendor and Java version used by WebLogic Server, I usually use the Admin Console. Just go to Environment > Servers > Your Server > Monitoring > General

That page should have all the information that you need, like on the following screen shot:

WebLogic 131
To check the Java version used by OUD, simply log in to the ODSM and you will see it immediately on the Home page.

OUD version 131

Or use the status script from the command line.

[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/status -D "cn=Directory Manager" -j ~/.oudpwd  | grep "Java Version"
Java Version:             1.7.0_131
2. Install the new Java version
[dbafmw@dbidg01 ~]$ cd /u01/stage/
[dbafmw@dbidg01 stage]$ unzip p13079846_17000_Linux-x86-64.zip
...
...
[dbafmw@dbidg01 stage]$ mv jdk-7u141-linux-x64.tar.gz /u01/app/oracle/product/Middleware/11.1.2.3/

[dbafmw@dbidg01 stage]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/
[dbafmw@dbidg01 11.1.2.3]$ tar -xzvf jdk-7u141-linux-x64.tar.gz
...
...
[dbafmw@dbidg01 11.1.2.3]$ rm jdk-7u141-linux-x64.tar.gz

[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 Mar 27 10:36 jdk -> jdk1.7.0_131
drwxr-x---  8 dbafmw oinstall   4096 Dec 12 23:34 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
3. Stop WebLogic and the OUD

Stop WebLogic

[dbafmw@dbidg01 bin]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
[dbafmw@dbidg01 bin]$ java weblogic.version -verbose
[dbafmw@dbidg01 bin]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/bin
[dbafmw@dbidg01 bin]$ ./stopWebLogic.sh
...
...

Stop OUD

[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/stop-ds
...
...
4. Adjust the symlink
[dbafmw@dbidg01 11.1.2.3]$ rm jdk
[dbafmw@dbidg01 11.1.2.3]$ ln -s jdk1.7.0_141 jdk
[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 May 29 14:10 jdk -> jdk1.7.0_141
drwxr-x---  8 dbafmw oinstall   4096 Dec 12 23:34 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
5. Update OUD configuration files (java.properties)

The update of the OUD java.properties file is usually done in the INSTALL_HOME and the INSTANCE_HOME, and then activated via the dsjavaproperties script from the appropriate location. The dsjavaproperties script is quite well documented by Oracle. See the following link for more information.

http://docs.oracle.com/cd/E52734_01/oud/OUDAG/appendix_cli.htm#OUDAG01148

OK. Let’s do it first for your INSTALL_HOME

[dbafmw@dbidg01 11.1.2.3]$ vi $MW_HOME/Oracle_OUD1/config/java.properties
... 
default.java-home=/u01/app/oracle/product/Middleware/11.1.2.3/jdk/jre

Now you can apply the new java configuration with dsjavaproperties shell script.

[dbafmw@dbidg01 11.1.2.3]$ cd $MW_HOME/
[dbafmw@dbidg01 11.1.2.3]$ cd Oracle_OUD1/bin
[dbafmw@dbidg01 bin]$ ./dsjavaproperties
The operation was successful.  The server commands will use the java arguments
and java home specified in the properties file located in
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/config/java.properties

And now for your INSTANCE_HOME

[dbafmw@dbidg01 11.1.2.3]$ vi $MW_HOME/asinst_1/OUD/config/java.properties
...
default.java-home=/u01/app/oracle/product/Middleware/11.1.2.3/jdk/jre

And apply the new Java configuration with the dsjavaproperties shell script here as well.

[dbafmw@dbidg01 11.1.2.3]$ cd $MW_HOME/
[dbafmw@dbidg01 11.1.2.3]$ cd asinst_1/OUD/bin/
[dbafmw@dbidg01 bin]$ ./dsjavaproperties
The operation was successful.  The server commands will use the java arguments
and java home specified in the properties file located in
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/config/java.properties
6. Update WebLogic configuration files

Adjust the JAVA_HOME in the following files and point it to the symlink we created previously. So, the new Java home should be:

JAVA_HOME=/u01/app/oracle/product/Middleware/11.1.2.3/jdk

These are the files where you typically need to adjust it:

~/.bash_profile  # of the Application owner of the WebLogic/OUD installation
$MW_HOME/wlserver_10.3/common/bin/commEnv.sh
$MW_HOME/wlserver_10.3/samples/domains/wl_server/bin/setDomainEnv.sh
$MW_HOME/wlserver_10.3/samples/domains/medrec/bin/setDomainEnv.sh
$MW_HOME/wlserver_10.3/samples/domains/medrec-spring/bin/setDomainEnv.sh
$MW_HOME/wlserver_10.3/common/nodemanager/nodemanager.properties
$MW_HOME/utils/bsu/bsu.sh 
$MW_HOME/utils/uninstall/uninstall.sh
$MW_HOME/utils/quickstart/quickstart.sh
$MW_HOME/user_projects/domains/base_domain/bin/setDomainEnv.sh
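If you prefer not to edit each file by hand, a small loop can do the replacement. This is only a sketch, assuming the files reference the versioned JDK directory (jdk1.7.0_131) directly; adjust the file list to your installation and keep backups:

cd $MW_HOME
for f in wlserver_10.3/common/bin/commEnv.sh \
         utils/bsu/bsu.sh \
         user_projects/domains/base_domain/bin/setDomainEnv.sh; do
    cp ${f} ${f}_bck_$(date +%Y%m%d)        # backup first
    sed -i 's,jdk1.7.0_131,jdk,g' ${f}      # .../11.1.2.3/jdk1.7.0_131 -> .../11.1.2.3/jdk
done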

To double-check, you might want to run the following find command to see if the old 131 JDK is still referenced anywhere else.

[dbafmw@dbidg01 11.1.2.3]$ cd $MW_HOME
[dbafmw@dbidg01 11.1.2.3]$ find . -type f -name "*.sh" -exec grep -il jdk1.7.0_131 {} \;
7. Start WebLogic and the OUD

Start WebLogic

[dbafmw@dbidg01 bin]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
[dbafmw@dbidg01 bin]$ java weblogic.version -verbose
[dbafmw@dbidg01 bin]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/bin
[dbafmw@dbidg01 bin]$ nohup ./startWebLogic.sh &
...
...

Start OUD

[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/start-ds
...
...
8. Check the new version
[dbafmw@dbidg01 bin]$ echo $JAVA_HOME
/u01/app/oracle/product/Middleware/11.1.2.3/jdk
[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/jdk/bin/java -version
java version "1.7.0_141"
Java(TM) SE Runtime Environment (build 1.7.0_141-b11)
Java HotSpot(TM) 64-Bit Server VM (build 24.141-b11, mixed mode)

To check the Java vendor and Java version used by WebLogic Server, use the WebLogic Admin Console again, as previously mentioned.

WebLogic 141

And log in to the ODSM.

OUD version 141
Or use the status script from the command line.

[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/status -D "cn=Directory Manager" -j ~/.oudpwd  | grep "Java Version"
Java Version:             1.7.0_141
Conclusion

A few manual steps are involved when you change the Java version for the first time. The next time, however, you simply need to stop WebLogic and OUD, install the new JDK, change the symbolic link, and start WebLogic and OUD again.
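As a sketch of what such a subsequent update could look like (the 7uXXX archive and directory names are placeholders for a future JDK release):

# stop WebLogic and OUD first, then:
cd /u01/app/oracle/product/Middleware/11.1.2.3
tar -xzvf /u01/stage/jdk-7uXXX-linux-x64.tar.gz    # hypothetical newer JDK archive
rm jdk
ln -s jdk1.7.0_XXX jdk
# start WebLogic and OUD again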

 

 

Cet article OUD – Oracle Unified Directory 11.1.2.3, How to change the Java version of OUD 11.1.2.3 with Weblogic and ODSM est apparu en premier sur Blog dbi services.

12cR2 needs to connect with password for Cross-PDB DML

Fri, 2017-05-26 14:13

In a previous post, I explained that Cross-PDB DML, executing an update/delete/insert with the CONTAINERS() clause, seems to be implemented with implicit database links. Connecting through a database link requires a password and this blog post is about an error you may encounter: ORA-01017: invalid username/password; logon denied

This blog post also explains a consequence of this implementation: a big inconsistency of the CONTAINERS() function, because the implementation is completely different for queries (select) and for insert/delete/update, and you may end up writing to and reading from different schemas.

We do not need Application Container for Cross-PDB DML and we don’t even need metadata link tables. Just tables with the same columns. Here I have a DEMO table which is just a copy of DUAL, and it is created in CDB$ROOT and in PDB1 (CON_ID=3), owned by SYS.
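The creation statements are not shown here; a minimal sketch of how such a table could be prepared might look like this:

SQL> -- in CDB$ROOT
SQL> create table DEMO as select * from DUAL;
SQL> -- in PDB1 (CON_ID=3)
SQL> alter session set container=PDB1;
SQL> create table DEMO as select * from DUAL;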

Implicit database link

I’m connecting to CDB$ROOT with user, password and service name:

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert a row into the DEMO table in the PDB1, which is CON_ID=3:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
1 row created.

This works in 12.2, is documented, and is an alternative to switching to the container.

But now, let’s try to do the same when connecting with ‘/ as sysdba’:

SQL> connect / as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
 
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PDB1

The first message mentions invalid user/password, and the second one mentions a database link having the same name as the container.
As I described in the previous post, CONTAINERS() opens an implicit database link when doing modifications to another container. But a database link requires a connection, and no user/password has been provided. It seems that it tries to connect with the same user and password as the ones provided to connect to the root.

Then, I provide the user/password but with a local connection (no service name):


SQL> connect sys/oracle as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied

There is no mention of a database link here, but it is still impossible to connect. It seems that the session needs our connection string to find out how to connect to the PDB.

Explicit database link

There is an alternative. You can create the database link explicitly, and it will then be used by CONTAINERS(), since it holds all the required information: password and service name. But the risk is that you define this database link to connect to another user.

Here I also have a DEMO table created in SCOTT:

SQL> create database link PDB1 connect to scott identified by tiger using '//localhost/PDB1';
Database link created.
 
SQL> select * from DEMO@PDB1;
 
D
-
X

From the root I insert with CONTAINERS() without mentioning the schema:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'S');
1 row created.

I have no errors here (I’m still connected / as sysdba) because I have a database link with the same name as the one it tries to use implicitly. So it works without any error or warning. But my database link does not connect to the same schema (SYS) but to SCOTT. And because a DEMO table with the same columns was there, the row was actually inserted into the SCOTT schema:

SQL> select * from DEMO@PDB1;
 
D
-
X
S

The big problem here is that when doing a select through the same CONTAINERS() function, a different mechanism is used: not the database link, but a session switch to the other container, in the same schema. So the row inserted through INSERT INTO CONTAINERS() is not displayed by SELECT FROM CONTAINERS():
SQL> select * from containers(DEMO);
 
D CON_ID
- ----------
X 1
X 3
Y 3

So what?

I don’t know if the first problem (invalid user/password) will be qualified as a bug, but I hope the second one will. Cross-PDB DML will be an important component of Application Containers, and having a completely different implementation for SELECT and for INSERT/UPDATE/DELETE may be a source of problems. In my opinion, both should use a container switch within the same session, but that means a transaction would have to be able to write to multiple containers, which is not possible currently.

 

Cet article 12cR2 needs to connect with password for Cross-PDB DML est apparu en premier sur Blog dbi services.

Documentum – Setup WildFly in HTTPS

Sun, 2017-05-14 04:24

In a previous blog (See this one), I mentioned some differences between JBoss 7.1.1 (coming with CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (coming with CS 7.3 / xPlore 1.6). In this blog, I will talk about what is needed to set up a WildFly instance in HTTPS only or HTTPS+HTTP. This blog applies to both Content Servers and Full Text Servers. On the Full Text Server side, you might be aware of the ConfigSSL.groovy script which can be used to set up the Dsearch and IndexAgents in HTTPS (see this blog for more information). But if you are using a Multi-node Full Text setup (which most likely involves a lot of CPS (Content Processing Service)) or if you are just installing additional CPS (or Remote CPS), then what can you do?

 

The script ConfigSSL.groovy can only be used to configure a PrimaryDsearch or an IndexAgent in SSL; it is not able to configure a CPS in SSL. Also, the setup of a CPS in SSL isn’t described anywhere in the official EMC (/OpenText) documentation, so that’s the main purpose of this blog: what is needed to set up a WildFly instance in HTTPS so you can use it for your CPS as well as for your JMS (Java Method Server).

 

As a prerequisite, you need to have a Java Keystore. There is plenty of documentation around that so I won’t describe this part. In this blog, I will use a Full Text Server and I will configure a CPS in HTTPS. If you want to do that on a Content Server, just adapt the few paths accordingly. As a first thing to do, I will set up two environment variables which will contain the passwords for the Java cacerts and my Java Keystore (JKS):

[xplore@full_text_server_01 ~]$ cd /tmp/certs/
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the Java cacerts password: " jca_pw
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the JKS password: " jks_pw

 

When you enter the first read command, the prompt isn’t returned. Just write/paste the jca password and press enter. Then the prompt will be returned and you can execute the second read command in the same way to write/paste the jks password. Now you can update the Java cacerts and import your JKS. The second and third commands below (the ones with the “-delete”) will remove any entries with the aliases mentioned (dbi_root_ca and dbi_int_ca). If you aren’t sure about what you are doing, don’t execute these two commands. If the alias already exists in the Java cacerts, you will see an error while executing the commands. In such cases, just use another alias (or remove the existing one using the “-delete” commands…):

[xplore@full_text_server_01 certs]$ cp $JAVA_HOME/jre/lib/security/cacerts $JAVA_HOME/jre/lib/security/cacerts_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_root_certificate.cer" -storepass ${jca_pw}
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_int_certificate.cer" -storepass ${jca_pw}

 

In the commands above, you will need to customize at least the “-file” parameter. Above, I used one Root CA and one Intermediate CA. Depending on your needs, you might need to do the same thing (if you are using a trust chain with 2 CAs) or just import the SSL Certificate directly, aso…
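For example, importing a single server certificate directly could look like this (the alias and file name are hypothetical, adapt them to your certificate):

[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias my_server_cert -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/my_server_certificate.cer" -storepass ${jca_pw}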

 

When this is done, you need to stop your application. In this case, I will stop my CPS (named “Node2_CPS1″ here). To stay more general, I will define a new variable “server_name”. You can use this variable to keep the same commands on the different servers; just use the appropriate value: “MethodServer” (for a JMS), “PrimaryDsearch” (or the name of your Dsearch), “CPS” (or the name of your CPS) or anything else:

[xplore@full_text_server_01 certs]$ export server_name="Node2_CPS1"
[xplore@full_text_server_01 certs]$ $JBOSS_HOME/server/stop${server_name}.sh

 

In the command above, $JBOSS_HOME is the location of the JBoss/WildFly instance. For an xPlore 1.6, for example, it will be “$XPLORE_HOME/wildfly9.0.1″. For a Content Server 7.3, it will be “$DOCUMENTUM_SHARED/wildfly9.0.1″. I kept the name “JBOSS_HOME” because even if the product has been renamed, I think it will take time before everybody uses the name WildFly, just like there are still a lot of people saying “docbase” while this has been replaced with “repository” in CS 7.0…
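If the variable isn’t already set in your environment, you could export it yourself, for example:

export JBOSS_HOME=$XPLORE_HOME/wildfly9.0.1          # on a Full Text Server (xPlore 1.6)
export JBOSS_HOME=$DOCUMENTUM_SHARED/wildfly9.0.1    # on a Content Server 7.3 (use only the relevant line)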

 

Then let’s move the JKS file to the right location:

[xplore@full_text_server_01 certs]$ mv /tmp/certs/full_text_server_01.jks $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore
[xplore@full_text_server_01 certs]$ chmod 600 $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore

 

Now you can configure the standalone.xml file of this application to handle HTTPS communications, because by default only HTTP is enabled:

[xplore@full_text_server_01 certs]$ cd $JBOSS_HOME/server/DctmServer_${server_name}/configuration/
[xplore@full_text_server_01 configuration]$ cp standalone.xml standalone.xml_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 configuration]$ 
[xplore@full_text_server_01 configuration]$ sed -i 's/inet-address value="${jboss.bind.address.management:[^}]*}/inet-address value="${jboss.bind.address.management:127.0.0.1}/' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realm name="sslSecurityRealm"/,/<\/security-realm>/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<https-listener.*name="default-https"/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realms>/a \            <security-realm name="sslSecurityRealm">\n                <server-identities>\n                    <ssl>\n                        <keystore path="'$JBOSS_HOME'/server/DctmServer_'${server_name}'/configuration/my.keystore" keystore-password="'${jks_pw}'"/>\n                    </ssl>\n                </server-identities>\n            </security-realm>' standalone.xml

 

So what are the commands above doing?

  • Line 2: Backup of the standalone.xml file
  • Line 4: Changing the default IP on which WildFly is listening for the Management Interface (by default 0.0.0.0 which means no restriction) to 127.0.0.1 so it’s only accessible locally
  • Line 5: Removing any existing Security Realm named “sslSecurityRealm”, if any (there isn’t any unless you already created one with this specific name…)
  • Line 6: Removing any HTTPS listener, if any (there isn’t any unless you already setup this application in HTTPS…)
  • Line 7: Creating a new Security Realm named “sslSecurityRealm” containing all the needed properties: keystore path and password (the resulting element is sketched just after this list). You can also define additional parameters like the alias, aso… $JBOSS_HOME, ${server_name} and ${jks_pw} are surrounded with single quotes so they are evaluated by the shell before being put in the xml file.
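As a sketch, the resulting element in standalone.xml should look roughly like this (path and password expanded to your actual values):

<security-realm name="sslSecurityRealm">
    <server-identities>
        <ssl>
            <keystore path="$JBOSS_HOME/server/DctmServer_Node2_CPS1/configuration/my.keystore" keystore-password="********"/>
        </ssl>
    </server-identities>
</security-realm>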

 

Once this has been done, your security realm is defined but not yet used… So here comes the choice: either completely deactivate HTTP and keep only HTTPS, or use both at the same time. From my point of view, it’s always better to keep only HTTPS, but in DEV environments, for example, you might have requests from developers to keep HTTP enabled because it’s simpler for them to develop something (web services for example) over HTTP and then secure it afterwards. So it’s up to you!

 

1. HTTP and HTTPS

To keep both enabled, you just have to create a new listener in the standalone.xml file which will be dedicated to HTTPS. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i '/<http-listener .*name="default".*/a \                <https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>' standalone.xml

 

In the above command, just replace “PUT_HERE_SSL_CIPHERS” with the SSL ciphers that you want your application to use. Depending on how you configure SSL in your environment, the list of ciphers needed will change, so I will let you fill in that blank. Of course, if the name of your http-listener isn’t “default”, you will need to adapt the command above, but it is the default name for all WildFly instances, so unless you already customized it, this will do.
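As an illustration only, the resulting listener element could look like the following; the two cipher suite names are placeholders and their availability depends on your JVM and security policy, so replace them with the ones you actually want:

<https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"/>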

 

2. HTTPS Only

If you don’t want to allow HTTP communications, then you just need to remove the HTTP listener and replace it with the HTTPS listener. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i 's,<http-listener .*name="default".*,<https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>,' standalone.xml

 

 

Whether you choose HTTP+HTTPS or HTTPS only, you then have to start your application again using your preferred way. This is an example of a command that will do it:

[xplore@full_text_server_01 configuration]$ sh -c "cd $JBOSS_HOME/server/; nohup ./start${server_name}.sh & sleep 5; mv nohup.out nohup-${server_name}.out"

 

When the application has been started, you can try to access the application URL. These are some examples using the most common ports:

  • JMS => https://content_server_01:9082/DmMethods/servlet/DoMethod
  • PrimaryDsearch => https://full_text_server_01:9302/dsearch
  • IndexAgent => https://full_text_server_01:9202/IndexAgent
  • CPS => https://full_text_server_01:9402/cps/ContentProcessingService?wsdl
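A quick way to verify from the command line that the HTTPS listener answers is curl; -k skips certificate validation, which is acceptable for a quick smoke test only:

[xplore@full_text_server_01 configuration]$ curl -k -I "https://full_text_server_01:9402/cps/ContentProcessingService?wsdl"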

 

 

Cet article Documentum – Setup WildFly in HTTPS est apparu en premier sur Blog dbi services.

Documentum – WildFly not able to start in a CS 7.3 / xPlore 1.6

Sun, 2017-05-14 03:15

As mentioned in previous blogs (like this one), we were doing some testing on the newly released versions of Documentum (CS 7.3, xPlore 1.6, aso…). In this blog, I will talk about something that can prevent the WildFly instances from starting properly (Red Hat renamed the JBoss Application Server to WildFly in 2013, starting with version 8.0). As this issue is linked to WildFly, you can understand that it might happen on both Content Servers 7.3 and Full Text Servers 1.6, but in this blog I will use a Content Server for the example.

 

So when I first installed a Content Server 7.3 (in full silent mode, of course!), the Java Method Server wasn’t running properly. Actually, worse than that, it wasn’t even starting completely. Here are the entries written to the JMS log file (server.log):

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer

=========================================================================

15:48:41,985 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
15:48:44,301 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
15:48:44,502 INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
15:48:50,818 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,819 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,867 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
15:48:50,868 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
15:48:51,024 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
15:48:51,026 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
15:48:51,109 INFO  [org.xnio] (MSC service thread 1-5) XNIO version 3.3.1.Final
15:48:51,113 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
15:48:51,128 INFO  [org.xnio.nio] (MSC service thread 1-5) XNIO NIO Implementation Version 3.3.1.Final
15:48:51,164 INFO  [org.jboss.remoting] (MSC service thread 1-5) JBoss Remoting version 4.0.9.Final
15:48:51,325 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
15:48:51,335 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
15:48:51,374 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
15:48:51,353 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
15:48:51,359 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
15:48:51,387 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
15:48:51,410 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
15:48:51,429 INFO  [org.jboss.as.security] (MSC service thread 1-3) WFLYSEC0001: Current PicketBox version=4.9.2.Final
15:48:51,499 INFO  [org.jboss.as.connector] (MSC service thread 1-7) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
15:48:51,541 INFO  [org.jboss.as.naming] (MSC service thread 1-4) WFLYNAM0003: Starting Naming Service
15:48:51,554 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
15:48:51,563 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,564 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,583 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
15:48:51,593 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0018: Started Driver service with driver-name = h2
15:48:51,931 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
15:48:51,984 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0012: Started server default-server.
15:48:52,049 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0018: Host default-host starting
15:48:52,112 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-8) MSC000001: Failed to start service jboss.undertow.listener.default: org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:150)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Protocol family unavailable
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.xnio.nio.NioXnioWorker.createTcpConnectionServer(NioXnioWorker.java:182)
        at org.xnio.XnioWorker.createStreamConnectionServer(XnioWorker.java:243)
        at org.wildfly.extension.undertow.HttpListenerService.startListening(HttpListenerService.java:115)
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:147)
        ... 5 more

...

15:49:37,585 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "native-interface")
]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.remoting.server.management" => "org.jboss.msc.service.StartException in service jboss.remoting.server.management: WFLYRMT0005: Failed to start service
    Caused by: java.net.SocketException: Protocol family unavailable"}}
15:49:37,589 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "http-interface")
]) - failure description: {
    "WFLYCTL0080: Failed services" => {"jboss.serverManagement.controller.management.http" => "org.jboss.msc.service.StartException in service jboss.serverManagement.controller.management.http: WFLYSRV0083: Failed to start the http-interface service
    Caused by: java.lang.RuntimeException: java.net.SocketException: Protocol family unavailable
    Caused by: java.net.SocketException: Protocol family unavailable"},
    "WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
        "Services that were unable to start:" => ["jboss.serverManagement.controller.management.http.shutdown"],
        "Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
    }
}
...
15:49:37,653 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
15:49:37,654 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
...
15:49:37,901 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0063: Http management interface is not enabled
15:49:37,902 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0054: Admin console is not enabled
15:49:37,903 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started (with errors) in 57426ms - Started 225 of 424 services (16 services failed or missing dependencies, 225 services are lazy, passive or on-demand)
...
15:49:38,609 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "acs.ear" (runtime-name: "acs.ear")
15:49:38,610 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "ServerApps.ear" (runtime-name: "ServerApps.ear")
...
15:53:02,276 INFO  [org.jboss.as] (MSC service thread 1-4) WFLYSRV0050: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) stopped in 124ms

 

After all these errors, the logs show that the HTTP management interface isn’t enabled and then WildFly just shuts itself down. So what is this “Protocol family unavailable”? This is actually linked to one of the changes introduced in WildFly compared to the old version (JBoss 7.1.1): WildFly supports and uses both IPv4 and IPv6 by default. Unfortunately, if your OS isn’t configured to use IPv6, this error will appear and prevent your WildFly from starting.
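To quickly check whether IPv6 is disabled on your OS, something like the following can be used on most Linux distributions (a value of 1 means IPv6 is disabled; depending on how it was disabled, the entry may not exist at all):

sysctl net.ipv6.conf.all.disable_ipv6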

 

Fortunately, it is very easy to correct this: you just have to force WildFly to use IPv4 only. For that purpose, we will add a single JVM parameter to the start script of the JMS:

sed -i 's,^JAVA_OPTS="[^"]*,& -Djava.net.preferIPv4Stack=true,' $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh

 

This command simply adds ” -Djava.net.preferIPv4Stack=true” just before the ending double quote of the line starting with JAVA_OPTS= in the file $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh. As a side note, to do the same thing on a Full Text Server, just change the path and the name of the startup script (e.g.: $XPLORE_HOME/wildfly9.0.1/server/startPrimaryDsearch.sh).
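To quickly verify that the flag was added as expected, you can simply grep the start script:

grep "JAVA_OPTS=" $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh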

 

After that, just restart the JMS and it will be able to start without issue:

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true

=========================================================================

16:10:22,285 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
16:10:22,601 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
16:10:22,711 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
16:10:24,627 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,645 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,664 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
16:10:24,665 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
16:10:24,701 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
16:10:24,702 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
16:10:24,728 INFO  [org.xnio] (MSC service thread 1-2) XNIO version 3.3.1.Final
16:10:24,737 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
16:10:24,745 INFO  [org.xnio.nio] (MSC service thread 1-2) XNIO NIO Implementation Version 3.3.1.Final
16:10:24,808 INFO  [org.jboss.remoting] (MSC service thread 1-2) JBoss Remoting version 4.0.9.Final
16:10:24,831 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
16:10:24,852 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
16:10:24,867 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
16:10:24,890 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
16:10:24,899 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
16:10:24,901 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
16:10:24,916 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
16:10:24,939 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,939 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,959 INFO  [org.jboss.as.security] (MSC service thread 1-2) WFLYSEC0001: Current PicketBox version=4.9.2.Final
16:10:24,967 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
16:10:24,977 INFO  [org.jboss.as.connector] (MSC service thread 1-4) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
16:10:25,034 INFO  [org.jboss.as.naming] (MSC service thread 1-7) WFLYNAM0003: Starting Naming Service
16:10:25,035 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
16:10:25,044 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-6) WFLYJCA0018: Started Driver service with driver-name = h2
16:10:25,394 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
16:10:25,414 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0012: Started server default-server.
16:10:25,508 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0018: Host default-host starting
16:10:25,581 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0006: Undertow HTTP listener default listening on /0.0.0.0:9080
16:10:25,933 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) WFLYJCA0001: Bound data source 
16:10:26,177 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-6) WFLYDS0013: Started FileSystemDeploymentService for directory $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments
16:10:26,198 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0027: Starting deployment of "ServerApps.ear" (runtime-name: "ServerApps.ear")
16:10:26,201 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-4) WFLYSRV0027: Starting deployment of "acs.ear" (runtime-name: "acs.ear")
16:10:26,223 INFO  [org.jboss.as.remoting] (MSC service thread 1-5) WFLYRMT0001: Listening on 0.0.0.0:9084
...
16:10:42,398 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 63) WFLYUT0021: Registered web context: /DmMethods
16:10:44,519 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 64) WFLYUT0021: Registered web context: /ACS
16:10:44,563 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
16:10:44,564 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
16:10:44,911 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9085/management
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://0.0.0.0:9085
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started in 23083ms - Started 686 of 905 services (298 services are lazy, passive or on-demand)

 

As shown above, WildFly started successfully. In this example, I really took a default JMS setup with just a few custom JVM parameters (Xms, Xmx, aso…), but as you can see in the logs above, there are a few issues with the default setup, like deprecated or missing parameters. So I would strongly suggest that you take a look at that and configure your JMS properly once it is up & running!

 

Earlier in this blog, I mentioned that a few things changed between JBoss 7.1.1 (CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (CS 7.3 / xPlore 1.6). In a future blog, I will talk about another change: how to set up a WildFly instance in SSL.

 

 

Cet article Documentum – WildFly not able to start in a CS 7.3 / xPlore 1.6 est apparu en premier sur Blog dbi services.
