Yann Neuhaus

dbi services technical blog

ADWC – a Docker container to start/stop Oracle Cloud services

Thu, 2018-05-03 01:08

In the previous post, I’ve explained how to start and stop the Autonomous Data Warehouse Cloud service from PSM (PaaS Service Manager). There’s a setup phase, and a run phase starting with service-start and ending with service-stop. And the setup is specific to an Oracle Cloud account, storing information in the local user home. You may want to run different setups, and even provide an easy way to start/stop an Oracle Cloud service without knowing the user, password and tenant name.

A Docker container is perfect to isolate this.

Dockerfile

Here is my quick (aka experimental) Dockerfile:

FROM alpine:latest
RUN apk add --update --no-cache python3 curl jq
ENV user=my.cloud.account@me.com
ENV password=MyP@ssw0rd
ENV tenant=idcs-31bbd63c3cb9466cb8a96f627b6b6116
ENV region=us
# get PSM
RUN curl -X GET -u ${user}:${password} -H X-ID-TENANT-NAME:${tenant} https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/${tenant}/client -o psmcli.zip
# install PSM
RUN pip3 install -U psmcli.zip
# setup PSM
RUN echo "{\"username\":\"${user}\",\"password\":\"${password}\",\"identityDomain\":\"${tenant}\",\"region\":\"${region}\",\"outputFormat\":\"short\"}" > config-payload &&\
psm setup --config-payload config-payload &&\
rm 421d64918638 # remove file with password
# patch PSM for bugs
RUN sed -ie 's/\/core//g' ~/.psm/data/ADWC.json
# variables which can be overwritten at run time
ENV service=ADWC
ENV name=MYADWC1
CMD trap 'echo "Stopping service ${service}...";psm ${service} stop-service --service-name ${name} -wc true ; exit 0' SIGINT SIGSTOP SIGKILL; echo "Starting service ${service}...";psm ${service} start-service --service-name ${name} -wc true || exit 1 ; echo "You can access to ${service} console with ADMIN user at:";echo;psm ${service} service -s ${name} -of json | jq -r '.serviceConsole';echo ; while sleep 60 ; do echo "Status of service ${service} at $(date)...";psm ADWC service -s ADWC ; done

We need curl to download PSM, pip3 to install it, and python3 to run it.
You can set your Oracle Cloud Account credentials as environment variables.
Then it fills all required information in a ‘config-payload’ file, runs ‘psm setup’ and removes that file.

At run time, it calls ‘start-service’ and then loops, showing the status every minute (you can see it with docker container logs). The INT, STOP and KILL signals call ‘stop-service’. The idea is that the Cloud Service runs while the container exists, and is shut down at the container’s end of life. There’s no other action to do with the container: it displays the console URL where you have everything to interact with the service (download client credentials, manage users, go to Machine Learning notebooks,…).
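Since service and name are plain environment variables, the same image can target another instance by overriding them at run time. A minimal sketch (the image tag is an assumption, reusing the one from the run example below):

docker run --rm -e name=MYADWC2 franck/psm/awdc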

Run example:

Here is a simple example:

# docker run --rm franck/psm/awdc
 
Starting service MYADWC1...
Message: Job submitted successfully for start of service/system
Job ID: 25583108
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].
You can access to MYADWC1 console with ADMIN user at:
 
https://adwc.uscom-east-1.oraclecloud.com/console/index.html?tenant_name=idcs-31bbd63c3cb9466cb8a96f627b6b6116&database_name=MYADWC1
 
Status of service MYADWC1 at Sun Apr 29 18:20:50 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
Status of service MYADWC1 at Sun Apr 29 18:21:51 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
 
^C
 
Stopping service MYADWC1...
Message: Job submitted successfully for stop of service/system
Job ID: 25620930
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].

After 2 minutes I hit ‘Control-C’ to stop the container. The service has been cleanly shut down.

 


Load Balancing with Docker Swarm mode and SQL Server containers

Thu, 2018-05-03 00:57

Docker swarm mode provides an easy way to publish ports for services. Indeed, in this mode all nodes participate in an ingress routing mesh and accept connections to any service port published. Thus, all incoming requests are routed to available nodes hosting a service.

That’s a pretty cool feature but it has some drawbacks. Let’s say that a node goes wrong … In this case the application must retry connecting to the next available node and it will likely imply changing the application connection string or using a DNS redirection. A load balancer outside of the Swarm provides a better way to connect to the containers without having to worry about the cluster nodes from an application perspective. Thanks to the ingress routing all of the published services are available through any of the swarm nodes and the load balancer can be set to use the swarm private IP addresses without a concern of which node is hosting what service.

After taking a look at the existing open source projects / solutions on the market, I quickly reduced the scope of my search to only two: nginx and HAProxy. Each product has pros and cons, but I chose HAProxy because it fitted my tests well. More specifically, I used dockercloud-haproxy because it includes an HAProxy docker image that runs with Docker Swarm mode, including docker-compose. I’m not sure whether this project is dead or alive, referring to the dockerhub page that states the following: Unfortunately, this project is currently put into maintenance mode. Please do not send any PR for new features. We will still fix bugs if there is any. There are other alternatives such as proxy.dockerflow, but once again the dockercloud-haproxy project remained relevant for what I wanted to test.

Here is my docker-compose file:

version: '3.1'
services:
  db: 
    build: .
    image: 127.0.0.1:5000/dbi_linux_sql2017:CU4
    ports: 
      - "1433:1433"
    volumes:
      - /u00/db2:/u00
      - /u01/db2:/u01
      - /u02/db2:/u02
      - /u03/db2:/u03
      - /u98/db2:/u98
    environment:
      - MSSQL_SA_PASSWORD_FILE=/run/secrets/mssql_sa_password
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_USER=dbi
      - MSSQL_USER_PASSWORD_FILE=/run/secrets/mssql_user_password
      - TZ=Europe/Berlin
      - SERVICE_PORTS=1433
    networks:
      - back-tier
    deploy:
      replicas: 1
      placement:
        constraints: [node.role != manager]
      resources:
        limits:
          cpus: '1'
          memory: 3584M
        reservations:
          cpus: '1'
          memory: 3072M
    secrets:
      - source: mssql_sa_password
        target: mssql_sa_password
      - source: mssql_user_password
        target: mssql_user_password

  proxy:
      image: dockercloud/haproxy
      depends_on:
        - db
      environment:
        - BALANCE=leastconn
        - ADDITIONAL_SERVICES=project_dir:db
        - MODE=tcp
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      ports:
        - 80:80
        - 1936:1936
      networks:
        - back-tier
      deploy:
        placement:
          constraints: [node.role == manager]

networks:
  back-tier:
    external:
      name: backend-server
  front-tier:
    external:
      name: frontend-server

secrets: # top level secrets block
  mssql_sa_password:
    external: true
  mssql_user_password:
    external: true
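The back-tier (and front-tier) networks are declared as external, so they must exist before the stack is deployed. A minimal sketch of how they could be created beforehand (assuming the names used above):

$ docker network create --driver overlay backend-server
$ docker network create --driver overlay frontend-server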

 

It includes 2 services:

  • db (my SQL Server container)
  • proxy (my HAProxy server)

I used some Swarm Mode specific environment variables:

  • SERVICE_PORTS=1433 – to expose the port of my db service (mssql default port)
  • BALANCE=leastconn – because it is recommended for long sessions such as LDAP, SQL, TSE, etc.
  • ADDITIONAL_SERVICES=project_dir:db – to identify the db service from the proxy service
  • MODE=tcp – mode of load balancing for HAProxy (TCP in my case)

I deployed the stack as follows:

$ docker stack deploy -c docker-compose-test.yml mssql
Ignoring unsupported options: build

Creating service mssql_db
Creating service mssql_proxy

 

My service state was as follows:

$ docker service ls --filter name=mssql
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
26zdisl9r64y        mssql_db            replicated          1/1                 127.0.0.1:5000/dbi_linux_sql2017:CU4   *:1433->1433/tcp
nhje3081gwr8        mssql_proxy         replicated          1/1                 dockercloud/haproxy:latest             *:80->80/tcp,*:1936->1936/tcp

 

Concerning the mssql_proxy service, we may notice the exposed ports 80 and 1936. The former will be used to connect to HAProxy for the SQL Server connection redirection and the latter is the default port for HAProxy stats information (from a web browser).

[Image: Docker Swarm node status]

Here an example of my HAProxy output:

[Image: HAProxy stats page]

The db service task (mssql_db.1.o9s2xxxxx) – which in fact corresponds to my service task (or container) – is well identified by HAProxy.

So, I tried to connect to my SQL Server DB container through the HAProxy:

C:\Users\clustadmin>sqlcmd -S docker1,80 -Usa -PPassw0rd1 -Q"SELECT @@SERVERNAME"

---------------------------------------------------------------------------------
cf7f9d6036f3

 

It worked like a charm!

The next step consisted in simulating a failure of my docker node DOCKER3 …

[Image: Docker Swarm node status after the DOCKER3 failure]

… and tried again to connect to the SQL Server DB container, which had restarted on the DOCKER2 node. As expected, the connection kept working and was automatically redirected by HAProxy to the next available node. Obviously, in this case the redirection was not transparent for the application: we have to take into account the short period of outage that corresponds to the mssql_db container restart duration. By the way, if you take a look at the HAProxy default_service section, you may notice that the task id has changed (mssql_db.1.zo8yqxxxxxx), meaning the system has created a new task (or container). However, my mssql_db container host name remained the same in my case.
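For reference, one simple way to simulate such a node failure from a manager node (an assumption on my side – the exact method used is not shown here) is to drain it:

$ docker node update --availability drain DOCKER3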

[Image: HAProxy stats page after the failover]

C:\Users\clustadmin>sqlcmd -S docker1,80 -Usa -PPassw0rd1 -Q"SELECT @@SERVERNAME"

---------------------------------------------------------------------------------
cf7f9d6036f3

 

Another interesting point I had to dig further concerns the HAProxy health check itself. In my case I used a TCP-based health check to ensure the service is alive, but it is probably not good enough to check that my SQL Server instance (in other words my application) is working correctly. I spent some time reading articles and documentation and I discovered that HAProxy provides custom health checks for databases such as MySQL, PostgreSQL and Redis. Unfortunately, at the time I’m writing this blog post there is nothing for SQL Server, probably because it is pretty new in this area (maybe I’m wrong… so please feel free to comment). I found some other alternatives such as xinetd, which may be used to trigger a custom routine (by using the sqlcmd command line tool for example), but in my context it makes the game a little bit more complex because I would have to include it in my SQL Server docker image. After thinking a little bit about this issue, I realized we had already implemented such a routine directly inside the SQL Server image itself (HEALTHCHECK section), and it uses precisely the sqlcmd command line tool to check that the SQL Server connection is working correctly.

/opt/mssql-tools/bin/sqlcmd -S localhost,$MSSQL_TCP_PORT -U sa -P ${SA_PASSWORD} -Q "select 1"

 

By the way, the task status can be viewed through the docker ps command as follows:

$docker ps -f name=mssql --format 'table {{.ID}}\t{{.Names}}\t{{.CreatedAt}}\t{{.Status}}'
CONTAINER ID        NAMES	CREATED AT                       STATUS
9bfe04a97617        mssql_db.1.zc8yqp9llmjdcn3df49izyj72	2018-05-01 23:01:39 +0200 CEST   Up 9 hours (healthy)

 

So, the combination of the docker health check routine and the HAProxy TCP health check seems to be a good solution to address different kinds of failures, including docker swarm node failures as well as application failures.

This is just the first step in using HAProxy with SQL Server (redirection capabilities). You may also extend this first scenario with a low cost solution including multiple SQL Server Express containers to address scale-out needs, with reporting queries for instance. I remember Christophe Laporte(b) gave a session about this some years ago. I will try to blog about it soon!

See you!

 

 

 

 

 


How uid mapping works in Docker containers?

Wed, 2018-05-02 09:56

It can be interesting to see how UIDs are mapped between the docker host and docker containers, for example for security reasons.
As a reminder, docker containers are based on two linux kernel features: linux namespaces and cgroups.

Basically, linux namespaces provide isolation for running processes and cgroups allow you to limit and isolate resource usage.
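As an illustration (not part of the original demo), you can see these namespaces from the host by looking at the /proc entry of a containerized process:

# get the host PID of the container's main process, then list its namespaces
$ docker inspect -f '{{.State.Pid}}' <container_id>
$ sudo ls -l /proc/<pid>/ns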

Let’s first run a docker container. Here, we will run a mariadb container in the background with the -d option.

 
[docker@docker1 ~]$ docker run -d -e MYSQL_ROOT_PASSWORD=test123 mariadb
5c4450939d71814070945f86f9712ba78893417e2342fb48aafced8160cd0d15

Now the mariadb container is running. Let’s see what is happening at the host level.

 
[docker@docker1 ~]$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
polkitd   1729  1718  0 08:14 ?        00:00:00 mysqld

On a container level:

 
root@5c4450939d71:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
mysql        1     0  0 06:14 ?        00:00:00 mysqld
root       174     0  0 06:22 pts/0    00:00:00 bash

At the host level the mysqld process is running as polkitd and at the container level the process is running as mysql. Any ideas?
This is because the user id (UID) of the mysql user created in the mariadb container is the same as the UID of the polkitd user on the host.

Let’s see what the UID of the mysql user is in the mariadb container:

 
root@5c4450939d71:~# id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)

The UID of mysql is 999. On the host:

 
[docker@docker1 ~]$ cat /etc/passwd | grep 999
polkitd:x:999:997:User for polkitd:/:/sbin/nologin

We can see that 999 corresponds to the polkitd user id.

How to change this?

Well, this could be a problem because we don’t want to run docker containers with a system user that we don’t know.

One solution could be to create a mysql user with a certain UID on the host:

 
[root@docker1 ~]# useradd -g mysql -u 1099 -m -r mysql

Then, we modify the user id inside the docker image. To do so, we need to rebuild a new mariadb image :-)
Let’s first clone the docker mariadb project

 
[docker@docker1 ~]$ git clone https://github.com/docker-library/mariadb.git
Cloning into 'mariadb'...
remote: Counting objects: 751, done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 751 (delta 9), reused 18 (delta 8), pack-reused 728
Receiving objects: 100% (751/751), 152.38 KiB | 0 bytes/s, done.
Resolving deltas: 100% (338/338), done.

We enter the directory of the mariadb version 10.3

 
[docker@docker1 ~]$ cd mariadb/10.3/

We need to modify the Dockerfile where all instructions are described

 
[docker@docker1 10.3]$ vi Dockerfile

Change this line

 
# vim:set ft=dockerfile:
FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

To this line

 
# vim:set ft=dockerfile:
FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -g 1099 -r mysql && useradd -u 1099 -r -g mysql mysql

We rebuild a new image, let’s call it mariadbcustom

 
[docker@docker1 10.3]$ docker build -t mariadbcustom:latest .
Sending build context to Docker daemon  13.31kB
Step 1/19 : FROM debian:jessie
 ---> 5dd74d62fab8
Step 2/19 : RUN groupadd -g 1099 -r mysql && useradd -u 1099 -r -g mysql mysql
 ---> Using cache
 ---> a285892faa45
Step 3/19 : ENV GOSU_VERSION 1.10
 ---> Using cache
 ---> 069252945f7a
Step 4/19 : RUN set -ex;                fetchDeps='             ca-certificates                 wget    ';      apt-get update;         apt-get install -y --no-install-recommends $fetchDeps;   rm -rf /var/lib/apt/lists/*;            dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')";      wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch";     wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc";             export GNUPGHOME="$(mktemp -d)";        gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4;         gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu;       rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc;             chmod +x /usr/local/bin/gosu;    gosu nobody true;               apt-get purge -y --auto-remove $fetchDeps
 ---> Using cache
 ---> c82d4738b781
Step 5/19 : RUN mkdir /docker-entrypoint-initdb.d
 ---> Using cache
 ---> 08acd0843256
Step 6/19 : RUN apt-get update && apt-get install -y --no-install-recommends            apt-transport-https ca-certificates             pwgen   && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 3ed44a5e3cf5
Step 7/19 : ENV GPG_KEYS        199369E5404BD5FC7D2FE43BCBCB082A1BB943DB        430BDF5C56E7C94E848EE60C1C4CBDCDCD2EFD2A        4D1BB29D63D98E422B2113B19334A25F8507EFA5
 ---> Using cache
 ---> b30af869afbb
Step 8/19 : RUN set -ex;        export GNUPGHOME="$(mktemp -d)";        for key in $GPG_KEYS; do                gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key";   done;   gpg --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg;    rm -r "$GNUPGHOME";     apt-key list
 ---> Using cache
 ---> 7a6e03190271
Step 9/19 : RUN echo "deb https://repo.percona.com/apt jessie main" > /etc/apt/sources.list.d/percona.list      &> /etc/apt/preferences.d/percona
 ---> Using cache
 ---> e55705d326a2
Step 10/19 : ENV MARIADB_MAJOR 10.3
 ---> Using cache
 ---> bb3bc4adcf42
Step 11/19 : ENV MARIADB_VERSION 1:10.3.6+maria~jessie
 ---> Using cache
 ---> 05bb1dc686c8
Step 12/19 : RUN echo "deb http://ftp.osuosl.org/pub/mariadb/repo/$MARIADB_MAJOR/debian jessie main" > /etc/apt/sources.list.d/mariadb.list     &> /etc/apt/preferences.d/mariadb
 ---> Using cache
 ---> 3626c50c8d83
Step 13/19 : RUN {              echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password password 'unused';              echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password_again password 'unused';         } | debconf-set-selections      && apt-get update       && apt-get install -y           "mariadb-server=$MARIADB_VERSION"                percona-xtrabackup-24           socat   && rm -rf /var/lib/apt/lists/*  && sed -ri 's/^user\s/#&/' /etc/mysql/my.cnf /etc/mysql/conf.d/*        && rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld      && chown -R mysql:mysql /var/lib/mysql /var/run/mysqld  && chmod 777 /var/run/mysqld    && find /etc/mysql/ -name '*.cnf' -print0                | xargs -0 grep -lZE '^(bind-address|log)'              | xargs -rt -0 sed -Ei 's/^(bind-address|log)/#&/'      && echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
 ---> Using cache
 ---> 7d3d52632798
Step 14/19 : VOLUME /var/lib/mysql
 ---> Using cache
 ---> 3880f6c65676
Step 15/19 : COPY docker-entrypoint.sh /usr/local/bin/
 ---> Using cache
 ---> 98aa1e3161c4
Step 16/19 : RUN ln -s usr/local/bin/docker-entrypoint.sh / # backwards compat
 ---> Using cache
 ---> a5394275c2b2
Step 17/19 : ENTRYPOINT ["docker-entrypoint.sh"]
 ---> Using cache
 ---> c456c7b34697
Step 18/19 : EXPOSE 3306
 ---> Using cache
 ---> 05068b456523
Step 19/19 : CMD ["mysqld"]
 ---> Using cache
 ---> 5973a27bfd43
Successfully built 5973a27bfd43
Successfully tagged mariadbcustom:latest

Let’s check our image is here

 
[docker@docker1 10.3]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
mariadbcustom       latest              5973a27bfd43        8 days ago          403MB

We run a docker container with our new customized image:

 
[docker@docker1 10.3]$ docker run -d -e MYSQL_ROOT_PASSWORD=test123 mariadbcustom
7e344d87c4bc2a9c62298b9ec97aa4a331d8311cb1f077f47fcb673f1b3d8fa7

Let’s check if the user id was properly initialized to the mysql user

 
[docker@docker1 10.3]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
7e344d87c4bc        mariadbcustom       "docker-entrypoint.s…"   6 minutes ago       Up 6 minutes        3306/tcp            hungry_heisenberg
 
[docker@docker1 10.3]$ docker exec -it hungry_heisenberg /bin/bash
 
root@7e344d87c4bc:~# id mysql
uid=1099(mysql) gid=1099(mysql) groups=1099(mysql)

We also check that the mysqld process runs on the host as the mysql user:

 
[docker@docker1 10.3]$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
mysql     2727  2716  2 14:05 ?        00:00:00 mysqld

On the host, we can see that the mysqld process runs as the mysql user. Why? Because now the user id of the mysql user in the docker container corresponds to the one existing on the host. In this case, the user id is 1099.

Conclusion:

In some use cases you might want to use a specific user to run some processes instead of root or an arbitrary user. However, in order to do that you sometimes need to change the Dockerfile or add a specific user on the host.

 


Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 2

Wed, 2018-05-02 04:25

In this part of the blog post series, we will show how Ansible helps us to configure our cluster and install all the prerequisites needed for Cloudera Manager. Ansible is one of the most popular automation tools today.

Ansible will help us to configure all nodes for a manual installation using Cloudera Manager. Our playbook will contain the following roles:

  • cm_repo: add the same C.M repo into all nodes.
  • os_config: Adjust all OS parameters required for installing a Cloudera cluster.
  • java: Java JDK 1.7.80 installation.
  • cm_agents: Installation of the C.M agent’s packages
  • MariaDB: Installation of a MariaDB. C.M needs an Oracle, MySQL (MariaDB) or PostgreSQL database for Cloudera Manager meta-data storage and Hive meta-store.
  • mysql_connector: Installation of the MySQL connector for connecting to MariaDB. 
  • scm: Install and start the Cloudera Manager Server.

In a Big Data cluster, we split the nodes into roles.

  • Manager: dedicated node for all Cloudera Manager daemons
  • Master: NameNode daemon + Secondary NameNode daemon
  • Workers: DataNode daemons

The first step is to define the Ansible hosts inventory file. Below is my inventory file.

[db_server]
manager ansible_host=<manager_ip> id=6

[cdh_manager]
manager  ansible_host=<manager_ip> id=6

[cdh_master]
master ansible_host=<master_ip>  id=5

[cdh_worker]
worker1 ansible_host=<worker1>  id=2
worker2 ansible_host=<worker2>  id=3
worker3 ansible_host=<worker3>  id=4

[cdh_servers:children]
cdh_worker
cdh_master
cdh_manager


[all:vars]
ansible_user=centos
ansible_ssh_pass=<YOUR_PASSWORD>
ansible_sudo_pass=<YOUR_PASSWORD>
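Before running any role, a quick connectivity check against this inventory can be done with the ping module (assuming the inventory shown above is saved as hosts):

$ ansible all -i hosts -m ping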

We will now define all the variables needed for our roles. Variables are split by role:

Below is an example of the variable definitions for the CDH server instances: cdh_servers.yml

---

db_hostname: "{{ hostvars[groups['db_server'][0]]['inventory_hostname'] }}"
scm_hostname: "{{ hostvars[groups['cdh_manager'][0]]['inventory_hostname'] }}"

cdh_version: 5.14.2
cluster_display_name: cluster_1

# Users and Groups
group:
  - dbi
user:
  - dbi

# Java variables
java_download_url: http://ftp.osuosl.org/pub/funtoo/distfiles/oracle-java/jdk-7u80-linux-x64.tar.gz
java_download_folder: /usr/java
java_name: "{{java_download_folder}}/jdk1.7_80"
java_archive: "{{java_download_folder}}/jdk-7u80-linux-x64.tar.gz"

# Mysql Java connector
mysql_java: mysql-connector-java-5.1.46
mysql_java_download_url: https://dev.mysql.com/get/Downloads/Connector-J/"{{mysql_java_archive}}"
mysql_java_download_folder: /usr/share/mysql-java/
mysql_java_archive: "{{ mysql_java_download_folder }}/{{ mysql_java }}.tar.gz"

mysql_java_jar: /usr/share/java/mysql-connector-java.jar

Similar files will be created for the database server variables (db_server.yml) and the Cloudera Manager server variables (scm_server.yml).

After the variables definition, we can start creating the different roles and their associated tasks.

 Cloudera Manager repo

The goal of this role is to add the same C.M repo in all cluster hosts. We will use a template of the repository file.

cloudera-manager.repo.j2

[cloudera-manager]
# Packages for Cloudera Manager, Version 5, on RedHat or CentOS 7 x86_64
name=Cloudera Manager
baseurl=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/{{cdh_version}}/
gpgkey=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck = 1

cm_repo:

---
- name: Add Cloudera repo
  template:
    src: ../templates/cloudera-manager.repo.j2
    dest: "/etc/yum.repos.d/cloudera-manager{{cdh_version}}.repo"

The Cloudera Manager version has been previously defined in the cdh_servers.yml variable file.

OS Configuration

Some requirements are needed before installing a Cloudera cluster. This role will configure all hosts with Cloudera requirements: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#cmig_topic_4 .

---
- name: Create groups
  group:
    name: "{{item}}"
    state: present
  with_items: "{{group}}"

- name: Create user
  user:
    name: "{{item}}"
    shell: /bin/bash
    uid: 1050
    groups: "{{group}}"
  with_items: "{{user}}"

- name: "Build hosts file"
  lineinfile:
    dest: /etc/hosts
    regexp: '.*{{ item }}$'
    line: "{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{item}}"
    state: present
  when: hostvars[item]['ansible_default_ipv4']['address'] is defined
  with_items: '{{groups.all}}'

- name: Disable transparent huge page - defrag
  shell: echo "never" > /sys/kernel/mm/transparent_hugepage/defrag

- name: Disable transparent huge page - enabled
  shell: echo "never" > /sys/kernel/mm/transparent_hugepage/enabled

- name: VM swappiness - 1
  shell: echo "1" > /proc/sys/vm/swappiness

- name: Set VM swappiness - 2
  sysctl:
    name: vm.swappiness
    value: 1
    state: present

- name: Create /data dir
  file:
    path: /data
    state: directory
    mode: 0775
    owner: dbi
    group: dbi

- name: Create file system on volume
  filesystem:
    fstype: ext4
    dev: /dev/xvdb

- name: Mount volume as /data
  mount:
    name: /data
    src: /dev/xvdb
    fstype: ext4
    opts: defaults,noatime
    state: mounted

- name: install the latest version of ntp
  yum:
    name: ntp
    state: latest

- name: install the latest version of nscd
  yum:
    name: nscd
    state: latest

- name: install wget
  yum:
    name: wget
    state: latest

- name: Disable SELinux
  selinux:
    state: disabled

- name: Reboot for SELinux if needed
  command: /sbin/shutdown -r +1
  async: 0
  poll: 0
    Java installation

    The Java installation is one of the most complex parts of the installation. First, we need to choose a supported version of the JDK. Then we need to be sure that Java is installed properly on all hosts. The installation task is split into the following parts:

    • Create installation directories: /usr/share/java and /usr/java
    • Download Java JDK 1.7.80 which is a supported version for Cloudera Manager
    • Unarchive Java JDK
    • Fix ownership
    • Make Java available for the system with alternatives
    • Clean up installation download folder
    • Add Java home path by exporting $JAVA_HOME variable

    Below are the Java install tasks.

    
    
    ---
    - name: Create directories
      file:
        path: "{{ item }}"
        state: directory
      with_items:
        - "{{ java_download_folder }}"
        - "/usr/share/java"
    
    - name: Creates directory
      file:
        path:  "{{ java_download_folder }}"
        state: directory
    
    
    - name: Download Java
      get_url:
        url: "{{ java_download_url }}"
        dest: "{{ java_archive }}"
        headers: "Cookie:' gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie'"
        validate_certs: no
    
    - name: Unarchive Java archive
      unarchive:
        src: "{{ java_archive }}"
        dest: "{{ java_download_folder }}"
        copy: no
    
    - name: Fix ownership
      file:
        state: directory
        path: "{{ java_name }}"
        owner: root
        group: root
        recurse: yes
    
    - name: Make Java available for system with alternatives
      command: 'alternatives --install "/usr/bin/java" "java" "{{java_name}}/bin/java" 2'
    
    - name: Clean up Java download
      file:
        state: absent
        path: "{{java_archive}}"
    
    - name: Add java home path
      blockinfile:
        dest: /etc/profile
        block: |
          export JAVA_HOME=/usr/java/jdk1.7.0_80
          export PATH=$JAVA_HOME/bin:$PATH
          regexp: "JAVA_HOME"
        state: present
    MariaDB installation

    After installing Java, we can start the installation and configuration of MariaDB database. You can find the entire role for MariaDB installation here.

    MySQL connector

    The MySQL connector installation follows approximately the same steps as the Java installation. All details here.

    Cloudera Manager Server installation

    The last role of this playbook is the installation of the Cloudera Manager server. This role simply installs the Cloudera Manager server package on the cdh_manager host and starts the two following daemons:

    • cloudera-manager-daemons
    • cloudera-manager-server
    ---
    - include_vars: ../../../group_vars/db_server.yml
    
    - name: Install the Cloudera Manager Server Packages
      yum:
        name: "{{ item }}"
        state: installed
      with_items:
        - cloudera-manager-daemons
        - cloudera-manager-server
    
    # - name: Prepare Cloudera Manager Server External Database
    #   command: /usr/share/cmf/schema/scm_prepare_database.sh
    #              -f
    #              --host {{ hostvars[db_hostname]['inventory_hostname'] }}
    #              mysql {{ databases.scm.name }} {{ databases.scm.user }} {{ databases.scm.pass }}
    #   changed_when: False
    
    - name: Start the Cloudera Manager Server
      service:
        name: "{{ item }}"
        state: restarted
        enabled: yes
      notify:
        - wait cloudera-scm-server
      with_items:
        - cloudera-scm-server
        - cloudera-scm-agent
    
    # Trigger handler to wait for SCM to startup
    - meta: flush_handlers

     

    site.yml

    After creating all roles, we need to define our site.yml in order to execute all tasks in the desired order.

    ---
    # Cloudera playbook
    
    - name: Configure Cloudera Manager Repository
      become: ansible_become
      hosts: cdh_servers
      roles:
        - cm_repo
      tags: cm_repo
    
    - name: Configure Epel repository
      become: ansible_become
      hosts: cdh_servers
      roles:
        - epel
      tags: epel_repo
    
    - name: OS Configuration
      become: ansible_become
      hosts: cdh_servers
      roles:
          - os_config
      tags: os_config
    
    - name: Install Java JDK 7
      become: ansible_become
      hosts: cdh_servers
      roles:
        - java
      tags: java
    
    - name: Install MySQL Java Connector
      become: ansible_become
      hosts: cdh_servers
      roles:
        - mysql_connector
      tags: mysql_java_connector
    
    - name: Install MariaDB and create databases
      hosts: db_server
      roles:
        - mariadb
      tags: mysql
    
    # ##############
    - name: Install Cloudera Manager Agents
      hosts: cdh_servers
      roles:
        - cm_agents
      tags: cm_agents
    
    - name: Install Cloudera Manager Server
      hosts: cdh_manager
      roles:
        - scm
      tags: cluster_template
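
    The playbook can then be launched against the same hosts inventory file (the file names are assumptions based on the snippets above):

    $ ansible-playbook -i hosts site.yml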

     

    When all steps have finished, you can access the Cloudera Manager web interface at the following URL:

    http://<cdh_manager_ip>:7180

    Be sure your network configuration allows access to the Cloudera Manager web UI through the default port 7180.

    [Image: Cloudera Manager web interface]

    The entire project with all files is available here.

     


    ADWC: start/stop with PSM Command Line Interface

    Tue, 2018-05-01 23:10

    In the previous post, I explained how to create an Autonomous Data Warehouse with PSM (PaaS Service Manager Command Line Interface). The most common operation you want to do with it is starting and stopping the service. This is the best way to save credits for hourly billed services. And PSM is the easiest: run from everywhere (it is Python 3) and no need to provide credentials each time. In the previous post, I explained how to setup PSM for the ADWC service.

    Unfortunately, for starting and stopping the instance you may realize that:

    • It is not in the documentation
    • Syntax exists but doesn’t work


    The documentation is there, but no mention of start-service, stop-service nor restart-service: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/adwc-commands.html

    The online help has start/stop/restart-service:

    $ psm adwc h
     
    DESCRIPTION
    Oracle Autonomous Data Warehouse Cloud
     
    SYNOPSIS
    psm ADWC [parameters]  
    AVAILABLE COMMANDS
    o services
    List all Autonomous Data Warehouse Cloud instances
    o service
    List Autonomous Data Warehouse Cloud instance
    o create-service
    Provision Autonomous Data Warehouse
    o delete-service
    Unprovision Autonomous Data Warehouse
    o scale-service
    Scale Autonomous Data Warehouse
    o start-service
    This operation will set the operational state of service as started
    o stop-service
    This operation will set the operational state of service as stopped
    o restart-service
    This operation will set the operational state of service as after...
    o view-backups
    List all backups of Autonomous Data Warehouse Cloud instance
    o view-backup
    List a backup of Autonomous Data Warehouse Cloud instance
    o backup
    Backup Autonomous Data Warehouse
    o view-restores
    List all restore operations for Autonomous Data Warehouse Cloud instance
    o view-restore
    List a specified restore operation for Autonomous Data Warehouse Cloud...
    o restore
    Restore Autonomous Data Warehouse
    o check-health
    Health Check operation
    o operation-status
    View status of Autonomous Data Warehouse Cloud instance operation
    o activities
    View activities for Autonomous Data Warehouse Cloud instance
    o help
    Show help

    All 3 take the same parameters: the service name, the REST API output format, and a boolean to wait for the completion of the job:

    $ psm adwc start-service h
     
    DESCRIPTION
    This operation will set the operational state of service as started
     
    SYNOPSIS
    psm ADWC start-service [parameters] -s, --service-name
    [-of, --output-format ] [-wc, --wait-until-complete ]  
    AVAILABLE PARAMETERS
    -s, --service-name (string)
    Name of the Autonomous Data Warehouse Cloud instance
     
    -of, --output-format (string)
    Desired output format. Valid values are [short, json, html]  
    -wc, --wait-until-complete (boolean)
    Wait until the command is complete. Valid values are [true, false]. Default is
    'false'.
     
    EXAMPLES
    psm ADWC start-service -s ExampleInstance

    But…

    So, the online help shows it and I try it:

    $ psm adwc start-service --service-name ADWC --output-format short -wc true
     
    Error: Not Found. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Draft//EN">
    <HTML>
    <HEAD>
    <TITLE>Error 404--Not Found</TITLE>
    </HEAD>
    <BODY bgcolor="white">
    <FONT FACE=Helvetica><BR CLEAR=all>
    <TABLE border=0 cellspacing=5><TR><TD><BR CLEAR=all>
    <FONT FACE="Helvetica" COLOR="black" SIZE="3"><H2>Error 404--Not Found</H2>
    </FONT></TD></TR>
    </TABLE>
    <TABLE border=0 width=100% cellpadding=10><TR><TD VALIGN=top WIDTH=100% BGCOLOR=white><FONT FACE="Courier New"><FONT FACE="Helvetica" SIZE="3"><H3>From RFC 2068 <i>Hypertext Transfer Protocol -- HTTP/1.1</i>:</H3>
    </FONT><FONT FACE="Helvetica" SIZE="3"><H4>10.4.5 404 Not Found</H4>
    </FONT><P><FONT FACE="Courier New">The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.</p><p>If the server does not wish to make this information available to the client, the status code 403 (Forbidden) can be used instead. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address.</FONT></P>
    </FONT></TD></TR>
    </TABLE>
     
    </BODY>
    </HTML>

    Unfortunately, this doesn’t work. Is it that those commands are not supported yet, which would be the reason why we don’t find them in the documentation? Or maybe the opposite: they do not work and, rather than fix them, they were removed from the documentation. One thing I’m 100% sure of: start-service and stop-service are the most useful commands for a CLI giving easy access to an hourly billed service, and I want them to work. And it is Python, JSON and HTML – nothing hidden there.

    Hack Fix

    The error message is about no matching URL. PSM metadata is stored in your user directory (~/.psm/data on Linux) with one JSON file for each Oracle platform service. Having a look at the URLs in ADWC.json the bug is obvious:

    $ jq . ~/.psm/data/ADWC.json | grep uri
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/scale",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/start",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/stop",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restart",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups/{backupId}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups/{jobId}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/healthcheck",
    "uri": "/paas/api/v1.1/activitylog/{identityDomainId}/job/{jobId}",
    "uri": "/paas/api/v1.1/activitylog/{identityDomainId}/filter",

    Let’s remove this ‘/core’ from the uri:

    sed -ie 's/\/core//g' ~/.psm/data/ADWC.json
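
    To double-check the patch, you can list the URIs again; the grep below should return nothing once '/core' is gone:

    $ jq . ~/.psm/data/ADWC.json | grep uri | grep core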

    And run again the start-service:

    $ psm adwc start-service -s ADWC -wc true
     
    Message: Job submitted successfully for start of service/system
    Job ID: 25617877
    Waiting for the job to complete... (it cannot be cancelled)

    Here it is.

    As my laptop is on Windows where I use Cygwin, I have set up two icons with:
    C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc stop-service -s ADWC -wc true
    C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc start-service -s ADWC -wc true

    Start/Stop time

    [Image: start/stop timing graph]
    This ADWC service is a PDBaaS. Starting and stopping is as easy as opening and closing a pluggable database. Here are the timestamps after starting and stopping in a loop (the graph covers 24 loops).

    The startup time is around 30 seconds. The stop time is about a minute. Really easy to use.


    $ psm adwc activities -s ADWC -l 50
    Operation Type Status Start Time End Time
    START_SERVICE SUCCEED 2018-04-29T14:30:36.888+0000 2018-04-29T14:31:21.563+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:27:26.551+0000 2018-04-29T14:27:35.610+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:25:22.172+0000 2018-04-29T14:25:51.586+0000
    START_SERVICE SUCCEED 2018-04-29T14:20:47.957+0000 2018-04-29T14:21:38.131+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:08:09.409+0000 2018-04-29T14:08:48.125+0000
    START_SERVICE SUCCEED 2018-04-29T14:07:24.892+0000 2018-04-29T14:08:08.244+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:04:57.566+0000 2018-04-29T14:05:27.458+0000
    START_SERVICE SUCCEED 2018-04-29T14:03:51.035+0000 2018-04-29T14:04:34.108+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:03:17.701+0000 2018-04-29T14:03:47.262+0000
    START_SERVICE SUCCEED 2018-04-29T14:02:00.944+0000 2018-04-29T14:02:50.978+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:00:56.990+0000 2018-04-29T14:01:29.567+0000
    START_SERVICE SUCCEED 2018-04-29T13:59:52.898+0000 2018-04-29T14:00:39.373+0000
    STOP_SERVICE SUCCEED 2018-04-29T13:59:19.380+0000 2018-04-29T13:59:49.011+0000
    START_SERVICE SUCCEED 2018-04-29T13:58:15.594+0000 2018-04-29T13:58:58.937+0000
    STOP_SERVICE SUCCEED 2018-04-29T13:57:42.355+0000 2018-04-29T13:58:11.845+0000
    ...

    An easy command line that does not ask for a password interactively, can wait for completion, and runs fast: this gives a great user experience for this service. The only problem is when you play with several cloud accounts. I’ll show an idea in the next post.

     


    ADWC: Creation of Autonomous Database Cloud service

    Mon, 2018-04-30 23:00

    You want to try the Autonomous Database Cloud Service? That’s easy. Here is a Step-by-Step.

    Cloud Credits

    First, you need Cloud Credits. You may have bought them (any recent negotiation with Oracle Sales, even for on-premises, involves some Cloud Credits). You can have a free trial with 300$ of Cloud Credits available for 1 month. To get another month, you need a different e-mail address and a different credit card number (not charged). It is quite easy to have different e-mail addresses, and your bank may provide virtual credit cards where the number changes each time. Or you may have the 5000$ of Cloud Credits available for 1 year from the Education program. I got those thanks to the ACE Director program.

    Update 01-MAY-2018 – There are also the 500$ credits from the “white glove” program – you can ask your Sales representative.

    In all cases you will be able to test the service without spending too much credits because:

    • This service is not expensive ($2.5 per OCPU per Hour in Pay As You Go)
    • It is very easy to start and stop the service, and then pay only for the hours where you connect
    • If you choose ‘Bring Your Own License’ at creation time, the price per OCPU per hour is only $0.48 (but be sure that your licenses cover this usage – see https://cloud.oracle.com/en_US/datawarehouse/pricing)

    • And finally, during the trial promotion, the credits are consumed at discounted rate
      (after 9 hours of usage, I got less than 1$ used)
    OCI Account

    The first generation of the Oracle Cloud is now called ‘OCI Classic’, and you recognize it when connecting because the Sign-In page mentions ‘Traditional Cloud Account’. You cannot access ADWC with this account.

    You need access to OCI (Oracle Cloud Infrastructure – the version 2 of the Oracle Cloud).
    If, when signing in, you are welcomed by this guy looking at his phone, you are at the right place. I’m always curious about how they choose an image for a page used every day and for several years. The oracle.com login page is easy with the headquarters blurry shot. For the OCI account, they chose the “Man On Smart Phone – Young Business Man Texting In Airport – Casual Urban Professional Businessman Using Smartphone App Smiling Happy Inside Office Building Or Airport” from the Adobe image stock.

    Ashburn

    For the moment, the ADWC service is available only in the Ashburn Cloud Center, not yet in Europe (but planned for Frankfurt). You can see the regions here: https://cloud.oracle.com/data-regions. Then, when you receive your access to the Oracle Cloud Services, choose the Ashburn Data Center.

    Update 01-MAY-2018 – It seems that the service is available in Frankfurt.

    Create Instance

    The instance creation is easy and fast. It will create a Pluggable Database (PDB) in the Oracle Cloud CDB. You provide a name and an ADMIN password (be careful, the rule is at least 12 characters), which is the password you’ll use to connect as the ADMIN user. You can change it later and add new users. The Shape is different from the DBaaS here: you define the number of threads you want to use (it actually sets CPU_COUNT for the PDB) and the size of the PDB datafiles. You can change both later with Scale Up/Down.

    PaaS Service Manager Command Line Interface

    You can also create an ADWC service from the command line. I’ll show how to install and use PSM (the PaaS Service Manager Command Line Interface). Rodrigo Jorge has a nice description for DBaaS on his blog.

    So, you download PSM:

    curl -X GET -u my.cloud.account@me.com:MyP@ssw0rd -H X-ID-TENANT-NAME:idcs-31bbd63c3cb9466cb8a96f627b6b6116 https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/idcs-31bbd63c3cb9466cb8a96f627b6b6116/client -o psmcli.zip
     
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
    100 86945 0 86945 0 0 16806 0 --:--:-- 0:00:05 --:--:-- 23820


    The user:password are those you use in the account Sign-In.

    The ‘Tenant Name’, you get it from the URL of this Man On Smart Phone Sign-in web page. You will see it also mentioned later as ‘Identity domain’ (like in OCI-Classic). If you have a doubt, create the service from the web console, click on it and you will see the Tenant Name.

    So, you have a zip file: do not unzip it. It is a Python 3 module and you install it with ‘pip3’. You can do that on any OS.

    I have the strange idea to run my laptop on Windows with Cygwin for command line stuff. Here are the python3 packages I have here.

    Here is the installation of PSM:

    pip3 install -U psmcli.zip
     
    Processing ./psmcli.zip
    ...

    And now the nice thing is that you configure your credentials once with ‘psm setup’. You provide the user, password and tenant name (which is called ‘identity domain’ here):

    $ psm setup
     
    Username: my.cloud.account@me.com
    Password: MyP@ssw0rd
    Retype Password: MyP@ssw0rd
    Identity domain: idcs-31bbd63c3cb9466cb8a96f627b6b6116
    Region [us]:
    Output format [short]:
    Use OAuth? [n]:
    ----------------------------------------------------
    'psm setup' was successful. Available services are:
     
    o ADWC : Oracle Autonomous Data Warehouse Cloud
    o ADWCP : Oracle Autonomous Data Warehouse Cloud Platform
    o ANALYTICS : Oracle Analytics Cloud
    o APICS : Oracle API Platform Cloud Service
    o APICatalog : Oracle API Catalog Service
    ...

    ADWC is in the list. You are ready to manage ADWC instances, such as creating one:


    $ psm adwc create-service -c - <<<' {
    "serviceName": "ADWCx",
    "adminPassword": "Ach1z00dAch1",
    "numCpus": "1",
    "storageCapacity": "1",
    "serviceLevel": "PAAS",
    "serviceVersion": "18.1.1",
    "managedSystemType": "oracle",
    "enableNotification": true,
    "notificationEmail": "notifocations@me.com",
    "isBYOL": true
    } '
     
    Message: Submitted job to create service [ADWCx] in domain [idcs-31bbd63c3cb9466cb8a96f627b6b6116].
    Job ID: 25509908

    We can check the status of the job:
    $ psm adwc activities --service-name ADWC
    Operation Type Status Start Time End Time
    CREATE_SERVICE RUNNING 2018-04-28T19:57:31.056+0000 N/A

    And a few minutes later the service is there:
    $ psm adwc activities --service-name ADWC
    Operation Type Status Start Time End Time
    CREATE_SERVICE SUCCEED 2018-04-28T19:57:31.056+0000 2018-04-28T19:59:51.900+0000

    We will see how to connect in a future post. Very easy from SQL Developer or SQLcl.

    You can delete the service when you don’t need it anymore:

    psm adwc delete-service --service-name ADWC

    To save credits, you want an easy way to stop and start the service. That’s for the next post, as PSM requires a little hack there.

     


    Oracle 18c clone PDB and Transparent Data Encryption

    Mon, 2018-04-30 10:02

    Let’s do some tests with the Oracle 18c new feature for creating PDB clones with DBCA. Unfortunately, this feature does not work when you have TDE enabled.

    As a reminder, with Oracle 12.2 we had the possibility to create PDBs with DBCA only from the PDB seed or from unplugged PDBs:

    [Screenshot]

    Now in version 18c we can create PDBs from existing PDBs as follows (for this test TDE is disabled):

    [Screenshot]

    You can choose the Pluggable Database you want to clone.

    [Screenshot]

    You select the name of your cloned PDB, and in less than 1 minute your original PDB is cloned:

    oracle@localhost:/u00/app/oracle/oradata/PSI/ [DB18] sq
    SQL*Plus: Release 18.0.0.0.0 Production on Mon Apr 30 12:55:54 2018
    Version 18.1.0.0.0
    Copyright (c) 1982, 2017, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    
    SQL> show pdbs
       CON_ID CON_NAME         OPEN MODE  RESTRICTED
    
          2   PDB$SEED         READ ONLY      NO
    
          3   PDB1             READ WRITE     NO
    
          5   PDBNEW           READ WRITE     NO

     Let’s make some tests with PDBs and TDE.

    In Oracle 18c, it is no longer mandatory to configure the sqlnet.ora file; we only have to define wallet_root and tde_configuration as follows:

    SQL> alter system set wallet_root='/u00/app/oracle/admin/DB18/wallet_cdb' scope=spfile;
    SQL> startup force;
    ORACLE instance started.
     Total System Global Area 1677717664 bytes
    Fixed Size          8896672 bytes
    Variable Size         520093696 bytes
    Database Buffers     1140850688 bytes
    Redo Buffers            7876608 bytes
    Database mounted.
    Database opened.
    
    SQL> alter system set tde_configuration="keystore_configuration=file" scope=both;
    System altered.

    We create a keystore in the CDB:

    SQL> administer key management create keystore identified by manager_cdb;
    keystore altered.

    The wallet file is created:

    SQL> !ls /u00/app/oracle/admin/DB18/wallet_cdb/tde
    ewallet.p12

    We open the keystore for the CDB and the PDBs:

    SQL> administer key management set keystore open identified by manager_cdb 
    container=all;
    keystore altered.

    We check in the pluggable database:

    SQL> alter session set container=pdb1;
    Session altered.
    SQL> select  status from v$encryption_wallet;
    STATUS
    OPEN_NO_MASTER_KEY

    But we receive open_no_master_key …

    We return to the CDB:

    SQL> connect / as sysdba
    Connected.
    SQL> administer key management set key identified by manager_cdb with backup;
    keystore altered.

    In the PDB:

    SQL> alter session set container=pdb1;
     
    Session altered.
     
    SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY manager_cdb with backup;
     
    keystore altered.
     
    SQL> select status from v$encryption_wallet;
     
    STATUS
     
    OPEN

    The keystore is now open in the PDB and we can encrypt data. With Oracle 18c there are two modes: united (the CDB owns the keystore for itself and the PDBs) or isolated (the PDB has its own keystore). In our case we are in united mode; let’s see if we can clone the PDB.

    SQL> SELECT KEYSTORE_MODE FROM V$ENCRYPTION_WALLET;
    
    KEYSTORE
    
    UNITED
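
    Since the keystore is open in united mode, a quick way to verify that TDE really works in the PDB (my own check, not shown in the original post; the datafile path is an assumption to adjust to your environment) is to create an encrypted tablespace:

    SQL> create tablespace enc_ts datafile '/u00/app/oracle/oradata/PSI/pdb1_enc_ts01.dbf' size 100M encryption using 'AES256' default storage (encrypt);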
    
    

    We do the same operations as previously, but the assistant is asking us for the keystore password:

    [Screenshot]

    By looking at Oracle error messages, we can find a similar error on PDB switchover in Metalink Note 2378945.1: “We only support this with auto login wallet”.

    So I decided to implement auto login in my configuration and try to clone my PDB:

    SQL> administer key management create local auto_login keystore from keystore '/u00/app/oracle/admin/DB18/wallet_cdb/tde' identified by manager_cdb;
    
    SQL> startup force;
    ORACLE instance started.
    
    Total System Global Area 1677717664 bytes
    Fixed Size		    8896672 bytes
    Variable Size		  520093696 bytes
    Database Buffers	 1140850688 bytes
    Redo Buffers		    7876608 bytes
    Database mounted.
    Database opened.

    My PDB TDE configuration is in auto login mode:

    SQL> select wrl_type,status, wallet_type from v$encryption_wallet;
    
    WRL_TYPE	     STATUS			    WALLET_TYPE
    FILE		     OPEN			    LOCAL_AUTOLOGIN

    But even if TDE is implemented in auto login mode, the PDB clone operation fails with the same ORA-46697 error message.

    We also encounter this bad behavior with the 18c new feature for PDB snapshots, which allows creating PDB snapshots manually or automatically:

    SQL> create pluggable database snap_pdb1 from pdb1
      2  file_name_convert = ('snap_pdb1', 'pdb1')
      3* snapshot mode every 60 minutes
    create pluggable database snap_pdb1 from pdb1
    *
    ERROR at line 1:
    ORA-46697: Keystore password required.
    
    

    Cloning PDBs is a very useful tool for mass deployments to development teams; it would be nice to make it work with TDE enabled.

    
    
     


    Managing SQL Server sa credentials with Docker secrets on Swarm

    Fri, 2018-04-27 05:39

    A couple of weeks ago, I was working on a MSSQL Server docker image in the context of Hidora, a Swiss cloud provider based on Jelastic for Docker-based applications.

    When writing my jps manifest file I was pleasantly surprised by the section concerning the MSSQL Server credentials. We are able to define global variables for the SQL Server sa password with the ${fn.password} function, as shown below:

    globals:
      sa_password: ${fn.password}
      user_password: ${fn.password}
    
    nodes:
      - nodeGroup: cp
        count: 1
        cloudlets: 30
        displayName: mssqlserver-linux-2017-cu4 dbi services
        env:
          MSSQL_SA_PASSWORD: ${globals.sa_password}
          MSSQL_PASSWORD: ${globals.user_password}
          MSSQL_USER: ${globals.user_login} 
          TZ: Europe/Berlin
        image: dbi_linux_sql2017_cu4
        registryUrl: node21755-env-6328816:5000

     

In fact, Jelastic provides this interesting function that generates a random password during the creation of the container, preventing security information from appearing in clear text in the manifest. A very good idea!

But let's go back to a traditional Docker infrastructure. As developers, we usually put sensitive information in the docker-compose deployment file, and in most cases that's not a big deal. However, if we start containers based on our image in a Docker infrastructure, we are still able to read the SQL Server sa password by running the docker inspect command.

    $docker inspect 23107bf057ef | jq .[].Config.Env
    [
      "ACCEPT_EULA=Y",
      "SA_PASSWORD=SuperSecretPassword1234",
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]

     

    blog 131 - 0 - docker swarm secret

This is definitely something we want to avoid in production. Some time ago, I wrote about Docker Swarm, which addresses production-oriented workloads including scalability, high availability and more. In production the game changes a lot because we have to manage sensitive data such as login credentials, and fortunately we may rely on a Docker Swarm feature called Docker secrets.

As stated in the Docker documentation, when we create and add a secret to a swarm, Docker sends the secret to the swarm manager over a mutual TLS connection. The secret is stored in the Raft log, which is encrypted. The entire Raft log is replicated across the other managers, ensuring the same high availability guarantees for secrets as for the rest of the swarm management data.

This feature may address the security concern with MSSQL Server containers and sa credentials in production workloads. Indeed, as database administrators, we don't want to hand out the sa password to application users, and we will go further by providing a SQL Server login with scope-limited permissions, without any password in clear text in the Docker deployment file.
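To give an idea of what such a scope-limited login could look like, here is a T-SQL sketch (login name, database and role memberships are purely illustrative, not taken from our image):

-- create a login and map it to a user limited to one application database
USE [master];
CREATE LOGIN app_login WITH PASSWORD = 'App_Passw0rd!';
USE [AppDb];
CREATE USER app_user FOR LOGIN app_login;
-- grant only the permissions the application really needs
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;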

Thus, we modified our initial MSSQL Docker image to support Docker secrets on Docker Swarm. In this blog, let's focus on the SQL Server sa password. First, we have to create a Docker secret for the SQL Server sa password:

$echo "Passw0rd1" | docker secret create mssql_sa_password -
    $ docker secret ls
    ID                          NAME                  DRIVER              CREATED             UPDATED
    fpqykdgr4ytcher1j3sb5tgfv   mssql_sa_password                         35 minutes ago      35 minutes ago

     

    The mssql_sa_password secret is then replicated to the other nodes by the Docker Swarm using TLS as explained above.

The second step consists in modifying the Dockerfile as well as the docker-compose file used for deployment. The former references two important scripts, entrypoint.sh and healthcheck.sh, where we had to add the code that extracts the Docker secret information.

The Dockerfile (only the interesting part is shown here):

    # Entry point # 
    ENTRYPOINT ["./entrypoint.sh"]
    
    # Tail the setup logs to trap the process
    CMD ["tail -f /dev/null"]
    
    # Healthcheck routine for mssql instance
    HEALTHCHECK --interval=15s CMD [ "./healthcheck.sh" ]

     

The entrypoint.sh bash script takes care of starting up the sqlservr process, and healthcheck.sh is a custom health check routine based on the sqlcmd command line tool (meaning this approach requires the mssql-tools package to be already installed in your image).

    /opt/mssql-tools/bin/sqlcmd -S localhost,$MSSQL_TCP_PORT -U sa -P $SA_PASSWORD -Q "select 1" && grep -q "MSSQL CONFIG COMPLETED" ./config.log

     

    The code to leverage Docker secrets was as follows:

    if [  ! -z ${MSSQL_SA_PASSWORD_FILE} ];
    then 
        SA_PASSWORD=$(cat $MSSQL_SA_PASSWORD_FILE)
    else
        SA_PASSWORD=${MSSQL_SA_PASSWORD}
    fi

     

We added a new $MSSQL_SA_PASSWORD_FILE variable that takes priority over $MSSQL_SA_PASSWORD if it exists. The $MSSQL_SA_PASSWORD_FILE variable points to the path where the secret is made available inside the Docker container by design: /run/secrets/<secret_name>. We tried to follow a standard convention that consists in adding the _FILE suffix to the existing SQL Server sa variable (MSSQL_SA_PASSWORD) for convenience.

Finally, we modified the docker-compose deployment file, which contains all the information needed to reference the secret, so we may switch easily between the traditional approach with the password in clear text in the deployment file and the more secure way of managing sensitive data on Docker Swarm.

    blog 131 - docker secret deploy file_
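Since the deployment file is only shown as a screenshot above, here is a minimal sketch of what a version 3.1+ stack file referencing the secret could look like (service name, image reference and port mapping are illustrative):

version: "3.1"
services:
  mssql:
    image: dbi_linux_sql2017_cu4
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_SA_PASSWORD_FILE=/run/secrets/mssql_sa_password
    secrets:
      - mssql_sa_password
    ports:
      - "1433:1433"
secrets:
  mssql_sa_password:
    external: true

The external: true flag tells Docker that the secret was created beforehand with docker secret create, as done above; the stack can then be deployed with docker stack deploy.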

After applying the code update, the docker inspect command doesn't reveal the password anymore.

    $docker inspect 62c42040174a | jq .[].Config.Env
    [
      "ACCEPT_EULA=Y",
      "MSSQL_PID=Developer",
      "MSSQL_SA_PASSWORD_FILE=/run/secrets/mssql_sa_password",
    …
    ]

     

As you probably know, Docker EE 2.0 is now able to manage container applications both on Docker Swarm and Kubernetes. I'm looking forward to writing about both environments in the context of MSSQL Server databases and managing sensitive data :)

    See you!

     

     

     

    Cet article Managing SQL Server sa credentials with Docker secrets on Swarm est apparu en premier sur Blog dbi services.

    IOUG Collaborate 18

    Thu, 2018-04-26 18:12

    IMG_5325
    The IOUG Collaborate 18 is now done.
    I presented 2 sessions there:

    From Transportable Tablespaces to Pluggable Databases

The introduction comes from a 5-minute talk at Oracle Open World 2016 in the 'EOUC Database ACES Share Their Favorite Database Things' session, on the history of making tablespaces self-contained (with relative file numbers in 8.0 and locally managed tablespaces in 8.1). I've added a demo on a feature that is not well known – using RMAN to transport tablespaces without the need to have the source in read-only, available since 10g. And I demoed all the PDB movement features in 12cR1, 12cR2 and 18c: remote clone, refreshable clones, PDB switchover, online relocation,…
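For those who have never seen that RMAN feature, the command behind the demo looks roughly like this (a sketch; tablespace name and destinations are illustrative):

RMAN> transport tablespace USERS
      tablespace destination '/u01/tts'
      auxiliary destination '/u01/aux';

RMAN restores and recovers the tablespace into an auxiliary instance and generates the Data Pump export from there, which is why the source tablespace can stay online and read-write.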

A full article on the topic is available in Oracle Scene: http://viewer.zmags.com/publication/07098028#/07098028/8 and feel free to give feedback here if you are using those features. Lots of interesting comments came after the session.

    IMG_5324

    12.2 Multitenant New Security Features to Clarify DevOps and DBA role separation

    This session is basically a demo of lockdown profiles and resource manager settings at PDB level. With an introduction on DevOps because the goal of those features is to be able to lower the roundtrips between Dev and Ops by giving nearly full privileges on the PDB. Those features were developed by Oracle for their own managed cloud services: Exadata Express Cloud Service and Autonomous Data Warehouse. You are the administrator of your PDB there, but locked down to what cannot break the CDB, and limited to the resources you pay for.
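To give a flavour of what was demoed, here is a minimal lockdown profile sketch (the profile name and the disabled options are illustrative, not the exact ones from the session):

SQL> -- in CDB$ROOT
SQL> create lockdown profile APP_DEV;
SQL> alter lockdown profile APP_DEV disable statement=('ALTER SYSTEM') clause=('SET') option=('OPTIMIZER_MODE');
SQL> alter lockdown profile APP_DEV disable feature=('NETWORK_ACCESS');
SQL> -- then, in the PDB (or in CDB$ROOT to apply it to all PDBs)
SQL> alter system set pdb_lockdown=APP_DEV;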

I'll give this session next month in Dusseldorf at DOAG Datenbank: https://programm.doag.org/datenbank/2018/#/scheduledEvent/558645, so you still have the occasion to see how this Autonomous Data Warehouse Cloud service works from the command line.

This is clearly an alternative to having Oracle Database on Docker, where containers have a clear separation between the user data and metadata (in the PDB) and the software distribution and data (in ORACLE_HOME and in CDB$ROOT). But experience shows a slow adoption of multitenant, and developers are asking for Docker containers. The separation is not so easy though: it is clear that the user data must be in an external volume, while the software (the Oracle Home, or at least the minimal part of it required to run the database and options) belongs in the image. However, a big part of the software (for example the dbms_… packages) is also in the database, in CDB$ROOT. Here again, feel free to comment.

     

    Cet article IOUG Collaborate 18 est apparu en premier sur Blog dbi services.

    Deploying PostgreSQL in MiniShift/OpenShift

    Thu, 2018-04-26 00:05

The last post quickly outlined how you can set up MiniShift for playing around with OpenShift on your workstation. In this post we'll set up PostgreSQL in MiniShift using the default PostgreSQL image that already comes with MiniShift.

If MiniShift is currently stopped, start it up:

    dwe@box:~$ minishift start
    dwe@box:~$ eval $(minishift oc-env)
    dwe@box:~$ which oc
    /home/dwe/.minishift/cache/oc/v3.9.0/linux/oc
    

    With OpenShift everything is organized into projects and the first thing you need to do is to create a project. You can either do that using the command line or the web interface. Doing it using the command line is quite simple and fast. The first step is to login to OpenShift:

    dwe@box:~$ oc login
    Authentication required for https://192.168.99.100:8443 (openshift)
    Username: system
    Password: 
    Login successful.
    
    You don't have any projects. You can try to create a new project, by running
    
    oc new-project <projectname>
    

    The output of the login command already tells what you need to do to create a new project:

    dwe@box:~$ oc new-project postgres
    Now using project "postgres" on server "https://192.168.99.100:8443".
    
    You can add applications to this project with the 'new-app' command. For example, try:
    
        oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
    
    to build a new example application in Ruby.
    

    Doing the same with the web interface of course needs more time:

    Selection_043
    Selection_044
Selection_045
Selection_046

    To get a list of available images from the command line:

    dwe@box:~$ oc get imagestreams --namespace openshift
    NAME             DOCKER REPO                                TAGS                           UPDATED
    dotnet           172.30.1.1:5000/openshift/dotnet           2.0,latest                     4 hours ago
    dotnet-runtime   172.30.1.1:5000/openshift/dotnet-runtime   2.0,latest                     4 hours ago
    httpd            172.30.1.1:5000/openshift/httpd            2.4,latest                     4 hours ago
    jenkins          172.30.1.1:5000/openshift/jenkins          1,2,latest                     4 hours ago
    mariadb          172.30.1.1:5000/openshift/mariadb          10.1,10.2,latest               4 hours ago
    mongodb          172.30.1.1:5000/openshift/mongodb          2.4,2.6,3.2 + 2 more...        4 hours ago
    mysql            172.30.1.1:5000/openshift/mysql            5.5,5.6,5.7 + 1 more...        4 hours ago
    nginx            172.30.1.1:5000/openshift/nginx            1.10,1.12,1.8 + 1 more...      4 hours ago
    nodejs           172.30.1.1:5000/openshift/nodejs           8,latest,0.10 + 2 more...      4 hours ago
    perl             172.30.1.1:5000/openshift/perl             5.24,latest,5.16 + 1 more...   4 hours ago
    php              172.30.1.1:5000/openshift/php              5.5,5.6,7.0 + 2 more...        4 hours ago
    postgresql       172.30.1.1:5000/openshift/postgresql       9.2,9.4,9.5 + 2 more...        4 hours ago
    python           172.30.1.1:5000/openshift/python           3.3,3.4,3.5 + 3 more...        4 hours ago
    redis            172.30.1.1:5000/openshift/redis            3.2,latest                     4 hours ago
    ruby             172.30.1.1:5000/openshift/ruby             2.0,2.2,2.3 + 2 more...        4 hours ago
    wildfly          172.30.1.1:5000/openshift/wildfly          10.1,8.1,9.0 + 2 more...       4 hours ago
    

    Quite a few to choose from but we of course are interested in the postgresql one.

    dwe@box:~$ oc new-app -e POSTGRESQL_USER=blubb -e POSTGRESQL_PASSWORD=blubb -e POSTGRESQL_DATABASE=blubb postgresql
    

    Checking the logs is always a good idea. For this we need to know the pod:

    dwe@box:~$ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    postgresql-1-8n85h   1/1       Running   0          5m
    

    Now that we know the pod we can ask for the logs:

    dwe@box:~$ oc logs postgresql-1-8n85h
    The files belonging to this database system will be owned by user "postgres".
    This user must also own the server process.
    
    The database cluster will be initialized with locale "en_US.utf8".
    The default database encoding has accordingly been set to "UTF8".
    The default text search configuration will be set to "english".
    
    Data page checksums are disabled.
    
    fixing permissions on existing directory /var/lib/pgsql/data/userdata ... ok
    creating subdirectories ... ok
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting dynamic shared memory implementation ... posix
    creating configuration files ... ok
    running bootstrap script ... ok
    performing post-bootstrap initialization ... ok
    syncing data to disk ... ok
    
    Success. You can now start the database server using:
    
        pg_ctl -D /var/lib/pgsql/data/userdata -l logfile start
    
    
    WARNING: enabling "trust" authentication for local connections
    You can change this by editing pg_hba.conf or using the option -A, or
    --auth-local and --auth-host, the next time you run initdb.
    waiting for server to start....LOG:  redirecting log output to logging collector process
    HINT:  Future log output will appear in directory "pg_log".
     done
    server started
    => sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
    ALTER ROLE
    waiting for server to shut down.... done
    server stopped
    Starting server...
    LOG:  redirecting log output to logging collector process
    HINT:  Future log output will appear in directory "pg_log".
    

    Looks good so far. How can we work with the PostgreSQL instance now? One way is to start a remote shell:

    dwe@box:~$ oc rsh postgresql-1-8n85h
    sh-4.2$ ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    1000100+     1     0  0 13:29 ?        00:00:00 postgres
    1000100+    57     1  0 13:29 ?        00:00:00 postgres: logger process  
    1000100+    59     1  0 13:29 ?        00:00:00 postgres: checkpointer process  
    1000100+    60     1  0 13:29 ?        00:00:00 postgres: writer process  
    1000100+    61     1  0 13:29 ?        00:00:00 postgres: wal writer process  
    1000100+    62     1  0 13:29 ?        00:00:00 postgres: autovacuum launcher pr
    1000100+    63     1  0 13:29 ?        00:00:00 postgres: stats collector proces
    1000100+    85     0  0 13:46 ?        00:00:00 /bin/sh
    
    sh-4.2$ psql -c "\l"
                                     List of databases
       Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
       
    -----------+----------+----------+------------+------------+--------------------
    ---
     blubb     | blubb    | UTF8     | en_US.utf8 | en_US.utf8 | 
     postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
     template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres        
      +
               |          |          |            |            | postgres=CTc/postgr
    es
     template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres        
      +
               |          |          |            |            | postgres=CTc/postgr
    es
    (4 rows)
    

    This is usually not what you want to do. What you rather want is to make the instance available from outside the cluster. How can you do that? Either you do port forwarding:

    dwe@box:~$ oc port-forward postgresql-1-8n85h 5432
    Forwarding from 127.0.0.1:5432 -> 5432
    

    This will stay in the foreground. From another session you can use psql to connect:

    dwe@box:~$ psql -h localhost -U blubb blubb
    psql (9.5.12, server 9.6.5)
    WARNING: psql major version 9.5, server major version 9.6.
             Some psql features might not work.
    Type "help" for help.
    
    blubb=> 
    

    … or you can expose a service:

    dwe@box:~$ oc expose dc postgresql --type=LoadBalancer --name=mpostgresql-ingress
    service "mpostgresql-ingress" exposed
    dwe@box:~$ oc get svc
    NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                     PORT(S)          AGE
    mpostgresql-ingress   LoadBalancer   172.30.15.98   172.29.104.134,172.29.104.134   5432:31734/TCP   38s
    

    From now on you can connect using the MiniShift IP and the port listed above:

    dwe@box:~$ psql -h $(minishift ip) -p 31734 -U blubb
    Password for user blubb: 
    psql (9.5.12, server 9.6.5)
    WARNING: psql major version 9.5, server major version 9.6.
             Some psql features might not work.
    Type "help" for help.
    
    blubb=> \l
                                     List of databases
       Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
    -----------+----------+----------+------------+------------+-----------------------
     blubb     | blubb    | UTF8     | en_US.utf8 | en_US.utf8 | 
     postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
     template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
               |          |          |            |            | postgres=CTc/postgres
     template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
               |          |          |            |            | postgres=CTc/postgres
    (4 rows)
    
    blubb=> 
    

    Be aware that the storage is not persistent in that case and you’ll lose everything when the container is stopped. In the next post we’ll look into how we can deploy the containers which are provided by EnterpriseDB.
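For completeness, one way to make the storage persistent would be to replace the default volume of the deployment configuration with a persistent volume claim. A sketch, untested here (the volume name, claim size and mount path are assumptions, so list the current volumes first):

dwe@box:~$ oc set volume dc/postgresql
dwe@box:~$ oc set volume dc/postgresql --add --overwrite \
             --name=postgresql-volume-1 \
             --type=persistentVolumeClaim --claim-size=1G \
             --mount-path=/var/lib/pgsql/data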

     

    Cet article Deploying PostgreSQL in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

    Oracle Fusion Middleware Infrastructure 12.2.1.2.0 – Probable Incorrect Firewall Configuration

    Wed, 2018-04-25 15:14

It has been a long time since I wrote my last blog. Lots of customer activities left me no time to write one. With the knowledge acquired, it's time to write more blogs and share it, don't you think?

Let's begin with an easy one. During a customer activity, while upgrading a complex Fusion Middleware platform, we were asked to provide support during a move to a secure zone. I can't tell you what's really behind that secure zone, probably a more protected and more restricted network with more firewall restrictions; don't ask, I have no idea, but we unfortunately had an issue. Before the move I was quite confident, as we were informed that only the current IP address would change. No stress, as all the Fusion Middleware component configurations were using a network alias, limiting the impact of any network change that could happen. As a sample for this blog post, the WebLogic instance listen addresses were set to the network alias "dbi-cust-1983.dbi-services.com", pointing to the real hostname "vmtestdbiofm01.dbi-services.com" in the DNS. The NodeManager and the configured machine were set up the same way. Please see the screenshots below to help understand the configuration we had.

    WebLogic Admin Server Listen Address

    ofm-wls-firewall-AdminServer-ListenAddress

    NodeManager machine Listen Address

    ofm-wls-firewall-NM-ListenAddress

    NodeManager Listen Address

    weblogic@:/home/weblogic/ [dbiOFMHDV] cd $DOMAIN_HOME
    weblogic@:/data/weblogic/config/domains/dbiOFMHDV/ [dbiOFMHDV] find ./ -name nodemanager.properties
    ./nodemanager/nodemanager.properties
    weblogic@:/data/weblogic/config/domains/dbiOFMHDV/ [dbiOFMHDV] cat nodemanager/nodemanager.properties | grep ListenAddress
    ListenAddress=dbi-cust-1983.dbi-services.com

Let's also add that the network name resolution for the Middleware components was done through the DNS, and the local name resolution (/etc/hosts) contained neither the real hostname nor the DNS alias used.

    weblogic@:/data/weblogic/config/domains/dbiOFMHDV/ [dbiOFMHDV] cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

So the operation began: we shut down all the Middleware and Database components, and the Network and Unix teams worked on their tasks (IP changes, correlated DNS changes, forcing change propagation over the network, and so on). Once they completed, and once we confirmed that the network configuration changes had been properly applied (ifconfig, ping, nslookup), we started the platform again without any change on the Middleware side. The application team performed smoke testing and there was no functional impact.

After reviewing the logs for any error, I found an error from the RJVM module as follows:

    ####<Apr 16, 2018, 11:08:06,438 AM CEST> <Error> <RJVM> <vmtestdbiofm01.dbi-services.com> <AdminServer> <ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <d41ced14-a5e8-4ef9-bd90-19f63910849d-00000059> <1523869686438> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000572> <The server rejected a connection attempt JVMMessage from: '3739718027889105070S:dbi-cust-1983.dbi-services.com:[-1,-1,9001,9001,-1,-1,-1]:dbiOFMHDV:WLS_FORMS' to: '0B:10.32.0.12:[8443,-1,-1,-1,-1,-1,-1]' cmd: 'CMD_IDENTIFY_REQUEST', QOS: '101', responseId: '-1', invokableId: '-1', flags: 'JVMIDs Sent, TX Context Not Sent, 0x1', abbrev offset: '183' probably due to an incorrect firewall configuration or administrative command.>
    ####<Apr 16, 2018, 11:08:55,371 AM CEST> <Error> <RJVM> <vmtestdbiofm01.dbi-services.com> <AdminServer> <ExecuteThread: '3' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <d41ced14-a5e8-4ef9-bd90-19f63910849d-0000005d> <1523869735371> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000572> <The server rejected a connection attempt JVMMessage from: '-2190410908851642855S:dbi-cust-1983.dbi-services.com:[-1,-1,9002,9002,-1,-1,-1]:dbiOFMHDV:WLS_REPORTS' to: '0B:10.32.0.12:[8443,-1,-1,-1,-1,-1,-1]' cmd: 'CMD_IDENTIFY_REQUEST', QOS: '101', responseId: '-1', invokableId: '-1', flags: 'JVMIDs Sent, TX Context Not Sent, 0x1', abbrev offset: '183' probably due to an incorrect firewall configuration or administrative command.>
    ####<Apr 16, 2018, 11:09:06,509 AM CEST> <Error> <RJVM> <vmtestdbiofm01.dbi-services.com> <AdminServer> <ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <d41ced14-a5e8-4ef9-bd90-19f63910849d-0000005e> <1523869746509> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000572> <The server rejected a connection attempt JVMMessage from: '3739718027889105070S:dbi-cust-1983.dbi-services.com:[-1,-1,9001,9001,-1,-1,-1]:dbiOFMHDV:WLS_FORMS' to: '0B:10.32.0.12:[8443,-1,-1,-1,-1,-1,-1]' cmd: 'CMD_IDENTIFY_REQUEST', QOS: '101', responseId: '-1', invokableId: '-1', flags: 'JVMIDs Sent, TX Context Not Sent, 0x1', abbrev offset: '183' probably due to an incorrect firewall configuration or administrative command.>
    ####<Apr 16, 2018, 11:09:59,162 AM CEST> <Error> <RJVM> <vmtestdbiofm01.dbi-services.com> <AdminServer> <ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <d41ced14-a5e8-4ef9-bd90-19f63910849d-00000059> <1523869799162> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000572> <The server rejected a connection attempt JVMMessage from: '-2190410908851642855S:dbi-cust-1983.dbi-services.com:[-1,-1,9002,9002,-1,-1,-1]:dbiOFMHDV:WLS_REPORTS' to: '0B:10.32.0.12:[8443,-1,-1,-1,-1,-1,-1]' cmd: 'CMD_IDENTIFY_REQUEST', QOS: '101', responseId: '-1', invokableId: '-1', flags: 'JVMIDs Sent, TX Context Not Sent, 0x1', abbrev offset: '183' probably due to an incorrect firewall configuration or administrative command.>

Facing this issue, I asked colleagues for any recommendation or finding to solve it ASAP. They recommended having a look at the cluster configuration for any broadcast messaging mode; the default cluster messaging mode of my Middleware components was unicast.

    WebLogic Server – Default Forms & Reports Cluster configuration

    ofm-wls-firewall-Cluster-ListUnicast

    Let’s check the BEA error code to continue the investigation.

    The BEA-00572 error description

    ofm-wls-firewall-BEA-ErrorDescription

The message is quite clear: "The server rejected a connection attempt JVMMessage from …". As said before, all the Fusion Middleware components started properly and the functional tests showed that there was no impact at all. Still, it's always recommended to find a solution to remove such error messages. After some research on MOS, I found a document (Doc ID 860340.1) applicable to WebLogic Server since version 7.0. It clearly stated that this issue was neither reported nor tested on more recent WebLogic Server versions such as 10.3 or 12c. I was not really confident in this Oracle Support note, but I tested it just in case and I was surprised.

The solution provided by Oracle worked very well in my case: I appended the flag "-Dweblogic.rjvm.enableprotocolswitch=true" to the Java options in the setUserOverrides.sh script located under $DOMAIN_HOME/bin.

    export JAVA_OPTIONS="${JAVA_OPTIONS} -Ddomain.home=/u02/weblogic/config/domains/dbiOFMHDV -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.rjvm.enableprotocolswitch=true"

After this change, and after having restarted the whole domain, the error was gone. No more "probable incorrect firewall configuration" errors.

Hope this will help other people who have to move a Fusion Middleware 12c platform into a secure zone.

     

    Cet article Oracle Fusion Middleware Infrastructure 12.2.1.2.0 – Probable Incorrect Firewall Configuration est apparu en premier sur Blog dbi services.

    Setting up MiniShift

    Wed, 2018-04-25 04:18

Currently we have many requests to support customers in running PostgreSQL in Docker containers. Some of them use Red Hat OpenShift as the management platform for their Docker deployments. Setting up an OpenShift cluster requires quite some resources and is usually nothing you want to do on your workstation. To overcome that you can use MiniShift, which launches a one-node OpenShift cluster you can use for testing. Setting that up is quite easy.

    Obviously MiniShift needs to be downloaded. I’ll be running MiniShift on my workstation and will use VirtualBox as the Hypervisor for Minishift:

    dwe@box:~$ sudo mkdir /opt/minishift
    [sudo] password for dwe: 
    dwe@box:~$ sudo chown dwe:dwe /opt/minishift
    dwe@box:~$ cd /opt/minishift
    dwe@box:/opt/minishift$ wget https://github.com/minishift/minishift/releases/download/v1.16.1/minishift-1.16.1-linux-amd64.tgz
    dwe@box:/opt/minishift$ tar -axf minishift-1.16.1-linux-amd64.tgz
    dwe@box:/opt/minishift$ cd minishift-1.16.1-linux-amd64/
    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ export PATH=/opt/minishift/minishift-1.16.1-linux-amd64:$PATH
    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ minishift config set vm-driver virtualbox
    No Minishift instance exists. New 'vm-driver' setting will be applied on next 'minishift start'
    

    Now MiniShift can be started:

    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ minishift start
    

    The output should look similar to this:

    -- Starting profile 'minishift'
    -- Checking if https://github.com is reachable (using proxy: "No") ... OK
    -- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
    -- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
    -- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
    -- Checking if VirtualBox is installed ... OK
    -- Checking the ISO URL ... OK
    -- Downloading OpenShift binary 'oc' version 'v3.9.0'
     42.11 MiB / 42.11 MiB [========================================================================================================================] 100.00% 0s-- Downloading OpenShift v3.9.0 checksums ... OK
    -- Checking if provided oc flags are supported ... OK
    -- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
    -- Minishift VM will be configured with ...
       Memory:    2 GB
       vCPUs :    2
       Disk size: 20 GB
    
       Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
     40.00 MiB / 40.00 MiB [========================================================================================================================] 100.00% 0s
    -- Starting Minishift VM ........................... OK
    -- Checking for IP address ... OK
    -- Checking for nameservers ... OK
    -- Checking if external host is reachable from the Minishift VM ... 
       Pinging 8.8.8.8 ... OK
    -- Checking HTTP connectivity from the VM ... 
       Retrieving http://minishift.io/index.html ... OK
    -- Checking if persistent storage volume is mounted ... OK
    -- Checking available disk space ... 0% used OK
       Importing 'openshift/origin:v3.9.0' . CACHE MISS
       Importing 'openshift/origin-docker-registry:v3.9.0'  CACHE MISS
       Importing 'openshift/origin-haproxy-router:v3.9.0'  CACHE MISS
    -- OpenShift cluster will be configured with ...
       Version: v3.9.0
    Pulling image openshift/origin:v3.9.0
    Pulled 1/4 layers, 26% complete
    Pulled 2/4 layers, 72% complete
    Pulled 3/4 layers, 82% complete
    Pulled 4/4 layers, 100% complete
    Extracting
    Image pull complete
    Using Docker shared volumes for OpenShift volumes
    Using 192.168.99.100 as the server IP
    Starting OpenShift using openshift/origin:v3.9.0 ...
    OpenShift server started.
    
    The server is accessible via web console at:
    
    https://192.168.99.100:8443
    
    You are logged in as:
        User:     developer
        Password: 
    
    To login as administrator:
        oc login -u system:admin
    
    -- Exporting of OpenShift images is occuring in background process with pid 7708.
    

    You should also see a new VM in VirtualBox:
    Selection_040

    As we will need the oc binary to work with OpenShift we need to add that to the PATH:

    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ minishift oc-env
    export PATH="/home/dwe/.minishift/cache/oc/v3.9.0/linux:$PATH"
    # Run this command to configure your shell:
    # eval $(minishift oc-env)
    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ eval $(minishift oc-env)
    dwe@box:/opt/minishift/minishift-1.16.1-linux-amd64$ which oc
    

    And we are ready to use OpenShift:

    oc login -u system:admin
    Logged into "https://192.168.99.100:8443" as "system:admin" using existing credentials.
    
You have access to the following projects and can switch between them with 'oc project <projectname>':
    
        default
      * edb
        kube-public
        kube-system
        myproject
        openshift
        openshift-infra
        openshift-node
        openshift-web-console
    
    Using project "edb".
    

The web interface is up and running as well:
    Selection_041

    Logging in as system/admin:
    Selection_042

    … we could already start and deploy a PostgreSQL container but this is the topic for the next post. To stop MiniShift:

    dwe@box:~$ minishift stop
    Stopping local OpenShift cluster...
    Cluster stopped.
    

    Hope that helps.

     

    Cet article Setting up MiniShift est apparu en premier sur Blog dbi services.

    Can I do it with PostgreSQL? – 19 – Create user … identified by values

    Sat, 2018-04-21 06:39

Puh, the last post in this series is already half a year old. Time is moving too fast :( Today, while being at a customer again, this question came up: can I do something in PostgreSQL comparable to what I can do in Oracle, which is: create a user and provide the hashed password, so that the password is the same on the source and the target (which implies not knowing the password at all)? In Oracle you can find the hashed passwords in user$; where can I find that in PostgreSQL? Let's go.
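For reference, the Oracle side of the question looks like this (a sketch; the user name and the hash are placeholders):

SQL> select name, spare4 from sys.user$ where name = 'SCOTT';
SQL> -- re-create the user elsewhere with the same hash, without knowing the password
SQL> create user SCOTT2 identified by values 'S:8F2D65FB5547B518B1E...';

Now let's see what we can do on the PostgreSQL side.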

    When we look at the “create user” command there is no option which seems to do that:

    postgres=# \h create user
    Command:     CREATE USER
    Description: define a new database role
    Syntax:
    CREATE USER name [ [ WITH ] option [ ... ] ]
    
    where option can be:
    
          SUPERUSER | NOSUPERUSER
        | CREATEDB | NOCREATEDB
        | CREATEROLE | NOCREATEROLE
        | INHERIT | NOINHERIT
        | LOGIN | NOLOGIN
        | REPLICATION | NOREPLICATION
        | BYPASSRLS | NOBYPASSRLS
        | CONNECTION LIMIT connlimit
        | [ ENCRYPTED ] PASSWORD 'password'
        | VALID UNTIL 'timestamp'
        | IN ROLE role_name [, ...]
        | IN GROUP role_name [, ...]
        | ROLE role_name [, ...]
        | ADMIN role_name [, ...]
        | USER role_name [, ...]
        | SYSID uid
    

Maybe we can just pass the hashed password? Let's try by creating a new user:

    postgres=# create user u with login password 'u';
    CREATE ROLE
    

    The hashed passwords in PostgreSQL are stored in pg_shadow:

    postgres=# select passwd from pg_shadow where usename = 'u';
                   passwd                
    -------------------------------------
     md56277e2a7446059985dc9bcf0a4ac1a8f
    (1 row)
    

    Lets use that hash and create a new user:

    postgres=# create user w login encrypted password 'md56277e2a7446059985dc9bcf0a4ac1a8f';
    CREATE ROLE
    

    Can we login as w using “u” as a password?

    postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
    Password for user u: 
    psql: FATAL:  no pg_hba.conf entry for host "192.168.22.99", user "w", database "postgres", SSL off
    

    Ok, makes sense. After fixing that:

    postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
    Password for user w: 
    psql: FATAL:  password authentication failed for user "w"
    

    So obviously this is not the way to do it. Do we have the same hashes in pg_shadow?

    postgres=# select usename,passwd from pg_shadow where usename in ('w','u');
     usename |               passwd                
    ---------+-------------------------------------
     u       | md56277e2a7446059985dc9bcf0a4ac1a8f
     w       | md56277e2a7446059985dc9bcf0a4ac1a8f
    (2 rows)
    

Hm, exactly the same. Why can't we log in then? The answer is in the documentation: "Because MD5-encrypted passwords use the role name as cryptographic salt, …". We can verify that by re-creating the "w" user using the same password as that of user "u":

    postgres=# drop user w;
    DROP ROLE
    postgres=# create user w login password 'u';
    CREATE ROLE
    postgres=# select usename,passwd from pg_shadow where usename in ('w','u');
     usename |               passwd                
    ---------+-------------------------------------
     u       | md56277e2a7446059985dc9bcf0a4ac1a8f
     w       | md53eae63594a41739e87141e8333d15f73
    (2 rows)
    

The hashed values are not the same anymore. What of course works is to re-create the user with that hash:

    postgres=# drop role w;
    DROP ROLE
    postgres=# create user w login password 'md53eae63594a41739e87141e8333d15f73';
    CREATE ROLE
    

    Now we should be able to login with the password ‘u':

    postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
    Password for user w: 
    psql (10.0 dbi services build)
    Type "help" for help.
    
    postgres=> 
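As a side note, the stored value is simply the string 'md5' followed by md5(password || rolename), so with md5 password encryption you can compute the hash yourself (a quick sketch):

postgres=# -- should match the value stored in pg_shadow for role 'w' with password 'u'
postgres=# select 'md5' || md5('u' || 'w') as computed_hash;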
    

Fine. Another way of getting the password hashes is to use pg_dumpall with the "--globals-only" switch:

    postgres@pgbox:/home/postgres/ [PG10] pg_dumpall --globals-only > a.sql
    postgres@pgbox:/home/postgres/ [PG10] grep -w w a.sql 
    CREATE ROLE w;
    ALTER ROLE w WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'md53eae63594a41739e87141e8333d15f73';
    

    Hope that helps.

     

    Cet article Can I do it with PostgreSQL? – 19 – Create user … identified by values est apparu en premier sur Blog dbi services.

    2018.pgconf.de, recap

    Mon, 2018-04-16 11:43

Finally I am home from pgconf.de in Berlin at the beautiful Müggelsee. Besides meeting core PostgreSQL people such as Devrim and Bruce, Andreas and joining Jan again for great discussions and some beers, joking with Anja, being at the dbi services booth, discussing with people, kidding with Hans: was it worth the effort? Yes, it was, and here is why.

    Selection_022

    We had very interesting discussions at our booth, ranging from migrations to PostgreSQL, PostgreSQL training corporations and interest in our OpenDB appliance.
    DapJvl5XkAAxinQ

The opening session "Umdenken! 11 Gebote zum IT-Management" raised a question we always ask ourselves as well: when you do HA, how much complexity does the HA layer add? Maybe it is the HA layer itself that causes the outage, one that would not have happened without it? Reducing complexity is key to robust and reliable IT operations.

    Listening to Bruce Momjian is always a joy: This time it was about PostgreSQL sharding. Much is already in place, some will come with PostgreSQL 11 and other stuff is being worked on for PostgreSQL 12 next year. Just check the slides which should be available for download from the website soon.

    Most important: The increasing interest in PostgreSQL. We can see that at our customers, at conferences and in the interest in our blog posts about that topic. Sadly, when you have a booth, you are not able to listen to all the talks you would like to. This is the downside :(

So, mark your calendar: next year's date and location are already fixed: May 10, 2019, in Leipzig. I am sure we will have some updates to:

    large

     

    Cet article 2018.pgconf.de, recap est apparu en premier sur Blog dbi services.

    Covering indexes in Oracle, and branch size

    Fri, 2018-04-13 16:01

    A covering index is an index that contains all the columns required by your query, so that you don’t have to do a TABLE ACCESS BY INDEX ROWID, which is the major cost of an index range scan. You don’t need any special feature to do that in Oracle. Just add the required columns at the end of the index. In the execution plan you will see the columns used as index keys for the range scan displayed in ‘access’ predicates, and the further filtering done on the remaining columns with ‘filter’ predicates. The ‘projection’ shows the columns that are returned in the rowset result.
However, you may have seen that SQL Server has a special 'INCLUDE' keyword to separate those non-key columns added only for filtering or projection, but not for access. What does it bring that Oracle doesn't have?
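As a quick illustration of that first paragraph, here is the kind of check you could run (a sketch with a hypothetical table T and columns STATUS and AMOUNT, not part of the demo below):

SQL> create index T_COVERING on T(STATUS,AMOUNT);
SQL> select /*+ index(T T_COVERING) */ sum(AMOUNT) from T where STATUS='OPEN';
SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +predicate +projection'));

The plan should show an INDEX RANGE SCAN with an access predicate on STATUS, AMOUNT in the projection, and no TABLE ACCESS BY INDEX ROWID step.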

An index entry is composed of a key and data associated to the key. The index is sorted on the key. The data for each key has no special order, like in a heap table. The idea of the SQL Server INCLUDE keyword is to separate the columns belonging to the key from the columns belonging to the data. It is not mandatory. You can add all columns to the key, but depending on the implementation, the benefit can be:

    • some data types may not be allowed in the key but allowed as data
    • sorting the data when not required may be a performance overhead
    • there can be limitations on the size of the key
    • having a larger key may require more space in the branches
    • adding sorted columns may change the clustering factor

In Oracle, there are very few data types that cannot be indexed (like LONG). The limitation on the size of the key may come into play for large 12c extended data types. You can substring them, but that defeats the goal of covering indexes. I see two reasons why 'INCLUDE' indexes can be useful. The first reason is about the clustering factor. The second is about sorting the whole index entry and referencing it from the branches. I'll detail those reasons later, but first here is an example.


    SQL> create table DEMO (UNIQU ,RANGE ,RANDOM_TEXT ,CONSTANT_TEXT ) as select rownum UNIQU , mod(rownum,4) RANGE , dbms_random.string('u',80) RANDOM_TEXT , lpad('x',80,'x') CONSTANT_TEXT from xmltable('1 to 100000');
    Table DEMO created.
    SQL> commit;
    Commit complete.

This table has an all-distinct-values column (UNIQU) and a few-distinct-values column (RANGE), and I'll use them for the keys. And I have two columns I'll add as additional columns for covering queries: one with lots of distinct values (RANDOM_TEXT) and the other with few distinct values (CONSTANT_TEXT).
    The first rows look like this:

    SQL> select * from DEMO order by ROWID fetch first 5 rows only;
    UNIQU RANGE RANDOM_TEXT CONSTANT_TEXT
    ----- ----- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    1 1 XCFNWCRCFBEPJPSHREUVVVTBUCCXLZMRPJPNQDTHWYRZRUORBPDOBCIRFHLICETULTCZTMPOCMUNQITV xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    2 2 XUSPNDOMPQKOIRCVDDTVYAGKRDGIXOSVUNMRAQLSRQGYKOFEXRQMCPXPYZYKRHHKDXGIINOUUAUJOLOO xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    3 3 ZBCVFTDSRUFIUTSIWOOOBWIRMEFUXNWLADAPUPFNPVYDLPQTOUZVXJKMGIPCGZESXFXOIYVMKNSMMZKB xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    4 0 VOIRCXFVSRVZQRZDRLQRHZWNGQJAAWJXWXJKRCJVPWYDJSZLJIOEWAMCFSRCUPSPPEKITJYHHOUQSVYQ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    5 1 UUSAMEVRWNLPGCUVMJWVVPDAENRYKIWWMIHTUJSZRQASMTYOVQNCGZGZIJZWNSOJVSIBMMUEAXOHJCOA xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

I'm adding indexes for access on RANGE as the index key, either with only the key, or covering the random or constant text:

    SQL> create index DEMO_RANGE on DEMO(RANGE) pctfree 50;
    Index DEMO_RANGE created.
    SQL> create index DEMO_RANGE_COVERING_RANDOM on DEMO(RANGE,RANDOM_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_RANDOM created.
    SQL> create index DEMO_RANGE_COVERING_CONSTANT on DEMO(RANGE,CONSTANT_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_CONSTANT created.

    An additional one adding the unique column in-between:

    SQL> create index DEMO_RANGE_COVERING_WITH_PK on DEMO(RANGE,UNIQU,CONSTANT_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_WITH_PK created.

    And now for access with the unique column as a key:

    SQL> create index DEMO_UNIQU_COVERING_RANDOM on DEMO(UNIQU,RANDOM_TEXT) pctfree 50;
    Index DEMO_UNIQU_COVERING_RANDOM created.
    SQL> create index DEMO_UNIQU_COVERING_CONSTANT on DEMO(UNIQU,CONSTANT_TEXT) pctfree 50;
    Index DEMO_UNIQU_COVERING_CONSTANT created.

    Here are some interesting stats:

    SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
    PL/SQL procedure successfully completed.
    SQL> select index_name,blevel,leaf_blocks,num_rows,clustering_factor from user_indexes where table_name='DEMO' order by 2,3;
    INDEX_NAME BLEVEL LEAF_BLOCKS NUM_ROWS CLUSTERING_FACTOR
    -------------------------------- ------ ----------- -------- -----------------
    DEMO_RANGE 1 353 100000 9757
    DEMO_RANGE_COVERING_RANDOM 2 2440 100000 99967
    DEMO_RANGE_COVERING_CONSTANT 2 2440 100000 9757
    DEMO_UNIQU_COVERING_RANDOM 2 2500 100000 2440
    DEMO_UNIQU_COVERING_CONSTANT 2 2500 100000 2440
    DEMO_RANGE_COVERING_WITH_PK 2 2565 100000 9757
    6 rows selected.

    Leaf size

    About the size, the covering indexes have approximately the same number of leaf blocks because the included column (RANDOM_TEXT or CONSTANT_TEXT) has the same size (80 bytes). Of course, the non-covering index is smaller (but will need table access to query additional column). The key on UNIQU is slightly larger than the one on RANGE because the numbers go higher. The index with 3 columns is the largest.

    Clustering factor

    About the clustering factor, there’s one outlier here which deserves an explanation. But before that, you must understand that this higher clustering factor is not important for a query using the covering index, such as a SELECT RANDOM_TEXT WHERE RANGE=0, because in that case you don’t read the table. However for some queries you may cover only the filter predicates and go to the table for projection.
    But the big problem is that when you add a column to an index to address a specific query, you don’t want to risk a side effect on another query, and changing the clustering factor is a risk here. One solution is to keep the old non-covering index (DEMO_RANGE) but then the side effect is on DML overhead.

To understand the change in clustering factor we must go deeper into the Oracle index key and data implementation. The 'data' part exists in Oracle indexes even when not specified explicitly with an INCLUDE clause. The ROWID is the data part. An index entry associates a key (the indexed columns) with a pointer to the table row (the ROWID). At least, this is the case for UNIQUE indexes, where each key is unique.

    Non-unique indexes are a special case. Actually, Oracle implements only unique key indexes. When the indexed columns are not unique, the ROWID is stored on the key part of the index entry, and there is no data part. You should read Richard Foote, Differences between Unique and Non-Unique Indexes for detailed explanation.

    Branch size

    The previous statistics displayed only the number of branch level, which was the same, but we can have more detail about the branch size with an ANALYZE INDEX.

    The non-covering index has only one branch block, the root, which references all the 353 leaf blocks containing the 100000 entries, with an average of 5479/352=15 bytes per branch entry:

    SQL> analyze index DEMO_RANGE validate structure offline;
    Index DEMO_RANGE analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    2 384 353 1375000 352 1 5479 25000 2830616 1380479 49 25000 12502.5 19 1375000 353

The covering index with lots of distinct values for the non-key column has more branch blocks, with an average of 34623/2439=14 bytes per branch entry:

    SQL> analyze index DEMO_RANGE_COVERING_RANDOM validate structure offline;
    Index DEMO_RANGE_COVERING_RANDOM analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2440 9475000 2439 6 34623 1 19558408 9509623 49 1 4 2 9475000 2440

    Here the number of branches is higher only because there are more leaves (as we have more columns), but not because of the size in the branch entries, which are even smaller. They are smaller because the branch does not have to store the full value of all columns in order to identify one leaf block. Then, only the first bytes are needed and not the full 80 bytes of them.

The covering index with few distinct values for the non-key column has a lot more branch blocks, with an average of 234755/2439=96 bytes per branch entry:

    SQL> analyze index DEMO_RANGE_COVERING_CONSTANT validate structure offline;
    Index DEMO_RANGE_COVERING_CONSTANT analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
     
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2440 9475000 2439 31 234755 25000 19759108 9709755 50 25000 12503.5 86 9475000 2440

So, here the size of the branch blocks is higher because we have multiple leaf blocks for the same value of CONSTANT_TEXT: the second column is not sufficient to identify only one leaf block. The full 80 bytes must be stored, plus the rowid in addition to it.

    When the indexed column has only unique values, there is no need to store more in the branches (not the additional columns, not the rowid) and only 12 bytes are needed here on average:

    SQL> analyze index DEMO_UNIQU_COVERING_RANDOM validate structure offline;
    Index DEMO_UNIQU_COVERING_RANDOM analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
     
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

As the second column is not needed, the size of the branches is the same whether we use RANDOM_TEXT or CONSTANT_TEXT:

    SQL> analyze index DEMO_UNIQU_COVERING_CONSTANT validate structure offline;
    Index DEMO_UNIQU_COVERING_CONSTANT analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
     
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

Now, the last one is my workaround for the larger branch size when adding a column that does not have a lot of distinct values: just add, before it, a column with more distinct values. Here I use the UNIQU one, but you probably have one that can be useful for your queries.

    SQL> analyze index DEMO_RANGE_COVERING_WITH_PK validate structure offline;
    Index DEMO_RANGE_COVERING_WITH_PK analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
     
    HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2688 2565 9963892 2564 6 37456 1 20557908 10001348 49 1 4 2 9963892 2565

Now you get the idea. When creating an index, or adding columns for a covering index, if you have the choice of column order, then try to have the first bytes selective enough so that the branches need only a small substring to identify each leaf block (or lower-level branch).

    Block dumps

    If you want to see the details about the branch length, here are some info from block dumps. I got them with the following:

    SQL> column value new_value tracefile
    SQL> select value from v$diag_info where name='Default Trace File';
    VALUE
    /u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6799.trc
    SQL> exec for i in (select header_file, header_block from dba_segments where owner='DEMO' and segment_name='DEMO_RANGE') loop execute immediate 'alter system dump datafile '||i.header_file||' block '||(i.header_block+1); end loop;
    PL/SQL procedure successfully completed.
    SQL> host tail -20 &tracefile

    Here is the last branch entry for the root block of DEMO_RANGE where the first column is not very selective and then the rowid is required in the branch:

    row#351[3279] dba: 113261807=0x6c03cef
    col 0; len 2; (2): c1 04
    col 1; len 6; (6): 07 00 05 7b 00 25

    Here is the last branch entry for the root block of DEMO_RANGE_COVERING_RANDOM where instead of the rowid the 3 first bytes of the RANDOM_TEXT column are sufficient:

    row#3[8006] dba: 113263037=0x6c041bd
    col 0; len 2; (2): c1 04
    col 1; len 3; (3): 53 51 52
    col 2; TERM

    Here is the last branch entry for the root block of DEMO_RANGE_COVERING_CONSTANT where the full 80 bytes of CONSTANT_TEXT are not even sufficient, and the ROWID is needed as a 3rd column:

    row#28[5316] dba: 117444566=0x7000fd6
    col 0; len 2; (2): c1 04
    col 1; len 80; (80):
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78
    col 2; len 6; (6): 07 00 05 43 00 25

    Here is the last branch entry for the root block of DEMO_UNIQU_COVERING_CONSTANT where the first column is sufficient:

    row#2[8026] dba: 117447160=0x70019f8
    col 0; len 4; (4): c3 09 0d 04
    col 1; TERM

    So what?

We probably don't need a feature like SQL Server INCLUDE indexes in most cases. However, this may require thinking about the order of columns, mainly:

    • ensure that selective columns appear as early as possible (without compromising the index access efficiency of course) in order to lower the bytes required to address branches and leaves
    • when adding columns, try to add first a column that will keep the clustering factor you had with the rowid, such as a date of insert
     

    Cet article Covering indexes in Oracle, and branch size est apparu en premier sur Blog dbi services.

    MySQL – Time mismatch between system and log files

    Fri, 2018-04-13 03:26

I was working on MySQL upgrades to version 5.7 at a customer when I found out that, for my instance, the time in the error log file didn't correspond to the local system time:

    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] mysql -uroot -p
    Enter password:
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
    
    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] ls -ltr mysqld1.err
    -rw-r-----. 1 mysql mysql 13323 Apr 12 10:54 mysqld1.err
    
    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] tail -1 mysqld1.err
    2018-04-12T08:54:55.300633Z 67 [Note] Access denied for user 'root'@'localhost' (using password: YES)
    

    I performed the operation at 10h54 but in the error log file I saw 8h54.
    I checked if there was a problem between system and instance time, but that was not the case:

    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] date
    Thu Apr 12 10:56:02 CEST 2018
    
    mysqld1-(root@localhost) [(none)]> select now();
    +---------------------+
    | now()               |
    +---------------------+
    | 2018-04-12 10:56:35 |
    +---------------------+
    1 row in set (0.00 sec)
    

Actually, starting from version 5.7.2, there is a variable called log_timestamps which is set to UTC by default:

    mysqld1-(root@localhost) [(none)]> select @@global.log_timestamps;
    +-------------------------+
    | @@global.log_timestamps |
    +-------------------------+
    | UTC                     |
    +-------------------------+
    1 row in set (0.00 sec)
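The two-hour gap observed above is simply UTC versus CEST. For older entries written with the UTC default, you can convert a timestamp manually, for example with GNU date (just a side note, not part of the original troubleshooting):

# Convert a UTC timestamp from the error log to local time (GNU coreutils)
date -d '2018-04-12T08:54:55Z'
# Thu Apr 12 10:54:55 CEST 2018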
    

    I modified this variable from UTC to SYSTEM:

    mysqld1-(root@localhost) [(none)]> set global log_timestamps='SYSTEM';
    Query OK, 0 rows affected (0.00 sec)
    
    mysqld1-(root@localhost) [(none)]> select @@global.log_timestamps;
    +-------------------------+
    | @@global.log_timestamps |
    +-------------------------+
    | SYSTEM                  |
    +-------------------------+
    1 row in set (0.00 sec)
    

and after that everything was OK: the local system time was now aligned with the timestamp time zone in the MySQL log files:

    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] mysql -uroot -p
    Enter password:
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
    
    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] ls -ltr mysqld1.err
    -rw-r-----. 1 mysql mysql 13323 Apr 12 10:58 mysqld1.err
    
    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] tail -1 mysqld1.err
    2018-04-12T10:58:08.467615+01:00 69 [Note] Access denied for user 'root'@'localhost' (using password: YES)
    

Remember to set this variable also in the configuration file, to make the change persistent across restarts:

    mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] vi /etc/my.cnf
    
    # 12042018 - dbi services - Timestamp time zone of log files from UTC (default) to SYSTEM
    log_timestamps = SYSTEM
    
     

    Cet article MySQL – Time mismatch between system and log files est apparu en premier sur Blog dbi services.

    Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 1

    Tue, 2018-04-10 05:13

Deploying a Cloudera distribution of Hadoop automatically is very interesting in terms of time-saving. Infrastructure as Code tools such as Ansible, Puppet, Chef, and Terraform now allow us to provision, manage, and deploy the configuration of large clusters.

In this blog post series, we will see how to deploy and install a CDH cluster with Terraform and Ansible in the Azure cloud.

The first part consists of provisioning the environment with Terraform in Azure. Terraform features providers to interact with cloud platforms such as Azure and AWS. You can find here the Terraform documentation for the Azure provider.

    Desired architecture

     

    Azure_architecture

Above is a representation of the desired architecture for our CDH environment: 5 nodes for a testing infrastructure, including a Cloudera Manager node, a second master node for the Hadoop Secondary NameNode, and 3 workers.

     Prerequisites

    Terraform must be installed on your system. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/terraform-install-configure#install-terraform

    Generate a Client ID and a Client Secret from Azure CLI to authenticate in Azure with Terraform.

    1. Sign in to administer your Azure subscription:

    [root@centos Terraform]# az login

    2. Get the subscription ID and tenant ID:

    [root@centos Terraform]# az account show --query "{subscriptionId:id, tenantId:tenantId}"

    3. Create separate credentials for TF:

    [root@centos Terraform]# az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTION_ID}"

    4. Save the following information:

    • subscription_id
    • client_id
    • client_secret
    • tenant_id

    Now we are ready to start using Terraform with AzureRM API.
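As a side note, if you prefer not to keep the credentials in var_values.tfvars, the AzureRM provider can also read them from environment variables; a minimal sketch with placeholder values:

# Service principal credentials picked up by the Terraform AzureRM provider
export ARM_SUBSCRIPTION_ID="xxxxxxx"
export ARM_CLIENT_ID="xxxxxxx"
export ARM_CLIENT_SECRET="xxxxxxx"
export ARM_TENANT_ID="xxxxxxx"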

    Build your cluster

    With Terraform, we will provision the following resources in Azure:

• A resource group – “Cloudera-Cluster”
• 1 virtual network – “cdh-vnet”
• 1 network security group – “cdh-nsg”
• 1 storage account – “dbistorage1”
• 5 network interfaces – “instance_name_network_interface”
• 5 public/private IP pairs – “cdh-pip” to “cdh-pip4”

First, we will create a variable file, which declares all the variables needed without specific values. The values are specified in the var_values.tfvars file.

     variables.tf
    /* Configure Azure Provider and declare all the Variables that will be used in Terraform configurations */
    provider "azurerm" {
      subscription_id 	= "${var.subscription_id}"
      client_id 		= "${var.client_id}"
      client_secret 	= "${var.client_secret}"
      tenant_id 		= "${var.tenant_id}"
    }
    
    variable "subscription_id" {
      description = "Enter Subscription ID for provisioning resources in Azure"
    }
    
    variable "client_id" {
      description = "Enter Client ID for Application created in Azure AD"
    }
    
    variable "client_secret" {
      description = "Enter Client secret for Application in Azure AD"
    }
    
    variable "tenant_id" {
      description = "Enter Tenant ID / Directory ID of your Azure AD. Run Get-AzureSubscription to know your Tenant ID"
    }
    
    variable "location" {
      description = "The default Azure region for the resource provisioning"
    }
    
    variable "resource_group_name" {
      description = "Resource group name that will contain various resources"
    }
    
    variable "vnet_cidr" {
      description = "CIDR block for Virtual Network"
    }
    
    variable "subnet1_cidr" {
      description = "CIDR block for Subnet within a Virtual Network"
    }
    
    variable "subnet2_cidr" {
      description = "CIDR block for Subnet within a Virtual Network"
    }
    
    variable "vm_username_manager" {
      description = "Enter admin username to SSH into Linux VM"
    }
    
    variable "vm_username_master" {
      description = "Enter admin username to SSH into Linux VM"
    }
    
    variable "vm_username_worker1" {
      description = "Enter admin username to SSH into Linux VM"
    }
    
    variable "vm_username_worker2" {
      description = "Enter admin username to SSH into Linux VM"
    }
    
    variable "vm_username_worker3" {
      description = "Enter admin username to SSH into Linux VM"
    }
    
    variable "vm_password" {
      description = "Enter admin password to SSH into VM"
    }

     

    var_values.tfvars
    subscription_id 	= "xxxxxxx"
    client_id 		= "xxxxxxx"
    client_secret 		= "xxxxxxx"
    tenant_id 		= "xxxxxxx"
    location 		= "YOUR-LOCATION"
    resource_group_name     = "Cloudera-Cluster"
    vnet_cidr 		= "192.168.0.0/16"
    subnet1_cidr 		= "192.168.1.0/24"
    subnet2_cidr		= "192.168.2.0/24"
    vm_username_manager 		= "dbi"
    vm_username_master 		= "dbi"
    vm_username_worker1 		= "dbi"
    vm_username_worker2 		= "dbi"
    vm_username_worker3 		= "dbi"
    vm_password 		= "YOUR-PASSWORD"

     

    Next, we will configure the virtual network with 1 subnet for all hosts.

    network.tf
    resource "azurerm_resource_group" "terraform_rg" {
      name 		= "${var.resource_group_name}"
      location 	= "${var.location}"
    }
    
    resource "azurerm_virtual_network" "vnet" {
      name 			= "cdh-vnet"
      address_space 	= ["${var.vnet_cidr}"]
      location 		= "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
    
      tags {
    	group = "Coudera-Cluster"
      }
    }
    
    resource "azurerm_subnet" "subnet_1" {
      name 			= "Subnet-1"
      address_prefix 	= "${var.subnet1_cidr}"
      virtual_network_name 	= "${azurerm_virtual_network.vnet.name}"
      resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
    }
    
    

    Next, we can create the storage account related to the resource group with a container.

    storage.tf
    resource "azurerm_storage_account" "storage" {
      name 			= "dbistorage1"
      resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
      location 		= "${var.location}"
      account_tier    = "Standard"
      account_replication_type = "LRS"
    
      tags {
    	group = "Coudera-Cluster"
      }
    }
    
    resource "azurerm_storage_container" "container" {
      name 			= "vhds"
      resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
      storage_account_name 	= "${azurerm_storage_account.storage.name}"
      container_access_type = "private"
    }

The next step will be to create a security group for all VMs. We will implement 2 rules, allowing SSH and HTTP connections from everywhere, which is basically not very secure, but don’t forget that we are in a short-lived testing infrastructure.

    security_group.tf
    resource "azurerm_network_security_group" "nsg_cluster" {
      name 			= "cdh-nsg"
      location 		= "${var.location}"
      resource_group_name 	= "${azurerm_resource_group.terraform_rg.name}"
    
      security_rule {
    	name 			= "AllowSSH"
    	priority 		= 100
    	direction 		= "Inbound"
    	access 		        = "Allow"
    	protocol 		= "Tcp"
    	source_port_range       = "*"
        destination_port_range     	= "22"
        source_address_prefix      	= "*"
        destination_address_prefix 	= "*"
      }
    
      security_rule {
    	name 			= "AllowHTTP"
    	priority		= 200
    	direction		= "Inbound"
    	access 			= "Allow"
    	protocol 		= "Tcp"
    	source_port_range       = "*"
        destination_port_range     	= "80"
        source_address_prefix      	= "Internet"
        destination_address_prefix 	= "*"
      }
    
      tags {
    	group = "Coudera-Cluster"
      }
    }

Next, we will create the public/private IPs for our instances.

    ip.tf
    resource "azurerm_public_ip" "manager_pip" {
    name = "cdh-pip"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    public_ip_address_allocation = "static"
    
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    resource "azurerm_network_interface" "public_nic" {
    name = "manager_network_interface"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"
    
    ip_configuration {
    name = "manager_ip"
    subnet_id = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id = "${azurerm_public_ip.manager_pip.id}"
    }
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    
    resource "azurerm_public_ip" "master_pip" {
    name = "cdh-pip1"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    public_ip_address_allocation = "static"
    
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    resource "azurerm_network_interface" "public_nic1" {
    name = "master_network_interface"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"
    
    ip_configuration {
    name = "master_ip"
subnet_id = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id = "${azurerm_public_ip.master_pip.id}"
    }
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    
    resource "azurerm_public_ip" "worker1_pip" {
    name = "cdh-pip2"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    public_ip_address_allocation = "static"
    
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    resource "azurerm_network_interface" "public_nic2" {
    name = "worker1_network_interface"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"
    
    ip_configuration {
    name = "worker1_ip"
subnet_id = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id = "${azurerm_public_ip.worker1_pip.id}"
    }
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    
    resource "azurerm_public_ip" "worker2_pip" {
    name = "cdh-pip3"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    public_ip_address_allocation = "static"
    
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    resource "azurerm_network_interface" "public_nic3" {
    name = "worker2_network_interface"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"
    
    ip_configuration {
    name = "worker2_ip"
subnet_id = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id = "${azurerm_public_ip.worker2_pip.id}"
    }
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    
    resource "azurerm_public_ip" "worker3_pip" {
    name = "cdh-pip4"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    public_ip_address_allocation = "static"
    
    tags {
    group = "Coudera-Cluster"
    }
    }
    
    resource "azurerm_network_interface" "public_nic4" {
    name = "worker3_network_interface"
    location = "${var.location}"
    resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
    network_security_group_id = "${azurerm_network_security_group.nsg_cluster.id}"
    
    ip_configuration {
    name = "worker3_ip"
subnet_id = "${azurerm_subnet.subnet_1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id = "${azurerm_public_ip.worker3_pip.id}"
    }
    tags {
    group = "Coudera-Cluster"
    }
    }

    Once the network, storage and the security group are configured we can now provision our VM instances with the following configuration:

    • 5 instances
    • Master and Manager VM size: Standard_DS3_v2
    • Worker VM size: Standard_DS2_v2
    • Centos 7.3
    • 1 OS disk + 1 data disk of 100GB
    vm.tf
    resource "azurerm_virtual_machine" "la_manager" {
      name                  = "cdh_manager"
      location              = "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
      network_interface_ids = ["${azurerm_network_interface.public_nic.id}"]
      vm_size               = "Standard_DS3_v2"
    
    #This will delete the OS disk automatically when deleting the VM
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "OpenLogic"
        offer     = "CentOS"
        sku       = "7.3"
        version   = "latest"
    
      }
    
      storage_os_disk {
        name          = "osdisk-1"
        vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-1.vhd"
        caching       = "ReadWrite"
        create_option = "FromImage"
      }
    
      # Optional data disks
        storage_data_disk {
          name          = "data"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data1.vhd"
          disk_size_gb  = "100"
          create_option = "Empty"
          lun           = 0
        }
    
      os_profile {
        computer_name  = "manager"
        admin_username = "${var.vm_username_manager}"
        admin_password = "${var.vm_password}"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    
      tags {
        group = "Cloudera-Cluster"
      }
    }
    
    
    # Master (High Availability)
    resource "azurerm_virtual_machine" "la_master" {
      name                  = "cdh_master"
      location              = "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
      network_interface_ids = ["${azurerm_network_interface.public_nic1.id}"]
      vm_size               = "Standard_DS3_v2"
    
    #This will delete the OS disk automatically when deleting the VM
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "OpenLogic"
        offer     = "CentOS"
        sku       = "7.3"
        version   = "latest"
    
      }
    
      storage_os_disk {
        name          = "osdisk-2"
        vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-2.vhd"
        caching       = "ReadWrite"
        create_option = "FromImage"
      }
    
      # Optional data disks
        storage_data_disk {
          name          = "data"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data2.vhd"
          disk_size_gb  = "100"
          create_option = "Empty"
          lun           = 0
        }
    
      os_profile {
        computer_name  = "master"
        admin_username = "${var.vm_username_master}"
        admin_password = "${var.vm_password}"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    
      tags {
        group = "Cloudera-Cluster"
      }
    }
    
    
    # Worker 1
    resource "azurerm_virtual_machine" "la_worker1" {
      name                  = "cdh_worker1"
      location              = "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
      network_interface_ids = ["${azurerm_network_interface.public_nic2.id}"]
      vm_size               = "Standard_DS2_v2"
    
    #This will delete the OS disk automatically when deleting the VM
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "OpenLogic"
        offer     = "CentOS"
        sku       = "7.3"
        version   = "latest"
      }
    
      storage_os_disk {
        name          = "osdisk-3"
        vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-3.vhd"
        caching       = "ReadWrite"
        create_option = "FromImage"
      }
    
      # Optional data disks
        storage_data_disk {
          name          = "data"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data3.vhd"
          disk_size_gb  = "100"
          create_option = "Empty"
          lun           = 0
        }
    
      os_profile {
        computer_name  = "worker1"
        admin_username = "${var.vm_username_worker1}"
        admin_password = "${var.vm_password}"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    
      tags {
        group = "Cloudera-Cluster"
      }
    }
    
    # Worker 2
    resource "azurerm_virtual_machine" "la_worker2" {
      name                  = "cdh_worker2"
      location              = "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
      network_interface_ids = ["${azurerm_network_interface.public_nic3.id}"]
      vm_size               = "Standard_DS2_v2"
    
    #This will delete the OS disk automatically when deleting the VM
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "OpenLogic"
        offer     = "CentOS"
        sku       = "7.3"
        version   = "latest"
      }
    
      storage_os_disk {
        name          = "osdisk-4"
        vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-4.vhd"
        caching       = "ReadWrite"
        create_option = "FromImage"
      }
    
      # Optional data disks
        storage_data_disk {
          name          = "data"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data4.vhd"
          disk_size_gb  = "100"
          create_option = "Empty"
          lun           = 0
        }
    
      os_profile {
        computer_name  = "worker2"
        admin_username = "${var.vm_username_worker2}"
        admin_password = "${var.vm_password}"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    
      tags {
        group = "Cloudera-Cluster"
      }
    }
    
    # Worker 3
    resource "azurerm_virtual_machine" "la_worker3" {
      name                  = "cdh_worker3"
      location              = "${var.location}"
      resource_group_name   = "${azurerm_resource_group.terraform_rg.name}"
      network_interface_ids = ["${azurerm_network_interface.public_nic4.id}"]
      vm_size               = "Standard_DS2_v2"
    
    #This will delete the OS disk automatically when deleting the VM
      delete_os_disk_on_termination = true
    
      storage_image_reference {
        publisher = "OpenLogic"
        offer     = "CentOS"
        sku       = "7.3"
        version   = "latest"
      }
    
      storage_os_disk {
        name          = "osdisk-5"
        vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/osdisk-5.vhd"
        caching       = "ReadWrite"
        create_option = "FromImage"
      }
    
      # Optional data disks
        storage_data_disk {
          name          = "data"
          vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.container.name}/data5.vhd"
          disk_size_gb  = "100"
          create_option = "Empty"
          lun           = 0
        }
    
      os_profile {
        computer_name  = "worker3"
        admin_username = "${var.vm_username_worker3}"
        admin_password = "${var.vm_password}"
      }
    
      os_profile_linux_config {
        disable_password_authentication = false
      }
    
      tags {
        group = "Cloudera-Cluster"
      }
    }
     Execution

We can now execute the following commands from a shell environment. Note that all files must be placed in the same directory.
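Depending on your Terraform version, you will most likely need to initialize the working directory first, so that the AzureRM provider plugin is downloaded:

# Initialize the working directory and download the azurerm provider plugin
[root@centos Terraform]# terraform init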

    [root@centos Terraform]# terraform plan -var-file=var_values.tfvars
    
    [root@centos Terraform]# terraform apply -var-file=var_values.tfvars

     

After a few minutes, check your resources in your Azure portal.
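If you prefer the command line, the same check can be done with the Azure CLI (a quick sketch; the exact output fields may vary with your CLI version):

# List the resources created in the resource group
[root@centos Terraform]# az resource list --resource-group Cloudera-Cluster --output table

# Show the VMs with their public IP addresses and power state
[root@centos Terraform]# az vm list --resource-group Cloudera-Cluster --show-details --output table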


    To destroy the infrastructure, run the following command:

    [root@centos Terraform]# terraform destroy -var-file=var_values.tfvars

     

    Once our infrastructure has been fully provisioned by Terraform we can start the installation of the Cloudera distribution of Hadoop with Ansible.

     

     

    Cet article Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 1 est apparu en premier sur Blog dbi services.

    Dbvisit Standby Archive Log Daemon

    Mon, 2018-04-09 07:27

Dbvisit Standby version 8 comes with a nice feature, a daemon, which sends and applies the archive logs automatically in the background. By bypassing the system scheduler, the daemon makes it easier to fine-tune the customer RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Monitoring so that logs are applied to the standby only when needed also optimizes the use of resources. Originally available for Linux based environments, the feature has been made available for Windows based platforms starting with 8.0.06. This blog will cover its implementation and show its benefits.

The demo database environments have been easily managed thanks to the dbi services DMK tool.

    Environment
    DBVP : Primary Server
    DBVS : Standby Server
    DBVPDB_SITE1 : Primary database
    DBVPDB_SITE2 : Physical Standby database

     

    Daemon start/stop/status
    oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
    Starting Dbvisit Daemon...
    Started successfully.
    
    oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D status
    Dbvisit Daemon process is running with pid 11546.
    
    oracle@DBVP:/home/oracle/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D stop
    Stopping Dbvisit Daemon...
    Successfully stopped.

     

    Automatic startup
In order to start the daemon automatically at boot and to easily manage its status, we will create a dbvlogdaemon service.
    [root@DBVP ~]# vi /etc/systemd/system/dbvlogdaemon.service
    
    [root@DBVP ~]# cat /etc/systemd/system/dbvlogdaemon.service
    [Unit]
    Description=DB Visit log daemon Service
    After=oracle.service
    
    [Service]
    Type=simple
    RemainAfterExit=yes
    User=oracle
    Group=oinstall
    Restart=always
    ExecStart=/u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
    ExecStop=/u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D stop
    
    [Install]
    WantedBy=multi-user.target
    
    [root@DBVP ~]# chmod 644 /etc/systemd/system/dbvlogdaemon.service
    
    [root@DBVP ~]# systemctl daemon-reload
    
    [root@DBVP ~]# systemctl enable dbvlogdaemon.service
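With the unit in place, the daemon is managed like any other systemd service (the same unit can be created on the standby server as well):

[root@DBVP ~]# systemctl start dbvlogdaemon.service
[root@DBVP ~]# systemctl status dbvlogdaemon.service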

Of course this does not avoid an impact in case of a daemon crash, which can be simulated with a kill command.

     

    Check running daemon
    oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ps -ef | grep dbvctl | grep -v grep
    oracle    4299     1  0 08:25 ?        00:00:02 /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
    
    oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_SITE1 -D status
    Dbvisit Daemon process is running with pid 4299.
    
    oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ps -ef | grep dbvctl | grep -v grep
    oracle    4138     1  0 08:25 ?        00:00:01 /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -D start
    
    oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_SITE1 -D status
    Dbvisit Daemon process is running with pid 4138.
    
    

     

    Daemon Parameter
    # DMN_DBVISIT_INTERVAL     - interval in sec for dbvisit schedule on source
    # DMN_MONITOR_INTERVAL     - interval in sec for log monitor schedule on source
    # DMN_DBVISIT_TIMEOUT      - max sec for a dbvisit process to complete on source
    # DMN_MONITOR_TIMEOUT      - max sec for a monitor process to complete on source
    # DMN_MONITOR_LOG_NUM      - number of logs to monitor on source
    # DMN_MAX_FAIL_NOTIFICATIONS - max number of emails sent on failure on source
    # DMN_BLACKOUT_STARTTIME   - blackout window start time HH:MI on source
    # DMN_BLACKOUT_ENDTIME     - blackout window end time HH:MI on source
    # DMN_DBVISIT_INTERVAL_DR  - interval in sec for dbvisit schedule on destination
    # DMN_MONITOR_INTERVAL_DR  - interval in sec for log monitor schedule on destination
    # DMN_DBVISIT_TIMEOUT_DR   - max sec for a dbvisit process to complete on destination
    # DMN_MONITOR_TIMEOUT_DR   - max sec for a monitor process to complete on destination
    # DMN_MONITOR_LOG_NUM_DR   - number of logs to monitor on destination
    # DMN_MAX_FAIL_NOTIFICATIONS_DR - max number of emails sent on failure on destination
    # DMN_BLACKOUT_STARTTIME_DR- blackout window start time HH:MI on destination
    # DMN_BLACKOUT_ENDTIME_DR  - blackout window end time HH:MI on destination
    
    

    With the daemon, we can pause the archive send/apply process using the DMN_BLACKOUT parameters.

To set up our lab we will act on the most important parameters:
• DMN_MONITOR_INTERVAL (Primary) and DMN_MONITOR_INTERVAL_DR (Standby).
  The monitor interval gives the frequency at which Dbvisit checks for new archive logs, acting only if some exist.
• DMN_DBVISIT_INTERVAL (Primary) and DMN_DBVISIT_INTERVAL_DR (Standby).
  The Dbvisit interval gives the frequency at which Dbvisit forces a send/apply run. This action depends on the LOGSWITCH DDC parameter. The recommendation is not to go below 5 minutes.
    oracle@DBVP:/oracle/u01/app/dbvisit/standby/conf/ [DBVPDB] pwd
    /oracle/u01/app/dbvisit/standby/conf
    
    oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] egrep 'DMN_DBVISIT_INTERVAL|DMN_MONITOR_INTERVAL' dbv_DBVPDB_SITE1.env
    # DMN_DBVISIT_INTERVAL     - interval in sec for dbvisit schedule on source
    # DMN_MONITOR_INTERVAL     - interval in sec for log monitor schedule on source
    # DMN_DBVISIT_INTERVAL_DR  - interval in sec for dbvisit schedule on destination
    # DMN_MONITOR_INTERVAL_DR  - interval in sec for log monitor schedule on destination
    DMN_DBVISIT_INTERVAL = 300
    DMN_MONITOR_INTERVAL = 60
    DMN_DBVISIT_INTERVAL_DR = 300
    DMN_MONITOR_INTERVAL_DR = 60

     

The LOGSWITCH parameter determines whether a database log switch (alter system switch logfile) should be triggered at each Dbvisit execution.
N (default value): only if there are no new archive logs to transfer.
Y: at every execution, independently of the archive log creation.
I (Ignore): never. To be used with caution.

A daemon restart is mandatory after any DDC configuration file update.
    [root@DBVP ~]# service dbvlogdaemon stop
    Redirecting to /bin/systemctl stop dbvlogdaemon.service
    [root@DBVP ~]# service dbvlogdaemon start
    Redirecting to /bin/systemctl start dbvlogdaemon.service
    
    [root@DBVS ~]# service dbvlogdaemon stop
    Redirecting to /bin/systemctl stop dbvlogdaemon.service
    [root@DBVS ~]# service dbvlogdaemon start
    Redirecting to /bin/systemctl start dbvlogdaemon.service

     

    Send and apply archive log demo
    Get current date and primary current sequence.
    SQL> select sysdate from dual;
    
    SYSDATE
    -------------------
    2018/03/28 12:30:50
    
    SQL> select max(sequence#) from v$log;
    
    MAX(SEQUENCE#)
    --------------
               179
    
    
    Generate a Dbvisit gap report.
    oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
    =============================================================
    Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21393)
    dbvctl started on DBVP: Wed Mar 28 12:30:57 2018
    =============================================================
    
    Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281230:
    -------------------------------------------------------------
    Destination database on DBVS is at sequence: 178.
    Source database on DBVP is at log sequence: 179.
    Source database on DBVP is at archived log sequence: 178.
    Dbvisit Standby last transfer log sequence: 178.
    Dbvisit Standby last transfer at: 2018-03-28 12:29:14.
    
    Archive log gap for thread 1:  0.
    Transfer log gap for thread 1: 0.
    Standby database time lag (DAYS-HH:MI:SS): +00:01:27.
    
    
    =============================================================
    dbvctl ended on DBVP: Wed Mar 28 12:31:06 2018
    =============================================================

No archive log needs to be sent and applied on the standby. Both databases are in sync.

    
    
    Generate logfile switch
    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> alter system switch logfile;
    
    System altered.
    
    
    Check current date and primary database current sequence.
    SQL> select sysdate from dual;
    
    SYSDATE
    -------------------
    2018/03/28 12:31:29
    
    SQL> select max(sequence#) from v$log;
    
    MAX(SEQUENCE#)
    --------------
               182
    
    
    Generate new Dbvisit gap reports.
    oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
    =============================================================
    Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21454)
    dbvctl started on DBVP: Wed Mar 28 12:31:38 2018
    =============================================================
    
    Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281231:
    -------------------------------------------------------------
    Destination database on DBVS is at sequence: 178.
    Source database on DBVP is at log sequence: 182.
    Source database on DBVP is at archived log sequence: 181.
    Dbvisit Standby last transfer log sequence: 178.
    Dbvisit Standby last transfer at: 2018-03-28 12:29:14.
    
    Archive log gap for thread 1:  3.
    Transfer log gap for thread 1: 3.
    Standby database time lag (DAYS-HH:MI:SS): +00:02:27.
    
    
    =============================================================
    dbvctl ended on DBVP: Wed Mar 28 12:31:47 2018
    =============================================================
We can see that we have 3 new archive logs to transfer and apply on the standby.
There is a 3-sequence lag between the two databases.
    
    
    oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
    =============================================================
    Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21571)
    dbvctl started on DBVP: Wed Mar 28 12:32:19 2018
    =============================================================
    Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281232:
    -------------------------------------------------------------
    Destination database on DBVS is at sequence: 178.
    Source database on DBVP is at log sequence: 182.
    Source database on DBVP is at archived log sequence: 181.
    Dbvisit Standby last transfer log sequence: 181.
    Dbvisit Standby last transfer at: 2018-03-28 12:32:13.
    Archive log gap for thread 1:  3.
    Transfer log gap for thread 1: 0.
    Standby database time lag (DAYS-HH:MI:SS): +00:02:27.
    =============================================================
    dbvctl ended on DBVP: Wed Mar 28 12:32:27 2018
    =============================================================
The 3 archive logs have been automatically transferred by the daemon to the standby within the next minute.
    
    
    oracle@DBVP:/u01/app/dbvisit/standby/conf/ [DBVPDB] /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i
    =============================================================
    Dbvisit Standby Database Technology (8.0.16_0_g4e0697e2) (pid 21679)
    dbvctl started on DBVP: Wed Mar 28 12:33:00 2018
    =============================================================
    
    Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201803281233:
    -------------------------------------------------------------
    Destination database on DBVS is at sequence: 181.
    Source database on DBVP is at log sequence: 182.
    Source database on DBVP is at archived log sequence: 181.
    Dbvisit Standby last transfer log sequence: 181.
    Dbvisit Standby last transfer at: 2018-03-28 12:32:13.
    
    Archive log gap for thread 1:  0.
    Transfer log gap for thread 1: 0.
    Standby database time lag (DAYS-HH:MI:SS): +00:01:13.
    
    
    =============================================================
    dbvctl ended on DBVP: Wed Mar 28 12:33:09 2018
    =============================================================

One minute later, the standby daemon applied the new archive logs. Both databases are in sync.

     

    Conclusion

The new Dbvisit daemon feature adds real flexibility in sending and applying archive logs, and helps improve the customer RPO and RTO. We still might want to keep a daily crontab gap report emailed to the DBA team: this also ensures that the daemon is alive (see the sketch below).
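A minimal sketch of such a crontab entry for the oracle user — assuming mailx is available and adapting the schedule and recipient address to your environment:

# Daily Dbvisit gap report e-mailed to the DBA team (crontab of the oracle user)
0 7 * * * /u01/app/dbvisit/standby/dbvctl -d DBVPDB_SITE1 -i 2>&1 | mailx -s "Dbvisit gap report DBVPDB_SITE1" dba-team@example.com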

Log switches and sending archive logs to the standby consume real system resources. The Dbvisit daemon will also help fine-tune their use.

    Note that the daemon processes must be restarted after each daylight saving clock change.

     

    Cet article Dbvisit Standby Archive Log Daemon est apparu en premier sur Blog dbi services.

    Automating Oracle Linux Installation with Kickstart

    Fri, 2018-04-06 08:10
Kickstart?

If you need to set up several Oracle Linux systems from scratch for your Oracle databases, it can be tedious to repeat the installation tasks again and again on each server.
Automation and standardization are the keys.
Kickstart provides an easy way to accomplish mass deployment.

    Kickstart configuration files

Kickstart uses a Kickstart configuration file to perform the deployment.
Maintaining ready-to-go Kickstart configurations is easy.
In our demo, we will use an FTP server to store and access our configuration file.


Let’s go! First, install an FTP server

On an Oracle Linux 7.2 server, just type the following command to install an FTP server and FTP clients:

    yum install vsftpd ftp lftp
    


Then adapt the timeout parameters to avoid disconnections while deploying your server.
Make sure anonymous access is enabled.

    [root@localhost ~]# sed '/^#/d' /etc/vsftpd/vsftpd.conf 
    
    anonymous_enable=YES
    local_enable=YES
    write_enable=YES
    local_umask=022
    dirmessage_enable=YES
    xferlog_enable=YES
    connect_from_port_20=YES
    xferlog_std_format=YES
    idle_session_timeout=6000
    data_connection_timeout=1200
    listen=NO
    listen_ipv6=YES
    pam_service_name=vsftpd
    userlist_enable=YES
    tcp_wrappers=YES
    

and start your vsftpd server:

    systemctl start vsftpd
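Depending on your setup, you may also want vsftpd to start at boot and, if firewalld is active, allow the FTP service through it:

systemctl enable vsftpd
firewall-cmd --permanent --add-service=ftp
firewall-cmd --reload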
    

Then create your Kickstart configuration file on the FTP server:

    vi /var/ftp/pub/myksfile.ks
    

And copy/paste the whole content below. I will explain parts of the file later:

    ########################################################################
    ########################################################################
    ##                                                                    ##
    ##               Kickstart for OEL7 :  olg.dbi-services.com           ##
    ##                                                                    ##
    ########################################################################
    ########################################################################
    
    # install through HTTP
    ########################################################################
    install
    cdrom
    
    
    # locale settings
    ########################################################################
    lang en_US.UTF-8
    keyboard --vckeymap=ch --xlayouts='ch'
    timezone --utc Europe/Zurich
    
    
    # X is not configured on the installed system. 
    ########################################################################
    skipx
    
    
    # installation mode
    ########################################################################
    text
    reboot --eject
    
    
    # Partition table initialization
    ########################################################################
    zerombr
    
    
    # Network configuration
    # Oracle Linux 7: How to modify Network Interface names (Doc ID 2080965.1)
    ########################################################################
    ### network --device eth0 --bootproto static --ip 192.168.56.102 --netmask 255.255.255.0 --gateway 192.168.56.1 --nameserver it.dbi-services.com --hostname olg.dbi-services.com net.ifnames=0
    
    
    # security settings
    ########################################################################
    rootpw      toor
    firewall    --enabled --ssh
    selinux   --enforcing
    authconfig  --enableshadow --passalgo=sha512
    
    
    # Partitioning and bootloader
    ########################################################################
    # only 1 disk presented to the O.S during installation time
    # net.ifnames=0 to use eth name for network devices
    bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
    firstboot       --disable
    clearpart       --all          --initlabel
    part /boot      --fstype xfs   --ondisk=/dev/sda --size=512
    part swap       --size=2048   --ondisk=/dev/sda
    part pv.01      --size=100     --ondisk=/dev/sda --grow
    volgroup RHELVG pv.01
    logvol /        --fstype xfs   --name=RootLV   --vgname=RHELVG --size=8196
    logvol /usr     --fstype xfs   --name=UsrLV    --vgname=RHELVG --size=2048
    logvol /tmp     --fstype xfs   --name=TmpLV    --vgname=RHELVG --size=2048
    logvol /var     --fstype xfs   --name=VarLV    --vgname=RHELVG --size=4096
    logvol /var/log/audit     --fstype xfs   --name=AuditLV    --vgname=RHELVG --size=2048
    logvol /opt     --fstype xfs   --name=OptLV    --vgname=RHELVG --size=2048
    logvol /home    --fstype xfs   --name=HomeLV   --vgname=RHELVG --size=2048
    logvol /u01     --fstype xfs   --name=u01LV    --vgname=RHELVG --size=2048
    
    
    
    # packages + RPMs
    ########################################################################
    %packages
    @base
    
    # system components
    device-mapper-multipath
    kexec-tools
    lvm2
    e4fsprogs
    sg3_utils
    lsscsi
    dstat
    ntp
    perl
    postfix
    bc
    
    # VI
    vim-common
    vim-enhanced
    
    # SELINUX
    setroubleshoot
    setroubleshoot-server
    setroubleshoot-plugins
    
    %end
    
    
    # POST installations tasks
    ########################################################################
    %post
    
    modprobe --first-time bonding
    # VLAN kernel module
    # modprobe --first-time 8021q
    
    # configure bond
    ################
    echo "DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BOOTPROTO=static
    IPADDR=192.168.56.149
    NETMASK=255.255.255.0
    GATEWAY=192.168.56.1
    BONDING_OPTS=\"mode=active-backup miimon=100\"
    ONPARENT=yes
    ONBOOT=yes" > /etc/sysconfig/network-scripts/ifcfg-bond0
    
    echo "DEVICE=eth0
    ONBOOT=yes
    MASTER=bond0
    BOOTPROTO=none
    NM_CONTROLLED=no
    SLAVE=yes" > /etc/sysconfig/network-scripts/ifcfg-eth0
    
    echo "DEVICE=eth1
    ONBOOT=yes
    MASTER=bond0
    BOOTPROTO=none
    NM_CONTROLLED=no
    SLAVE=yes" > /etc/sysconfig/network-scripts/ifcfg-eth1
    
    echo "DEVICE=eth2
    ONBOOT=yes
    BOOTPROTO=dhcp
    NM_CONTROLLED=no
    " > /etc/sysconfig/network-scripts/ifcfg-eth2
    
    rm -f /etc/sysconfig/network-scripts/ifcfg-en*
    
    systemctl restart network
    systemctl stop NetworkManager.service
    systemctl disable NetworkManager.service
    
    
    # Switch to Postfix
    ###################
    alternatives --set mta  /usr/sbin/sendmail.postfix
    
    
    # HOSTS FILE
    ############
# (the entries appended to /etc/hosts and /etc/ntp.conf were truncated in the original post)
    
    # DNS config
    #############
# (the contents written to /etc/resolv.conf, /etc/postfix/main.cf, /etc/postfix/master.cf and /etc/postfix/generic were truncated in the original post)
    postmap /etc/postfix/generic
    
    
    
    # user management + SUDO privilege delegation
    ########################################################################
    adduser admora
    echo toor | passwd admora --stdin
    
    echo "admora    ALL=NOPASSWD: ALL
    #admora  ALL = NOPASSWD: /bin/su - oracle , /bin/su -" >> /etc/sudoers 
    
    
    # Enable services
    ########################################################################
    systemctl enable ntpd.service
    systemctl start ntpd.service
    systemctl enable ntpdate.service
    
    
    # Oracle +Nagios prereqs
    ########################################################################
    yum -y install oracle-rdbms-server-11gR2-preinstall oracle-rdbms-server-12cR1-preinstall oracle-database-server-12cR2-preinstall
    yum -y install openssl openssl-devel
    yum -y install net-tools
    # as of ALUA RHEL7.4 incompatibilities (stay on 7.2 and lock repo. later)
    #yum -y update
    
    
    # Oracle tuned configuration
    ########################################################################
    mkdir -p /usr/lib/tuned/dbiOracle
# (the tuned profile written to /usr/lib/tuned/dbiOracle/tuned.conf was truncated in the original post)
echo 1 > /sys/class/fc_host/host1/issue_lip
    echo 1 > /sys/class/fc_host/host2/issue_lip
    
    echo "# Format:
    # alias wwid
    #
    LUN_ORAFRA 360030d90466abf0660191bde985bba15
    LUN_ORADBF 360030d906382c2065827918ddb6506da" >> /etc/multipath/bindings
    
    cat > /etc/multipath.conf <<EOF
    
    defaults {
       polling_interval 60
             }
    
    blacklist {
     devnode "^sd[a]"
            devnode "^(zram|ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
            devnode "^hd[a-z]"
            devnode "^cciss!c[0-9]d[0-9]*"
    }
    blacklist_exceptions {
     wwid "360030d90466abf0660191bde985bba15"
     wwid "360030d906382c2065827918ddb6506da"
     #vendor   "DataCore"
     #product   "Virtual Disk"
                  }
    devices {
     device {
       vendor    "DataCore"
       product   "Virtual Disk"
       path_checker   tur
       prio     alua
       failback   10
       no_path_retry   fail
    
       dev_loss_tmo   infinity
       fast_io_fail_tmo  5
    
       rr_min_io_rq    100
       # Alternative option – See notes below
       # rr_min_io  100
    
       path_grouping_policy  group_by_prio
       # Alternative policy - See notes below
       # path_grouping_policy failover
    
        # optional - See notes below
       user_friendly_names yes
                     }
             }
    EOF
    
    systemctl reload multipathd
    
    # final post steps (Bugs, security)
    ####################################
    systemctl disable rdma.service
    touch /.autorelabel
    dracut -f 
    
    %end
    
    

Test that you can access your file anonymously through FTP with your browser:
ftp://192.168.56.101/pub/myksfile.ks
Or via an FTP client:

    $ lftp ftp://192.168.56.101
    
    lftp 192.168.56.101:~> cat /pub/myksfile.ks
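Optionally — this is just an extra check, not part of the original procedure — if the pykickstart package is installed, you can syntax-check the Kickstart file before using it:

yum -y install pykickstart
ksvalidator /var/ftp/pub/myksfile.ks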
    
    You can now deploy your Oracle Linux server for a new database:

When you arrive at the installation screen, booting from the DVD, press ESC to get the boot prompt and type the command below.

For the demo, I’m using a VirtualBox VM with one DVD drive attached to the ISO file downloaded from the Oracle site: V100082-01.iso (Oracle Linux 7.2).

    linux ks=ftp://192.168.56.101/pub/myksfile.ks
    

Then press ENTER to start the installation.

Here, if you don’t get “RTNETLINK answers: File exists” messages, something is wrong in your network configuration.

At this step, if you see the green progress line, it means you have entered Anaconda and that your installation process is ongoing.

If you receive some Pane errors, once again, something is wrong in the network configuration. This is the hard part: depending on the customer infrastructure, you may need to set the IP configuration manually.
Below are 2 examples: one using a static IP configuration and the other a VLAN configuration.

    static IP configuration
    
    linux ip=192.168.56.102 netmask=255.255.255.0 gateway=192.168.56.1 servername=it.dbi-services.com ks=ftp://192.168.56.101/pub/myksfile.ks net.ifnames=0
    
    static IP configuration with use of VLAN (VLANID=27 in this example)
    
    linux ip=192.168.56.102 netmask=255.255.255.128 gateway=192.168.56.1 servername=it.dbi-services.com ks=ftp://192.168.56.1/myksfile.ks net.ifnames=0 vlan=VLAN27.27:eth0
    

Anaconda will now perform the partitioning:

For the demo, I’m using a 40G disk. If you don’t give enough space, or if you have made some errors in your configuration, you will be prompted to fix the configuration issues. In that case, you had better restart the installation from the beginning.

    # Partitioning and bootloader
    ########################################################################
    # only 1 disk presented to the O.S during installation time
    # net.ifnames=0 to use eth name for network devices
    bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
    firstboot       --disable
    clearpart       --all          --initlabel
    part /boot      --fstype xfs   --ondisk=/dev/sda --size=512
    part swap       --size=2048   --ondisk=/dev/sda
    part pv.01      --size=100     --ondisk=/dev/sda --grow
    volgroup RHELVG pv.01
    logvol /        --fstype xfs   --name=RootLV   --vgname=RHELVG --size=8196
    logvol /usr     --fstype xfs   --name=UsrLV    --vgname=RHELVG --size=2048
    logvol /tmp     --fstype xfs   --name=TmpLV    --vgname=RHELVG --size=2048
    logvol /var     --fstype xfs   --name=VarLV    --vgname=RHELVG --size=4096
    logvol /var/log/audit     --fstype xfs   --name=AuditLV    --vgname=RHELVG --size=2048
    logvol /opt     --fstype xfs   --name=OptLV    --vgname=RHELVG --size=2048
    logvol /home    --fstype xfs   --name=HomeLV   --vgname=RHELVG --size=2048
    logvol /u01     --fstype xfs   --name=u01LV    --vgname=RHELVG --size=2048
    

When the partitioning is finished, the package installation process will begin.

You can personalize the packages you want to install from the DVD.

    # packages + RPMs
    ########################################################################
    %packages
    @base
    
    # system components
    device-mapper-multipath
    kexec-tools
    lvm2
    e4fsprogs
    sg3_utils
    lsscsi
    dstat
    ntp
    perl
    postfix
    bc
    

During the installation, you can TAB between consoles to get more information on what’s going on.
Console 2 lets you type shell commands:

For the demo, I’m using 3 Ethernet cards: 2 for the bonding and 1 NAT for the internet connection.
With the ip a command, I can see which interface names and IPs I’m currently using during the installation process.
Because I set net.ifnames=0, the eth naming will be used for my network interfaces after reboot. I will configure them in the POST installation tasks.

 bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
    

Switching between Console 1, Console 3 and Console 5 lets you see what Anaconda is doing. The interesting part is the %post message: it means you are in the POST installation tasks.

Configuration files of your system can be modified. In my demo, I configure bonding, postfix and multipathing, and yum installs the oracle-database-server-12cR2-preinstall package with its dependencies!

The script generated from the Kickstart configuration file is stored in the /tmp folder; here it is called /tmp/ks-script-JeYnWI.log.
After reboot, you can inspect it if you like.

     

    Cet article Automating Oracle Linux Installation with Kickstart est apparu en premier sur Blog dbi services.

    Password Validation in MySQL

    Thu, 2018-04-05 05:05
    Introduction on validate_password plugin

Since version 5.6.6, MySQL provides a new security plugin named Password Validation Plugin. The password-validation plugin aims to test password strength and improve security. The goal of this blog is to give a short overview of the functionalities provided by this plugin and to illustrate them with concrete examples.

As explained in the documentation, the validate_password plugin implements two capabilities:

    1. The plugin checks the password against the current password policy and rejects the password if it is weak
    2. The VALIDATE_PASSWORD_STRENGTH() SQL function assesses the strength of potential passwords. The function takes a password argument and returns an integer from 0 (weak) to 100 (strong).

The validate_password plugin implements three levels of password checking, described below:

    • LOW – policy tests password length only.
    • MEDIUM (Default) – policy adds the conditions that passwords must contain at least 1 numeric character, 1 lowercase character, 1 uppercase character, and 1 special (nonalphanumeric) character
    • STRONG – policy adds the condition that password substrings of length 4 or longer must not match words in the dictionary file, if one has been specified.

The validate_password plugin exposes several settings that can be seen using the SHOW VARIABLES command:

    SHOW VARIABLES LIKE 'validate_password.%';
    +--------------------------------------+--------+
    | Variable_name                        | Value  |
    +--------------------------------------+--------+
    | validate_password.check_user_name    | ON     |
    | validate_password.dictionary_file    |        |
    | validate_password.length             | 8      |
    | validate_password.mixed_case_count   | 1      |
    | validate_password.number_count       | 1      |
    | validate_password.policy             | MEDIUM |
    | validate_password.special_char_count | 1      |
    +--------------------------------------+--------+
    7 rows in set (0,01 sec)
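Each of these settings can also be tuned individually at runtime; a quick sketch from the shell (adapt the values to your own policy):

# Example: require longer passwords and two special characters
mysql -uroot -p -e "SET GLOBAL validate_password.length=12; SET GLOBAL validate_password.special_char_count=2; SHOW VARIABLES LIKE 'validate_password.%';"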
    
    
    Tests with validate_password.policy=LOW

    First let’s set the validate_password.policy to LOW to check which tests are done by the plugin. It should only check password length.

    SET GLOBAL validate_password.policy=LOW;
    
    +--------------------------------------+-------+
    | Variable_name                        | Value |
    +--------------------------------------+-------+
    | validate_password.check_user_name    | ON    |
    | validate_password.dictionary_file    |       |
    | validate_password.length             | 8     |
    | validate_password.mixed_case_count   | 1     |
    | validate_password.number_count       | 1     |
    | validate_password.policy             | LOW   |
    | validate_password.special_char_count | 1     |
    +--------------------------------------+-------+
    
    create user 'steulet'@'localhost' identified by '1234567';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
    
    create user 'steulet'@'localhost' identified by '12345678';
    Query OK, 0 rows affected (0,01 sec)

     

    Tests with validate_password.policy=MEDIUM

    MEDIUM policy adds the conditions that passwords must contain at least 1 numeric character, 1 lowercase character, 1 uppercase character, and 1 special (nonalphanumeric) character

    SET GLOBAL validate_password.policy=MEDIUM;
    Query OK, 0 rows affected (0,00 sec)
    
    SHOW VARIABLES LIKE 'validate_password.%';
    +--------------------------------------+-------------------------------------+
    | Variable_name | Value |
    +--------------------------------------+-------------------------------------+
    | validate_password.check_user_name | ON |
    | validate_password.dictionary_file |  |
    | validate_password.length | 8 |
    | validate_password.mixed_case_count | 1 |
    | validate_password.number_count | 1 |
    | validate_password.policy | MEDIUM |
    | validate_password.special_char_count | 1 |
    +--------------------------------------+-------------------------------------+
    7 rows in set (0.00 sec)
    create user 'hueber'@'localhost' identified by '12345678';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
    
    create user 'hueber'@'localhost' identified by '1234567L';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
    
    create user 'hueber'@'localhost' identified by '123456zL';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
    
    create user 'hueber'@'localhost' identified by '12345!zL';
    Query OK, 0 rows affected (0.01 sec)

     

    Tests with validate_password.policy=STRONG

In order to check validate_password.policy=STRONG, I uploaded a password file used for brute-force attacks. You can download this file from: https://github.com/danielmiessler/SecLists/blob/master/Passwords/Most-Popular-Letter-Passes.txt

    SET GLOBAL validate_password.dictionary_file='/u01/mysqldata/mysqld2/PasswordList';
    Query OK, 0 rows affected (0,00 sec)
    SET GLOBAL validate_password.policy=strong;
    Query OK, 0 rows affected (0,00 sec)
    SHOW VARIABLES LIKE 'validate_password.%';
    +--------------------------------------+-------------------------------------+
    | Variable_name | Value |
    +--------------------------------------+-------------------------------------+
    | validate_password.check_user_name | ON |
    | validate_password.dictionary_file | /u01/mysqldata/mysqld2/PasswordList |
    | validate_password.length | 8 |
    | validate_password.mixed_case_count | 1 |
    | validate_password.number_count | 1 |
    | validate_password.policy | STRONG |
    | validate_password.special_char_count | 1 |
    +--------------------------------------+-------------------------------------+

    7 rows in set (0.00 sec)

    create user 'neuhaus'@'localhost' identified by 'Manager1;';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
    
    create user 'neuhaus'@'localhost' identified by 'Password1;';
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

    If I decrease the validate_password.policy to medium, the plugin doesn’t check the dictionary file anymore:

    SET GLOBAL validate_password.policy=medium;
    Query OK, 0 rows affected (0,00 sec)
    
    create user 'neuhaus'@'localhost' identified by 'Password1;';
    Query OK, 0 rows affected (0,00 sec)
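Note that the dotted variable names shown above come from the validate_password component of MySQL 8.0; on that version you can make these settings survive a restart with SET PERSIST instead of editing the option file — a minimal sketch:

# Persist the policy and dictionary file across restarts (MySQL 8.0)
mysql -uroot -p -e "SET PERSIST validate_password.policy=STRONG; SET PERSIST validate_password.dictionary_file='/u01/mysqldata/mysqld2/PasswordList';"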
    Function VALIDATE_PASSWORD_STRENGTH()

As explained above, the VALIDATE_PASSWORD_STRENGTH() function tests a password and returns an integer from 0 (weak) to 100 (strong) representing the password strength.

    select VALIDATE_PASSWORD_STRENGTH('abcd');
    +------------------------------------+
    | VALIDATE_PASSWORD_STRENGTH('abcd') |
    +------------------------------------+
    | 25 |
    +------------------------------------+
    1 row in set (0.00 sec)
    select VALIDATE_PASSWORD_STRENGTH('password');
    +----------------------------------------+
    | VALIDATE_PASSWORD_STRENGTH('password') |
    +----------------------------------------+
    | 50 |
    +----------------------------------------+
    1 row in set (0.00 sec)
    select VALIDATE_PASSWORD_STRENGTH('Manager1!');
    +-----------------------------------------+
    | VALIDATE_PASSWORD_STRENGTH('Manager1!') |
    +-----------------------------------------+
    | 75 |
    +-----------------------------------------+
    1 row in set (0.00 sec)
    select VALIDATE_PASSWORD_STRENGTH('aZbq!1)m8N');
    +------------------------------------------+
    | VALIDATE_PASSWORD_STRENGTH('aZbq!1)m8N') |
    +------------------------------------------+
    | 100 |
    +------------------------------------------+
    1 row in set (0.00 sec)

     

     

    Cet article Password Validation in MySQL est apparu en premier sur Blog dbi services.
