Yann Neuhaus

dbi services technical blog

Transparent Data Encryption – Certificate in master database: is it really a good practice?


As an encryption solution in SQL Server, Transparent Data Encryption (TDE) is simple and quick to set up, which is why it is a common encryption mechanism.

TDE encrypts data at the page level, before SQL Server writes it to disk, using a database encryption key protected by a certificate. It is meant to protect your environment in scenarios where SQL Server files (backups or data files) are stolen.
By default the certificate used for encryption is stored in the master database. But is that really a good practice?

Let’s see with a common scenario:
– One MSSQL instance where TDE is enabled for one database
– One MSSQL instance without any certificate

Configuring TDE for a database

Create a master key:

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@$$w0rd'
GO

Create a certificate to use for TDE:

USE master
GO
CREATE CERTIFICATE CertinMaster
   WITH SUBJECT = 'Self-Signed Certificate in Master',   
   EXPIRY_DATE = '20241231';  
GO  

[Image: Certificate]

Create a database encryption key in the database you want to encrypt, protected by the certificate:

USE dummy
GO
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE CertinMaster
GO

Enable encryption on the database:

USE master
GO
ALTER DATABASE dummy SET ENCRYPTION ON
GO

[Image: tde]

At this point:
– TDE is configured for the dummy database
– the certificate used for encryption is located in the master database
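A quick check with sqlcmd confirms this state (a sketch only: <source_instance> is a placeholder for your instance name, and encryption_state = 3 means the database is fully encrypted):

sqlcmd -S <source_instance> -E -Q "SELECT db_name(database_id) AS db, encryption_state FROM sys.dm_database_encryption_keys"
sqlcmd -S <source_instance> -E -Q "SELECT name, subject FROM master.sys.certificates WHERE name = 'CertinMaster'"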

Restoring the encrypted database to another instance – first try

After backing up the dummy database and copying the backup to another MSSQL instance, an error is displayed when trying to restore this database, because the required certificate is missing:

[Image: restore without certificate impossible]

This example shows how TDE protects data in a scenario where someone has stolen your backup file.
A similar error appears if you try to attach the stolen data files instead.

Restoring the encrypted database to another instance – second try

After backing up the master database, let's try to restore it on the other MSSQL environment.
Restoring the master database is a bit more complicated than restoring a user database, but not impossible.

First, the instance must be started in single-user mode by adding the '-m' flag to the startup parameters.
PowerShell code executed on the target server:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
New-ItemProperty -Path $Path -PropertyType String -Name SQLArg3 -Value "-m"

Note: depending on how many startup parameters you already have (the default is 3, SQLArg0 to SQLArg2), adjust the SQLArgX property name accordingly.

A restart of the instance is necessary to apply the new startup parameter. Do not start the Agent service, otherwise it will take the only available connection to the instance running in single-user mode.

As only one connection to the instance is allowed in this mode, PowerShell is well suited for running the restore:

$Query= "RESTORE DATABASE master FROM DISK = 'C:\Mountpoints\Backup\master.bak' WITH REPLACE";
Invoke-Sqlcmd -ServerInstance KERRIGAN\ARCTURUS -Query $Query

The SQL Server instance stops automatically once the master database is restored.

First remove the Single user mode:

$Path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.ARCTURUS\MSSQLServer\Parameters\";
Remove-ItemProperty -Path $Path -Name SQLArg3

After disabling Single User mode, SQL Server services (Engine + Agent) can start again.

Querying the new master database shows that it effectively holds the certificate:
[Image: certificate copied]

Finally, restoring the dummy database on this instance works without any trouble: the data is freely accessible.

Conclusion

For complete protection, I do not recommend using this simple form of Transparent Data Encryption. As a best practice, I recommend storing the encryption key on a third-party system using Extensible Key Management (EKM).

 


Foglight

Mon, 2018-12-10 11:37

Like many Oracle database administrators, I am a regular user of Oracle Enterprise Manager to monitor and administer my clients' databases. I am lucky because most of them have the Diagnostics Pack and the Tuning Pack :-)

But we always have to stay curious and discover new products. Following the description by my colleague Nicolas Penot, I installed Foglight:

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-01/

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-02/

The installation was successful and very quick. Compare that with OEM 13c: even if you know the product well and have a powerful server, you are pretty efficient if you manage to install it in half a day.

I successfully discovered two Oracle databases (versions 12.2.0.1 and 18.3.0.0). As you can see in the picture below, you can also monitor MySQL, PostgreSQL, DB2, and SQL Server.

[Image: fg1]

 

The first database summary page shows a lot of useful information (process activity, memory activity, logical reads, and so on):

[Image: fg2]

 

[Image: fg3]

We can navigate through different menus, for example the Storage Tablespace Summary:

[Image: fg4]

We can display the datafile I/O summary:

[Image: fg5]

 

We can display the pluggable databases and get an overview of workload metrics with nice graphs:

[Image: fg6]

 

[Image: fg7]

 

Foglight also displays the alert.log error messages in four categories (Informational, Warning, Critical, or Fatal):

To test this, I created false critical messages in the alert.log file, using dbms_system.ksdwrt() to generate fake ORA-00600 and ORA-04031 error messages:

[Image: fg8]
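For reference, such a fake entry can be generated roughly like this (a sketch only, for test environments; the first argument, 2, sends the message to the alert log):

sqlplus -s / as sysdba <<'EOF'
exec dbms_system.ksdwrt(2, 'ORA-00600: internal error code, arguments: [dbi test]')
exit
EOF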

 

You can visualize, enable, or disable the alarms and edit or modify the collections; the configuration is intuitive:

[Image: fg9]

The report menu is also interesting; many templates for different databases are predefined:

[Image: fg10]

It is very difficult to make a choice between Foglight and OEM. My first opinion (but I did not test Foglight in an active production environment) is that Foglight is a monitoring solution for many types of databases (MySQL, Oracle, PostgreSQL, SQL Server, DB2), whereas OEM is a monitoring and administration tool for Oracle databases.

Admittedly, you can add the SQL Server or MySQL plugins, or the Medora plugin for PostgreSQL, to OEM in order to administer SQL Server, MySQL or PostgreSQL databases, but you will never reach the powerful functionality OEM offers to monitor and administer Oracle databases.

The big advantage of both OEM and Foglight is their threshold and notification functionality and their reporting facilities. I did not test SQL PI in Foglight much, and I wonder whether, as with Oracle, we have the possibility to display a historical graph of what happened the week before, to display the execution plan, and of course to run the SQL Tuning Advisor to propose another execution plan, for example.

The time needed to administer Foglight seems to be equivalent to OEM. You cannot install such enterprise products and just let them live: like OEM, Foglight seems to be a complex system and requires significant time to run effectively.

If you do not have many databases or a big budget, you can use simple SQL scripts to keep you alerted. If you have several different types of databases to monitor, it might be a good idea to test and use Foglight. If most of your databases are Oracle-based and you have paid for the Tuning and Diagnostics Packs, use OEM to fully benefit from its administration and monitoring console.

 


First immersion in the Docker Conference EU 2018

Sat, 2018-12-08 12:02

In short, a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on-labs sessions and hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.

[Image: dockercon]

Anyway, the workshops and hands-on-labs were very interesting, especially if you like to mix theory and practice. But I have to admit that we sometimes struggled to keep up with the pace of some workshops. We ran into what I call the "context switch" issue between following the trainer's explanations and doing the exercises at the same time :) The Migrating .NET Applications to Docker workshop with Elton Stoneman (Docker) is one that comes to mind in this case :)

As database specialists at dbi services we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops in this area, including the Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s session with Don Stewart (Docker) and Mark Church (Docker), as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the possible solutions for implementing file-based, block-based and object-based storage in order to address different workload patterns, including file servers, OLTP, Big Data, etc. The container storage landscape is currently large and vendor-specific, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) to offer a simple, community-driven approach and a more predictable and functional interface for the most common use cases. Let's see what happens in the future, but my guess (speculation mode) is that for "specific" applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of the concerns.

[Image: storage drivers]

Even if containers were originally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructure is now mature enough to handle database workloads, and these sessions reinforced my belief that Docker Swarm and K8s are production-ready for databases, at least from a storage perspective.
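As a trivial illustration of that (not from the sessions, just a minimal sketch): a named volume is the simplest way to give a database container storage that outlives the container itself:

docker volume create pgdata
docker run -d --name pg -e POSTGRES_PASSWORD=secret --mount source=pgdata,target=/var/lib/postgresql/data postgres:11
docker rm -f pg      # the container is gone...
docker volume ls     # ...but the pgdata volume and the data files remain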

We also got the opportunity to attend workshops and sessions around container orchestration, mainly Docker Swarm and K8s. It was interesting to see the same question raised again and again by attendees during these sessions: do we have to use Swarm over K8s, or the other way around? This is also a question we are going to face in a dbi services internal project, and obviously there is no black-or-white answer. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday, 4 December 2018. Customer stories like Citizens Bank showed that the choice of an orchestrator depends on different factors, and in the context of this customer they are using both through Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – Features and Workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker). Finally, probably one of my favorite workshops was Troubleshooting with Sysdig by Michael Ducy. Sysdig is one of the well-known monitoring / troubleshooting tools for containers on the market. Let's say it was a subtle combination of a deep dive into Linux kernel principles and practical scenarios using the sysdig tools to fix container issues, as confirmed by my working desktop below:

[Image: sysdig workshop]

In addition to the sessions, workshops and hands-on-labs, new announcements were made at DockerCon EU 2018 during the general sessions, with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as main speakers.

[Image: DockerCon general session]

The first general session's announcements included new tools such as docker-app and docker-assemble, enhanced docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most awaited one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers has been a real success, but the product was not really designed to scale to enterprise-class environments, and this is basically what Docker Desktop Enterprise is supposed to address.
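To give an idea of the docker stack workflow mentioned above (a minimal sketch, assuming a docker-compose.yml in the current directory and a Swarm manager node):

docker swarm init                                    # once, on the manager node
docker stack deploy -c docker-compose.yml demoapp    # deploy or update the stack
docker stack services demoapp                        # list the services of the stack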

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited to developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is not a visionary developer whim anymore and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs. competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let’s finish with the second general session that was more Docker community-oriented and I know how important community may be for contribution and to bring people for interaction as well. As Microsoft with MVPs, Docker Captains are the Docker counterpart and were thanked for their wonderful contribution during this event. But obviously contribution is beyond MVPs, ACEs or Docker captains and Kal De (CTO, EVP, Product Development) explained how to contribute to different Docker projects and showed then contribution figures from the community through GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the DockerCon world was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions taken by the company. I also appreciated the discussions and feedback from other attendees during the hallway track, which will help us prepare for our future challenges on this topic.

 

 

 


PostgreSQL : Get my database server name

Fri, 2018-12-07 16:32

I was looking for a built-in function to get the hostname of the server hosting my PostgreSQL cluster.
It seems that there is no such built-in function. Looking at the available extensions, I found the hostname extension, which allows getting the database server host name.
In this blog I explain how to install and use it. The installation is very easy. The first step is to download it here.
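As a side note (an aside, not part of the extension itself): PostgreSQL does ship the built-in functions inet_server_addr() and inet_server_port(), which return the address and port the server is listening on for the current connection (NULL over a Unix-socket connection), in case that is enough for your needs:

postgres=# SELECT inet_server_addr(), inet_server_port();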

After let’s go to the directory where the archive was decompressed and let’s run the command make

master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:26 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:48 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-

master/pg-hostname-master/ [PG1] make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I. -I./ -I/u01/app/postgres/product/95/db_0/include/server -I/u01/app/postgres/product/95/db_0/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o src/hostname.o src/hostname.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -L/u01/app/postgres/product/95/db_0/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_0/lib',--enable-new-dtags  -shared -o src/hostname.so src/hostname.o
cp sql/hostname.sql sql/hostname--1.0.0.sql
14:46:55 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-

Once done let’s run the command make install

pg-hostname-master/ [PG1] make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/doc/extension'
/bin/install -c -m 644 .//hostname.control '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 644 .//sql/hostname--1.0.0.sql .//sql/hostname--unpackaged--1.0.0.sql  '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 755  src/hostname.so '/u01/app/postgres/product/95/db_0/lib/'
/bin/install -c -m 644 .//doc/hostname.mmd '/u01/app/postgres/product/95/db_0/share/doc/extension/'
14:47:29 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

If everything is ok we should now have the extension in our $PGHOME/share/extension

15:02:39 postgres@dbi-pg-essentials:[PG1] ls -ltra *hostname*
-rw-r--r--. 1 postgres postgres 140 Dec  5 14:47 hostname.control
-rw-r--r--. 1 postgres postgres  96 Dec  5 14:47 hostname--1.0.0.sql
-rw-r--r--. 1 postgres 

And that’s all. We just have now have to install the extension in the database

postgres=# CREATE EXTENSION hostname;
CREATE EXTENSION

And now we can get the hostname of our server:

postgres=# SELECT hostname();
     hostname
-------------------
 dbi-pg-essentials
(1 row)

postgres=#


PostgreSQL 12 : New option --socketdir for pg_upgrade

Fri, 2018-12-07 16:30

PostgreSQL 12 is under development, but it can already be tested. The steps to install this version can be found in this dbi blog. Many thanks to Daniel.
While reading the documentation I found that there is a new parameter for pg_upgrade: --socketdir.
Why this parameter?
In fact, the path name of a Unix-domain socket is limited to 107 usable bytes on Linux (a 108-byte buffer including the terminator). Before PostgreSQL 12, the default directory for the sockets created for the temporary postmasters started by pg_upgrade was the current directory, and depending on the current directory the resulting socket path could exceed that limit. In PostgreSQL 12 the default location is still the current working directory, but the --socketdir parameter now allows us to specify another location.
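A quick way to see whether the current directory would be a problem (just a sketch; .s.PGSQL.50432 is the socket file name used by the temporary postmaster, as visible in the error further below):

echo -n "$(pwd)/.s.PGSQL.50432" | wc -c    # must stay below the limit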

To better understand I am going to upgrade from PostgreSQL 10 to PostgreSQL 12

20:59:44 postgres@dbi-pg-essentials:/u02/pgdata/PG12TEST/ [PG12TEST] psql -U postgres -d postgres
psql (12devel dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11
(1 row)

postgres=#

Now let’s create a very deeply nested directory

mkdir -p ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd

And let’s do a cd into this new directory and let’s try to do an upgrade

$ export PGDATAOLD=/u02/pgdata/PG3
$ export PGBINOLD=/u01/app/postgres/product/10/db_1/bin
$ export PGDATANEW=/u02/pgdata/PG12TEST
$ export PGBINNEW=/u01/app/postgres/product/12dev/db_0/bin

When running pg_upgrade with the check option, we get the following error:

$ pg_upgrade -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: Unix-domain socket path "/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/.s.PGSQL.50432" is too long (maximum 107 bytes)

could not connect to source postmaster started with the command:
"/u01/app/postgres/product/10/db_1/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/u02/pgdata/PG3" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd'" start
Failure, exiting
$

The path name for the socket is indeed too long.
If we use the new parameter, we can specify another location for the sockets, and the checks are now successful:

$ pg_upgrade -c --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

And then we can upgrade the cluster using this new parameter, while still staying in this newly created directory:

$ pg_upgrade  --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$
Conclusion:

In this blog we have seen the new --socketdir option for pg_upgrade. It is good to know that this parameter exists, but in most cases the current working directory should be fine for an upgrade.


DockerCon18 Barcelona – Day 3

Thu, 2018-12-06 06:37

Wednesday was the last day of DockerCon18 Europe. Like the previous day, we started the journey with a two-hour keynote, this time more oriented towards the Docker community. The core message of the keynote: the community is one of the pillars of open-source technologies, and Docker Inc wants to push the community aspect more and more. The community is growing very fast and is very competitive.

They take the opportunity to award the community leader of the year and a new Docker Captain, Bret Fisher.

[Image: Docker community]

Then we attended an interesting session: Docker Storage with Swarm and Kubernetes.

The presenter started the session with a funny part: Container Storage Fake News!! For a few minutes, he listed, one by one, all the fake news related to storage in the container world. The best fake news for us:

RDBMS and databases cannot be run in containers: NO! Official images are available from the vendors. The best example is SQL Server, which comes with a very competitive Docker image for its users.

The core message of the session is that database containers are coming and will increasingly be used and deployed. The very interesting part is the collaboration between Docker and the storage providers, who are developing APIs for Docker compatibility, so in the future each storage provider will have its own API to communicate with Docker containers.
 

The last but not least session of the day for me was about Provisioning and Managing Storage for Docker Containers.

The goal of the session was to explain how storage operations for containers can be managed easily.

The Docker EE platform with Kubernetes (with PVs and PVCs) will help us manage storage operations for containers in the future.

PV = Persistent Volume
PVC = Persistent Volume Claim

They also presented the difference between static and dynamic provisioning in Kubernetes and the future of storage management in Docker using CSI.
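As a quick illustration of the difference (a sketch, assuming a working kubectl context): with dynamic provisioning a claim alone is enough and the PV is created for you by the StorageClass provisioner, whereas with static provisioning an administrator has to create the PVs up front:

kubectl get storageclass     # which dynamic provisioners are available
kubectl get pv,pvc           # persistent volumes and the claims bound to them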

Core message: Docker is making storage a priority.



DockerCon18 Barcelona – Day 2

Thu, 2018-12-06 04:59

Tuesday was the second day of DockerCon18 in Barcelona. We attended the first general session in the morning. It was a mix of presentations, live demos and the participation of big Docker customers in EMEA, such as Société Générale in France, who presented the impact of Docker on their daily business. The main message of the first part of the keynote was: "How Docker can help you with the digital transformation of your business".

In the second part, new features were presented during the live demos:

  • docker stack deployment using Docker EE
  • docker-assemble command: builds a Docker image, without any configuration, starting from a Git repository containing the application source code.
  • docker stack command: deploys an application from a Compose file.
  • docker-app command: a utility that helps make Compose files more reusable and shareable.

Then they presented the introduction of Kubernetes support on the Docker EE platform.

[Image: Screenshot 2018-12-06 at 11.02.01]

Finally, they presented how to deploy an application with the Docker Desktop Application Designer.

The keynote video is also available here, for those interested.

After the keynote, we attended a very interesting workshop about storage in the Docker EE 2.1 platform, given by Don Stewart, a Solution Architect at Docker.

[Image: storage workshop]

In the lab, we discovered the types of storage options that are available and how to implement them within a container environment.
Lab link for those interested: https://github.com/donmstewart/docker-storage-workshop

The first session of the afternoon was about the Docker Enterprise platform 2.1: Architecture Overview and Use Cases.

 

The presentation was split into 3 main parts:

  • Docker Enterprise overview and architecture
  • Docker Enterprise 2.1 – What’s new with demos
  • Next steps

 

The first part of the presentation was more marketing-oriented, introducing the Docker Enterprise platform.

Then the following new features were presented including small demos:

  • Extended Windows Support
  • Extended Kubernetes Support: Windows Server 2016, 1709, 1803, 2019
  • Improved Operational Insights: node metrics, data retention overview, more metrics and charts…
  • Image management and storage optimizations
  • Security improvements


We finished the conference day with yet another workshop, yes… because during this conference the level and the quality of the workshops were very good and interesting. The workshop was about Swarm Orchestration – Features and Workflows.
This was definitely one of the best workshops I attended.

Slides: https://container.training/swarm-selfpaced.yml.html#1
Github repository: https://github.com/jpetazzo/container.training

During this workshop, we created a complete Docker cluster using Swarm and took a deep dive into Swarm orchestration.
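For those who want to reproduce the basics of such a cluster, the core commands are (a minimal sketch, with placeholder values for the token and the manager IP):

docker swarm init --advertise-addr <manager-ip>                 # on the manager node
docker swarm join --token <worker-token> <manager-ip>:2377      # on each worker node
docker node ls                                                  # on the manager: list the cluster nodes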

A very interesting day, with a lot of new things around Docker.


OEM Cloud Control 13c – Agent Gold Image

Wed, 2018-12-05 06:52
Introduction

I am currently setting up a new “Base Image” virtual machine (Red Hat Enterprise Linux 7.6) which will be used to create 6 brand new Oracle database servers requested by a customer. Besides installing and configuring the OS, I also have to install 3 Oracle Homes and one Cloud Control Agent 13c.

An OMS13c server already exists including an Agent patched with the EM-AGENT Bundle Patch 13.2.0.0.181031 (28680866) :
oracle@oms13c:/home/oracle/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Tue Nov 13 17:32:48 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@oms13c:/home/oracle/ [agent13c]

However, when I wanted to deploy the CC13c Agent on my Master VM from the Cloud Control 13c web interface (Setup > Add Target > Add Targets Manually > Install Agent on Host), the Agent was successfully installed but… without the patch 28680866 :( . That means I will have to install the patch manually. Considering that the goal of creating a "Base Image" VM for this project is to quickly and easily deliver 6 database servers, having to install AND patch the Agent on each server is not very efficient and doesn't fit with what I want.
So I had to find a better way to deploy a patched Agent, and the solution was to use an Agent Gold Image. It allowed me to do exactly what I wanted.

In this post I will show how I have set this up.

Deploying the Agent

Here is how we can deploy the Agent on the Base Image VM. From Cloud Control 13c, we click on Setup > Add Target > Add Targets Manually > Install Agent on Host :
[Image: 1]

Then we insert the name of the target VM and select the appropriate platform…
[Image: 2]

…and we specify the directory in which we want to install the Agent (Agent Home) :
[Image: 3]

Everything is now ready to start the deployment. We can click on Next to review the deployment configuration and on Deploy Agent to start.
Once the Agent is correctly deployed, the status should look like this:
[Image: 4]

As explained above we can see that the Agent is not patched with the Bundle Patch of October 2018 :
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c] ./opatch lsinventory | grep 28680866
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c]

We must patch it manually…

Updating OPatch

Before installing a patch it is highly recommended to update the OPatch utility first. All versions of the tool are available here. The current one on my VM is 13.8.0.0.0 :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] opatch version
OPatch Version: 13.8.0.0.0


OPatch succeeded.

We must use the following command to update OPatch :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] unzip -q p6880880_139000_Generic.zip
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] cd 6880880/
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Launcher log file is /tmp/OraInstall2018-11-23_02-58-11PM/launcher2018-11-23_02-58-11PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz. Actual 2099.998 MHz Passed
Checking swap space: must be greater than 512 MB. Actual 4095 MB Passed
Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 27268 MB Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2018-11-23_02-58-11PM
Installation Summary
[...] [...] Logs successfully copied to /u01/app/oraInventory/logs.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c]

You probably noticed that since OEM 13cR2 the way to update OPatch has changed: no more simple unzip, we have to run a Java installer instead (I don't really understand why…).

Patching the Agent

As OPatch is now up to date we can proceed with the installation of the patch 28680866 :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] unzip -q p28680866_132000_Generic.zip
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] cd 28680866/28680866/
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch apply
Oracle Interim Patch Installer version 13.9.3.3.0
Copyright (c) 2018, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/agent13c/agent_13.2.0.0.0
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/agent13c/agent_13.2.0.0.0/oraInst.loc
OPatch version : 13.9.3.3.0
OUI version : 13.9.1.0.0
Log file location : /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/agent13c"


Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28680866


Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '28680866' to OH '/u01/app/oracle/agent13c/agent_13.2.0.0.0'


Patching component oracle.sysman.top.agent, 13.2.0.0.0...
Patch 28680866 successfully applied.
Log file location: /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Let’s restart the Agent and check that the patch has been applied :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Starting agent ................... started.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Perfect. The Agent is now patched but…

Installing the DB plugin

…what about its plugins ? We can see from the OMS13c server that the Agent doesn’t have the database plugin installed :
oracle@oms13c:/home/oracle/ [oms13c] emcli login -username=sysman
Enter password :


Login successful
oracle@oms13c:/home/oracle/ [oms13c] emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0

This is normal. As no Oracle database is currently running on the VM, the DB plugin was not installed automatically during the Agent deployment. We have to install it manually using the following command :
oracle@oms13c:/home/oracle/ [oms13c] emcli deploy_plugin_on_agent -agent_names="basevm.xx.yyyy.com:3872" -plugin=oracle.sysman.db
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.db" to track the plug-in deployment status.
oracle@oms13c:/home/oracle/ [oms13c]

To check the status of the plugin installation :
oracle@oms13c:/home/oracle/ [oms13c] emcli get_plugin_deployment_status -plugin=oracle.sysman.db
Plug-in Deployment/Undeployment Status


Destination : Management Agent - basevm.xx.yyyy.com:3872
Plug-in Name : Oracle Database
Version : 13.2.2.0.0
ID : oracle.sysman.db
Content : Plug-in
Action : Deployment
Status : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment 11/23/18 4:06:29 PM CET 11/23/18 4:06:30 PM CET Success


Initialize 11/23/18 4:06:32 PM CET 11/23/18 4:06:43 PM CET Success


Validate Environment 11/23/18 4:06:44 PM CET 11/23/18 4:06:44 PM CET Success


Install software 11/23/18 4:06:44 PM CET 11/23/18 4:06:45 PM CET Success


Attach Oracle Home to Inventory 11/23/18 4:06:46 PM CET 11/23/18 4:07:04 PM CET Success


Configure plug-in on Management Agent 11/23/18 4:07:05 PM CET 11/23/18 4:07:28 PM CET Success


Update inventory 11/23/18 4:07:23 PM CET 11/23/18 4:07:28 PM CET Success


---------------------------------------- ------------------------- ------------------------- ----------
oracle@oms13c:/home/oracle/ [oms13c]

Quick check :
oracle@oms13c:/home/oracle/ emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Database oracle.sysman.db 13.2.2.0.0
Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0


oracle@oms13c:/home/oracle/ [oms13c]

The Agent is now exactly in the state in which we want to deploy it on all 6 servers (OPatch up to date, Agent patched, DB plugin installed).
It’s now time to move forward with the creation of an Agent Gold Image.

Creating the Agent Gold image

Going back to Cloud Control we can navigate to Setup > Manage Cloud Control > Gold Agent Images :
[Image: screenshot]
We click on Manage All Images
[Image: 6]

…then on Create and we give a name to our Image :
[Image: 7]

Once the Image is created, we must create its first version. We click on the Image name and then on Action > Create. From here we can select the Agent configured earlier on the VM. It will be the source of the Gold Image :
[Image: 8]

The creation of the Gold Agent Image and its first version can also be done from the command line with the following emcli commands :
oracle@oms13c:/home/oracle/ [oms13c] emcli create_gold_agent_image -image_name="agent13c_gold_image" -version_name="gold_image_v1" -source_agent="basevm.xx.yyyy.com:3872"
A gold agent image create operation with name "GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042" has been submitted.
You can track the progress of this session using the command "emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042"


oracle@oms13c:/home/oracle/ [oms13c] emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042
Inputs
------
Gold Image Version Name : gold_image_v1
Gold Image Name : agent13c_gold_image
Source Agent : basevm.xx.yyyy.com:3872
Working Directory : %agentStateDir%/install


Status
-------
Step Name Status Error Cause Recommendation
Create Gold Agent Image IN_PROGRESS


oracle@oms13c:/home/oracle/

The Gold Agent Image is now created. We can start deploying it on the other servers in the same way as the first deployment, but this time selecting With Gold Image :
[Image: 9]

Once the Agent is deployed on the server we can see that OPatch is up to date :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

The Agent Bundle Patch is installed :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

And the DB plugin is ready :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] ll
total 24
drwxr-xr-x. 31 oracle oinstall 4096 Dec 3 22:59 agent_13.2.0.0.0
-rw-r--r--. 1 oracle oinstall 209 Dec 3 22:32 agentimage.properties
drwxr-xr-x. 8 oracle oinstall 98 Dec 3 22:58 agent_inst
-rw-r--r--. 1 oracle oinstall 565 Dec 3 22:56 agentInstall.rsp
-rw-r--r--. 1 oracle oinstall 19 Dec 3 22:56 emctlcfg.rsp
-rw-r-----. 1 oracle oinstall 350 Dec 3 22:32 plugins.txt
-rw-r--r--. 1 oracle oinstall 470 Dec 3 22:57 plugins.txt.status
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] cat plugins.txt.status
oracle.sysman.oh|13.2.0.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.oh|13.2.0.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.xa|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.emas|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

Conclusion

Using a Gold Image drastically eases the management of OMS Agents in Oracle environments. In addition to allowing mass deployment on targets, it is also possible to manage several Gold Images with different patch levels. The hosts are simply subscribed to a specific Image and follow its life cycle (new patches, new plugins, and so on…).
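For reference, subscribing existing Agents to an image can also be scripted with emcli (a sketch only, not shown in the walkthrough above; the exact verbs and flags may differ between releases, so double-check with emcli help first):

emcli subscribe_agents -image_name="agent13c_gold_image" -agents="srvora01.xx.yyyy.com:3872"
emcli help update_agents     # shows how to push a new image version to subscribed agents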

Think about it during your next Oracle monitoring project !


odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError

Tue, 2018-12-04 16:06

Today I was onsite at my customer and he told me: I can no longer create databases on my ODA X7-2HA, every time I try to use odacli create-database it fails, please help.

Ok, let’s check what happens, the customer shares the Oracle Homes, he wants to create a 11.2.0.4 database:

[root@robucnoroda020 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
157bfdf4-4430-4fb1-878e-2fb803ee54bd     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
2aaba0e6-4482-4c9f-8d98-4a9d72fdb96e     OraDB12102_home1     12.1.0.2.180417 (27338020, 27338029)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
ad2b0d0a-11c1-4a15-b22a-f698496cd606     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@robucnoroda020 ~]#

Ok we try to create a 11.2.0.4 database:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Created
 Created: December 4, 2018 11:11:26 PM EET
 Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@robucnoroda020 log]#

The job was created successfully; let's check what's going on:

[root@robucnoroda020 log]# odacli describe-job -i 1959838e-34a6-419e-94da-08b931a039cc

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Failure
 Created: December 4, 2018 11:11:26 PM EET
 Message: DCS-10001:Internal error encountered: Failed to create the database FOO.

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Setting up ssh equivalance December 4, 2018 11:11:26 PM EET December 4, 2018 11:11:46 PM EET Success
Creating volume dclFOO December 4, 2018 11:11:47 PM EET December 4, 2018 11:12:04 PM EET Success
Creating volume datFOO December 4, 2018 11:12:04 PM EET December 4, 2018 11:12:21 PM EET Success
Creating ACFS filesystem for DATA December 4, 2018 11:12:21 PM EET December 4, 2018 11:12:34 PM EET Success
Database Service creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:00 PM EET Failure

[root@robucnoroda020 log]#

Indeed, the job has failed. Next we check the DCS log, where we can see the database creation failure:

2018-12-04 23:13:01,209 DEBUG [Database Service creation] [] c.o.d.c.t.r.TaskReportRecorder:  Compile task plan for ServiceJobReport
'{
  "updatedTime" : null,
  "jobId" : "1959838e-34a6-419e-94da-08b931a039cc",
  "status" : "Failure",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : 1543957886185,
  "resourceList" : [ ],
  "description" : "Database service creation with db name: FOO"
}'...

2018-12-04 23:13:01,219 DEBUG [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] call() completed.
2018-12-04 23:13:01,219 INFO [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] completed: Failure

Ok, the log don’t tell us what’s going wrong, but behind the scene odacli create-database uses dbca in the requested ORACLE Home. In the next step we check the the dbca logs:

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] ll
total 276
-rw-r----- 1 oracle oinstall 275412 Dec  4 23:13 trace.log
oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204]

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] cat trace.log

......

[main] [ 2018-12-04 23:12:57.546 EET ] [InventoryUtil.getHomeName:111]  homeName = OraDB11204_home1
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at oracle.xml.parser.v2.XMLDocument.createNodeFromType(XMLDocument.java:4132)
	at oracle.xml.parser.v2.XMLDocument.createElement(XMLDocument.java:2801)
	at oracle.xml.parser.v2.DocumentBuilder.startElement(DocumentBuilder.java:488)
	at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1616)
	at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:456)
	at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:402)
	at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:244)
	at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:172)
	at oracle.sysman.oix.oixd.OixdDOMReader.getDocument(OixdDOMReader.java:42)
	at oracle.sysman.oic.oics.OicsCheckPointReader.buildCheckpoint(OicsCheckPointReader.java:75)
	at oracle.sysman.oic.oics.OicsCheckPointSession.<init>(OicsCheckPointSession.java:101)
	at oracle.sysman.oic.oics.OicsCheckPointIndexSession.<init>(OicsCheckPointIndexSession.java:123)
	at oracle.sysman.oic.oics.OicsCheckPointFactory.getIndexSession(OicsCheckPointFactory.java:69)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPointSession(CheckpointContext.java:256)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPoint(CheckpointContext.java:245)
	at oracle.sysman.assistants.dbca.backend.Host.cleanup(Host.java:3710)
	at oracle.sysman.assistants.dbca.backend.SilentHost.cleanup(SilentHost.java:585)
	at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:145)
	at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:189)
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:44]  Cleanup started
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [OracleHome.cleanupDBOptionsIntance:1482]  DB Options dummy instance sid=null
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:49]  Cleanup ended

Ah, there is a Java OutOfMemoryError. We know this from older times: we have to increase the heap space for dbca's Java engine. Let's change to the ORACLE_HOME and check dbca:

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx128m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

In the dbca script, we see that the Java heap space is limited to 128MB; we change it to 512MB (and yes, create a backup first):

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx512m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS
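The change itself can be done in an editor or, for example, like this (a sketch, using the 11.2.0.4 home shown above):

cd /u01/app/oracle/product/11.2.0.4/dbhome_1/bin
cp dbca dbca.$(date +%Y%m%d)        # keep a backup of the original script
sed -i 's/-mx128m/-mx512m/' dbca    # raise the dbca Java heap from 128MB to 512MB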

After deleting the failed database we try again to create our database FOO:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Created
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

Let’s check what’s going on:

[root@robucnoroda020 log]# odacli describe-job -i b289ed58-c29f-4ea8-8aa8-46a5af8ca529

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Success
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               December 4, 2018 11:45:05 PM EET    December 4, 2018 11:45:25 PM EET    Success
Creating volume dclFOO                   December 4, 2018 11:45:25 PM EET    December 4, 2018 11:45:42 PM EET    Success
Creating volume datFOO                   December 4, 2018 11:45:42 PM EET    December 4, 2018 11:45:59 PM EET    Success
Creating ACFS filesystem for DATA        December 4, 2018 11:45:59 PM EET    December 4, 2018 11:46:13 PM EET    Success
Database Service creation                December 4, 2018 11:46:13 PM EET    December 4, 2018 11:51:23 PM EET    Success
Database Creation                        December 4, 2018 11:46:13 PM EET    December 4, 2018 11:49:56 PM EET    Success
updating the Database version            December 4, 2018 11:51:21 PM EET    December 4, 2018 11:51:23 PM EET    Success
create Users tablespace                  December 4, 2018 11:51:23 PM EET    December 4, 2018 11:51:25 PM EET    Success

[root@robucnoroda020 log]#

Super, the database was successfully created. It seems that odacli create-database sometimes fails due to dbca's memory usage. So, also on an ODA, check your dbca logs if your database creation was not successful. If you see these heap space exceptions, don't be afraid to increase dbca's heap memory allocation.


DockerCon2018 Barcelona – Day 1

Mon, 2018-12-03 17:00

As a football fan, traveling to Barcelona without watching a football game was inconceivable, so I started my trip by attending the game against Villarreal at Camp Nou 8-)

[Image: FCB]

 

DockerCon Day 1

Today, with David Barbarin, was our first day at DockerCon2018 in Barcelona. The conference takes place in a huge conference center and is split between several types of sessions, including workshops, keynotes, hands-on labs, and the hallway track.

The first day focused only on workshops, hands-on labs and the hallway track, where you can meet Docker experts and exchange on multiple topics.

Interesting workshops were offered today, but to attend a workshop you had to register for it beforehand. Fortunately for latecomers, there was a waiting list, first come, first served.

We started by following a very interesting workshop: Migrating .NET application to Docker Containers.

[Image: .NET workshop]

The instructor showed us how to migrate a monolithic application to Docker containers. The starting point was a simple .NET application running in a single container, and step by step the instructor explained, through efficient demos, how to split the different services of the application into a microservices architecture. We easily had access to a lab environment hosted in Azure through an RDP connection.

The two-hour workshop was split into the following parts:

  1. Building a single .NET application in one Docker container.
  2. Splitting the home page from the rest of the website by using two containers.
  3. Adding a SQL Server database for the data, with persistent storage (see the sketch after this list).
  4. Adding an NGINX proxy to redirect requests for the homepage and the other pages of the website.
  5. Creating a container for the API with a connection to the database.
  6. Adding a message queue for the website.
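To give an idea of step 3, running SQL Server in a container with persistent storage boils down to something like the sketch below (image tag, password and volume name are examples, not the exact commands used in the workshop):

# Sketch: SQL Server 2017 in a container with a named volume for the data files
docker volume create mssql-data
docker run -d --name mssql -p 1433:1433 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Very5trongP@ssw0rd" \
  -v mssql-data:/var/opt/mssql \
  microsoft/mssql-server-linux:2017-latest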

 

After lunch, we planned to attend another workshop about storage in Docker, but the session and the waiting list were full. So we decided to get started with the Hands-On Labs. After signing up for the Hands-On Labs, you get access to your own hosted environment and can explore all the features and capabilities of Docker through different labs:

  • Docker for Beginners – Linux
  • Docker for Beginners – Windows
  • Docker EE – an Introduction
  • Docker Security
  • Modernizing Traditional Java Applications
  • Docker EE with Kubernetes

 

We finally ended the day by attending one of the last workshops of the day: Kubernetes Security, by Dimitris Kapadinis. During this workshop, the instructor showed us different methods to secure a Kubernetes cluster.

Kubernetes-WS-min (1)

The workshop was composed of the following parts:

  1. Create a Kubernetes cluster with Play-with-Kubernetes (or minikube locally).
  2. Create and deploy a simple web application using Kubernetes.
  3. Hack the web application by entering inside the web-app pod.
  4. Protect the application by creating a security context for Kubernetes – securing the Kubernetes components (a small sketch of this follows the list).
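As an idea of what step 4 can look like, here is a small sketch that hardens an existing deployment with a security context (the deployment and container names are assumptions, not the ones used in the lab):

# Sketch: tighten the pod security settings of an existing deployment
kubectl patch deployment web-app --patch '
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
      - name: web-app
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
'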

 

It was a very intensive and interesting first day. I learned a lot through the different workshops and labs I did, so see you tomorrow  ;-)

 

This article DockerCon2018 Barcelona – Day 1 appeared first on Blog dbi services.

Oooooops or how to undelete a file on an ext4 filesystem

Mon, 2018-12-03 07:06

It happens within the blink of an eye.
A delete command was executed and half a second after you hit the enter button you knew it. That was a mistake.
This is the scenario which led to this blog entry, in which I show you how you can get your files back if you are lucky…

Short summary for the desperate

If you landed here you are probably in the same situation I was in, so here is a short summary.
Extundelete did not work for me, but ext4magic did, and I had to compile it from the sources:

  • Remount the filesystem read-only or umount it as soon as possible after the incident
  • Backup your inode table you will need it for the restore
    • debugfs -R "dump /tmp/VMSSD01.journal" /dev/mapper/VMSSD01-VMSSD01
  • Check at which time your files were still there
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
  • List the files within this timepoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
  • Restore the file to a different disc/mountpoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /tmp/recover
  • Be happy and promise to never do it again
And now the whole story

So it happened that I deleted two VM images by accident. I was cleaning up my environment and there were two files, centos75_base_clone-1.qcow2 and centos75_base_clone-2.qcow2. As you can see, I was using such a clean and good naming convention that it pointed directly to the fact that these were the OS image files for my “nomachine” and my “networkmaster” machines… Especially the second one, with my DHCP, DNS, NFS and iSCSI configuration, would take some time to configure again.
At first nothing seemed to be wrong; all VMs were running normally until I tried to restart one of them, and I went from :cool: to :shock: and in the end to :oops:

I remembered that it was very important to unmount the filesystem as quickly as possible and to stop changing anything on that filesystem:
umount /VMSSD01
So a solution had to be found. A short Google search brought me to a tool with the promising name “extundelete”, which can be found in the CentOS repository in its current version 0.2.4 from 2012….
So a yum install -y extundelete and a man extundelete later I tried the command
extundelete --restore-all --after $(date -d "-2 hours" +%s) /dev/mapper/VMSSD01-VMSSD01
And…. it did not work.
A cryptic core dump and no solution on Google, so I went from :shock: to :cry: .
extundelete_coredump
But it was not the time to give up. With the courage of the desperate, I searched around and found the tool ext4magic. Magic never sounded better than at this very moment. The tool is newer than extundelete, even though it builds on it. So I downloaded and compiled the latest version 0.3.2 (from 2014). Before you can compile the source you need some dependencies:

yum install -y libblkid \
libblkid-devel \
zerofree e2fsp* \
zlib-devel \
libbz2-devel \
bzip2-devel \
file-devel

and to add some more “Magic” you need also yum install -y perl-File-LibMagic

A short ./configure && make later I had a binary, and, to say it with Star Wars, “A New Hope” started growing in me.
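For completeness, the whole build looked roughly like this (the SourceForge download URL is written from memory, so double-check it before using it):

# Sketch: download and build ext4magic 0.3.2 from source
wget https://sourceforge.net/projects/ext4magic/files/ext4magic-0.3.2.tar.gz/download -O ext4magic-0.3.2.tar.gz
tar xzf ext4magic-0.3.2.tar.gz
cd ext4magic-0.3.2
./configure && make        # the binary ends up in ./src/ext4magic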

I listed all the files deleted in the last 3 hours and there they were. At least I thought these had to be my image files:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
ext4magic_showInode

I listed the content at the different timestamps and found at least one of my files. The timestamp 1542797503 showed some files, so I tried to list all files from an earlier timestamp, and one of my missing image files showed up.
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
ext4magic_file2restore
My mood started getting better and better and switched from :cry: to :???:.
I tried to restore my file:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /VMSSD02/recovery/
ext4magic_restoreInProgress
My first file was back :grin: . But the tool did not stop; it recovered more and more files and my hope of getting both files back was growing. The first file came back with its original name. For the second one, it was not that clear what had happened. The tool was still running, recovering file after file after file and putting everything into the MAGIC-2 subdirectories.

I tried to cancel the recovery job and give it a shot with the recovered files.
ext4magic_file2restore_unknown
After renaming the *.unknown files I tried to boot up the VM. To my surprise the first try was successful and all my VMs were back online.

Summary
  • Do not delete your files (obviously).
  • Use a clear naming convention for all your files.
  • An lsof before deleting a supposedly unused file is always a good idea (see the small example after this list).
  • ext4magic worked for me and did as promised. My files are back, the VMs are up and running again. I am happy and :cool: .
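For example, before removing a supposedly unused image file (the path here is just an illustration):

# Nothing should show up if the file is really not in use anymore
lsof /VMSSD01/centos75_base_clone-2.qcow2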

This article Oooooops or how to undelete a file on an ext4 filesystem appeared first on Blog dbi services.

OEM 13c Repository view gc$metric_values_daily

Mon, 2018-12-03 03:50

Oracle OEM 13c offers the possibility to generate reports about Oracle Database Tablespace Monthly Space Usage. The report is quite useful and displays a lot of information:

Screen Shot 2018-11-23 at 12.09.37

 

Screen Shot 2018-11-23 at 12.15.16

 

The report displays the details for each month and for each tablespace, but the GUI is sometimes painful: you have a nice report but many tablespaces and many numbers :=(

At a client's site, I had the following situation: a production database had its data files distributed across three file systems, /u01, /u02 and /u03, and I wanted to know the evolution of the available size of the data files on each file system.

My first idea was to write SQL queries against the tablespace history view dba_hist_tbspc_space_usage, but this view does not contain any information about the file systems, and as a consequence it is not possible to know how much space will be needed by each file system in the future.

OEM 13c offers the capability of viewing graphs for most of the metrics within OEM, but forecasting and trending capabilities are not present, and most of the metric categories are set up for real-time alerting, not for historical trending.

And I did not find any views like dba_hist_datafiles_space_usage :=)

So I checked in EM13c to verify that the file system space usage is correctly monitored. Naturally each file system is monitored and triggers a warning when the available space is below 20%, or a critical alert when it is below 5%.

As a consequence I had the idea to query the OEM 13c repository views, and this turned out to be the right solution.

First we can query the mgmt$alert_current view to display the alerts generated for the metric_name Filesystems on targets of type host:

SQL> select target_name , target_type, metric_name, metric_column, column_label,alert_state, violation_level
  2  FROM MGMT$ALERT_CURRENT
  3  where metric_name ='Filesystems'
  4  and target_guid in (SELECT target_guid
  5  FROM mgmt$target
  6* WHERE target_type ='host');

em13c.localdomain
host
Filesystems
pctAvailable
Filesystem Space Available (%)
Warning 			20

There are a lot of management repository views providing access to target, metric and monitoring information stored in the Management Repository. These views make it possible to write your own scripts to query historical data.

In my case I used the gc$metric_values_daily view corresponding to the data in the EM_METRIC_VALUES_DAILY table:

SQL> desc gc$metric_values_daily
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENTITY_TYPE				   NOT NULL VARCHAR2(64)
 ENTITY_NAME				   NOT NULL VARCHAR2(256)
 ENTITY_GUID				   NOT NULL RAW(16)
 PARENT_ME_TYPE 				    VARCHAR2(64)
 PARENT_ME_NAME 				    VARCHAR2(256)
 PARENT_ME_GUID 			   NOT NULL RAW(16)
 TYPE_META_VER				   NOT NULL VARCHAR2(8)
 METRIC_GROUP_NAME			   NOT NULL VARCHAR2(64)
 METRIC_COLUMN_NAME			   NOT NULL VARCHAR2(64)
 COLUMN_TYPE				   NOT NULL NUMBER(1)
 COLUMN_INDEX				   NOT NULL NUMBER(3)
 DATA_COLUMN_TYPE			   NOT NULL NUMBER(2)
 METRIC_GROUP_ID			   NOT NULL NUMBER(38)
 METRIC_GROUP_LABEL				    VARCHAR2(64)
 METRIC_GROUP_LABEL_NLSID			    VARCHAR2(64)
 METRIC_COLUMN_ID			   NOT NULL NUMBER(38)
 METRIC_COLUMN_LABEL				    VARCHAR2(64)
 METRIC_COLUMN_LABEL_NLSID			    VARCHAR2(64)
 DESCRIPTION					    VARCHAR2(1024)
 SHORT_NAME					    VARCHAR2(40)
 UNIT						    VARCHAR2(64)
 IS_FOR_SUMMARY 				    NUMBER
 IS_STATEFUL					    NUMBER
 NON_THRESHOLDED_ALERTS 			    NUMBER
 METRIC_KEY_ID				   NOT NULL NUMBER(38)
 KEY_PART_1				   NOT NULL VARCHAR2(256)
 KEY_PART_2				   NOT NULL VARCHAR2(256)
 KEY_PART_3				   NOT NULL VARCHAR2(256)
 KEY_PART_4				   NOT NULL VARCHAR2(256)
 KEY_PART_5				   NOT NULL VARCHAR2(256)
 KEY_PART_6				   NOT NULL VARCHAR2(256)
 KEY_PART_7				   NOT NULL VARCHAR2(256)
 COLLECTION_TIME			   NOT NULL DATE
 COLLECTION_TIME_UTC				    DATE
 COUNT_OF_COLLECTIONS			   NOT NULL NUMBER(38)
 AVG_VALUE					    NUMBER
 MIN_VALUE					    NUMBER
 MAX_VALUE					    NUMBER
 STDDEV_VALUE					    NUMBER

The meaning of the main columns:

  • ENTITY_TYPE: the type of the target, for example host or oracle_database
  • ENTITY_NAME: the target or component name, for example the hostname
  • METRIC_GROUP_NAME: name of the metric group, for example Filesystems
  • METRIC_COLUMN_NAME: name of the metric column, for example available, pct
  • KEY_PART_1: key part 1 of the composite key, for example the name of the file system
  • KEY_PART_2: key part 2 of the composite key
  • COLLECTION_TIME: collection time in the target time zone
  • AVG_VALUE: average value of the metric for the day
  • MIN_VALUE: minimum value of the metric for the day
  • MAX_VALUE: maximum value of the metric for the day

So in my case I only had to execute the following SQL query:

select entity_name, 
key_part_1,
collection_time, 
avg_value as avg, 
avg_value/1000000 as percent  from gc$metric_values_daily
where metric_group_name='Filesystems' 
and metric_column_name= 'available' 
and key_part_1='/u01' 
and entity_name ='oraprod.com'
order by 1,2
/

oraprod.com     /u01          04-OCT-16  169545.44  	.380928435
oraprod.com     /u01          05-OCT-16  169572.44	    .380928435
oraprod.com     /u01          06-OCT-16  169583.44	    .380928435
…
oraprod.com     /u01           17-NOV-18  169538.44 	.172295163
oraprod.com     /u01           18-NOV-18  169538.44 	.172295163
oraprod.com     /u01           19-NOV-18  169525.94 	.172282459
oraprod.com     /u01           20-NOV-18  169498.44 	.172254512

With this result I can easily see how much space is consumed by each file system over time, and we can use Excel's capabilities to determine the trend of the available size per file system.
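If you prefer to let the repository do the aggregation instead of Excel, a monthly average per file system can be computed directly from gc$metric_values_daily. Below is a sketch run through sqlplus against the repository database (the connect string and credentials are placeholders to adapt):

# Sketch: monthly average of the available space per file system from the OEM repository
sqlplus -s sysman/password@oemrepo <<'SQL'
set pagesize 100 linesize 200
column entity_name format a25
column key_part_1 format a10
select entity_name,
       key_part_1,
       to_char(collection_time,'YYYY-MM') as collection_month,
       round(avg(avg_value),2) as avg_available
from gc$metric_values_daily
where metric_group_name = 'Filesystems'
and metric_column_name = 'available'
and entity_name = 'oraprod.com'
group by entity_name, key_part_1, to_char(collection_time,'YYYY-MM')
order by 1,2,3;
exit
SQL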

Conclusion:

We often forget that OEM is not only a graphical administration tool: the most interesting part of OEM is that it stores data for important metrics into the management repository at regular intervals. If you understand the structure of the management repository views, you have extraordinary possibilities to extract historical data and build graphs in order to analyze your infrastructure more precisely.

This article OEM 13c Repository view gc$metric_values_daily appeared first on Blog dbi services.

AWS re:invent 2018 – Day 5 – Final day

Sat, 2018-12-01 12:34

Friday was the last day of the AWS re:Invent convention in Las Vegas. I didn't know in advance what to expect because, I have to admit, even though I have started to use AWS there is still a lot to learn. And that's why I personally think working with technology is interesting: it's always moving and there is always something new to investigate or learn.

This morning, I attended a very interesting session about the life of a Cloud Ops engineer, with 2 people from AWS speaking about scenarios that may happen (or have already happened) in the daily life of system engineers. I was quite surprised by one scenario where the starting point to analyze a failure was the billing console. But imagine: you get a call from a customer because the application is down and you don't have much information about it. In AWS, you pay for the services you use, so you can find some resources directly on the bill. Then you can pull the string and follow the different resources to get a better view. CloudTrail can then help to see what was changed on the resources because it keeps track of API calls.

Knowing some key services and enabling them can help operations/DevOps teams a lot to support applications in an AWS environment: CloudTrail, CloudWatch and GuardDuty, to name a few. Using CloudFormation or similar tools to provision the infrastructure can also help to detect drift when there is an issue. It was definitely worth waking up this morning.

I ended the conference with a new-launch session about Amazon RDS on VMware in a local datacenter. It allows customers to offload the management part to AWS while keeping the data on site. AWS will provide a set of control machines built by AWS and VMware, which is an AWS partner. These machines connect through a private VPN to control machines in the AWS Cloud infrastructure. PostgreSQL and MySQL are currently supported, with SQL Server and Oracle planned. It will be a BYOL (Bring Your Own Licence) model, so it will be interesting to see how to license that environment. Aurora is not supported, and with what I learned this week about its storage architecture I think it would be a difficult challenge.

With Amazon RDS on VMware and AWS Outposts, which was announced in the keynote, AWS is offering solutions for customers who want to use the Cloud but keep their data in a local datacenter. Oracle proposed Cloud at Customer to achieve a similar goal. Oracle's proposal is built on hardware (engineered systems) provided and maintained by Oracle, while AWS' solution is built on VMware. But many companies are already running VMware successfully on their own hardware, so the “fight” will be interesting.

This article AWS re:invent 2018 – Day 5 – Final day appeared first on Blog dbi services.

PostgreSQL 12: csv output format for psql

Sat, 2018-12-01 07:25

Getting data out of PostgreSQL in csv format is not a big issue. Using copy you can do that easily. PostgreSQL 12 will enhance psql so that you can directly return csv formatted output from a query.

Using “copy” you would do something like this:

postgres=# copy pg_database to '/var/tmp/aa.txt' with (format csv);
COPY 3
postgres=# \! cat /var/tmp/aa.txt
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
postgres=# 

What you cannot do directly with copy is dump a view:

postgres=# copy pg_settings to '/var/tmp/aa.txt' with (format csv);
ERROR:  cannot copy from view "pg_settings"
HINT:  Try the COPY (SELECT ...) TO variant.

Of course you can workaround that by doing it like this:

postgres=# copy (select * from pg_settings) to '/var/tmp/aa.txt' with (format csv);
COPY 309
postgres=# \! head -2 /var/tmp/aa.txt
allow_system_table_mods,off,,Developer Options,Allows modifications of the structure of system tables.,,postmaster,bool,default,,,,off,off,,,f
application_name,psql,,Reporting and Logging / What to Log,Sets the application name to be reported in statistics and logs.,,user,string,client,,,,"",psql,,,f

With PostgreSQL 12 there will be an easier way of doing that:

postgres=# \pset format csv
Output format is csv.
postgres=# select * from pg_database;
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"

… or directly when executing psql:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv  postgres
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,NULL
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
Time: 0.794 ms

This makes it very easy to pass the data to other programs for further processing. By switching to unaligned output mode you can even specify the field separator:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv --field-separator='##' -A postgres
oid##datname##datdba##encoding##datcollate##datctype##datistemplate##datallowconn##datconnlimit##datlastsysoid##datfrozenxid##datminmxid##dattablespace##datacl
11788##postgres##10##6##en_US.utf8##en_US.utf8##f##t##-1##11787##598##1##1663##NULL
1##template1##10##6##en_US.utf8##en_US.utf8##t##t##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
11787##template0##10##6##en_US.utf8##en_US.utf8##t##f##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
(3 rows)
Time: 2.105 ms
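To illustrate passing the data to another program, here is a small example piping the csv output into awk to extract a single column (the query and the piped command are just an example):

# Sketch: extract the database names from the csv output with awk
psql --csv -c "select datname, pg_database_size(datname) from pg_database" postgres \
  | awk -F',' 'NR > 1 {print $1}'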

This article PostgreSQL 12: csv output format for psql appeared first on Blog dbi services.

PostgreSQL 12: log_statement_sample_rate

Fri, 2018-11-30 12:41

A common way to identify long running queries in PostgreSQL is to set log_min_duration_statement to a value that is known to cause trouble. In other words: if you know most of your statements usually execute in under a second, you could set log_min_duration_statement to “2s” (which means two seconds) so that whenever a statement takes longer than two seconds it will be reported in the log file. The issue with this, however, is that it can cause a lot of activity in the server log file, which of course is not good for performance either. PostgreSQL 12 will probably come with a solution to that.

The default value of log_min_duration_statement is “-1”, which means disabled:

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 -1
(1 row)

Lets set this to 5ms and then generate some queries:

postgres=# alter system set log_min_duration_statement = '5ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 5ms
(1 row)

When we execute some queries that take longer than 5ms they should get reported in the log file:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

Looking at the log file there are exactly 10 entries:

2018-11-28 03:13:32.240 CET - 20 - 28978 - [local] - postgres@postgres LOG:  duration: 529.825 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:32.770 CET - 21 - 28978 - [local] - postgres@postgres LOG:  duration: 529.904 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.273 CET - 22 - 28978 - [local] - postgres@postgres LOG:  duration: 501.729 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.783 CET - 23 - 28978 - [local] - postgres@postgres LOG:  duration: 509.532 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.305 CET - 24 - 28978 - [local] - postgres@postgres LOG:  duration: 520.946 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.809 CET - 25 - 28978 - [local] - postgres@postgres LOG:  duration: 502.624 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.315 CET - 26 - 28978 - [local] - postgres@postgres LOG:  duration: 505.043 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.817 CET - 27 - 28978 - [local] - postgres@postgres LOG:  duration: 502.034 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.321 CET - 28 - 28978 - [local] - postgres@postgres LOG:  duration: 503.417 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.824 CET - 29 - 28978 - [local] - postgres@postgres LOG:  duration: 501.790 ms  statement: select pg_sleep(0.5)

Imagine there are thousands of statements that exceed log_min_duration_statement: This will make the log file grow quite fast and performance will suffer because writing the log file takes resources as well. PostgreSQL 12 introduces a new parameter “log_statement_sample_rate” which helps with this:

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 1
(1 row)

The default is 1, which means log all statements that exceed the log_min_duration_statement threshold. When that is too much we can now say: we want only 10% of these statements to be logged:

postgres=# alter system set log_statement_sample_rate=0.1;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 0.1
(1 row)

Doing the same test as above again:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

… we only get one entry in the log file, instead of 10:

2018-11-28 03:20:36.218 CET - 34 - 28978 - [local] - postgres@postgres LOG:  duration: 504.845 ms  statement: select pg_sleep(0.5)
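A quick way to double-check the effect is to count the logged statements in the server log (directory and file name depend on your logging configuration):

# Count how many of the slow statements actually made it into the log
grep -c "duration:" $PGDATA/pg_log/*.log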

Nice. Btw: The commit is here.

This article PostgreSQL 12: log_statement_sample_rate appeared first on Blog dbi services.

AWS re:invent 2018 – Day 4

Fri, 2018-11-30 11:52

Thursday morning it was time for the keynote from Dr Werner Vogels.

A big part of the keynote was about databases again. Maybe AWS people know that there are still lots of DBAs out there! There was again a focus on Amazon Aurora (relational), DynamoDB (NoSQL) and Amazon Redshift, which are replacing the Oracle databases Amazon was using. There was also a focus on how S3 works to maintain the best durability for the data. I have seen several sessions this week mentioning S3 as the object storage for building a data lake.

I did not mention it for yesterday's keynote, but there are some guests coming on stage to speak about how they are using AWS in their businesses. Today it was Ethan Kaplan from Fender. He spoke about the serverless movement that is used to power Fender's new guitar learning application. This application was born because, even though there are still lots of people buying new guitars, most of them quit playing after 6 months. On the other hand, people who continue playing buy 8-10 guitars, so it was important to develop a solution that helps people keep playing.

The important guest for me was Yuri Misnik, a representative of National Australia Bank, speaking about the Cloud-first strategy within the bank. Banking is traditionally associated with legacy technologies and regulations that would work against moving to the Cloud. But they are targeting 1/3 of their applications running in AWS by the year 2020 and they are moving fast.

There were fewer new launches than in Andy's keynote, but there are a few to mention:

  • Redshift concurrency scaling: allows Redshift to automatically increase capacity to avoid waits when the number of queries increases. The very nice part is the burst credit: for every 24 hours your cluster is running you get 1 hour of credit for this concurrent usage, making it almost transparent and free for most customers according to Werner
  • Support for new languages in AWS Lambda and, most importantly, support for custom runtimes to allow customers to run almost any language they like
  • AWS Lambda layers: you can now share a library between several Lambda functions without having to copy the code again
  • AWS Well-Architected Tool, based on the AWS Well-Architected Framework, to allow reviewing an architecture without the need for a meeting with AWS or an AWS partner

Again, the keynote consumed a big part of the day, but I was able to attend 2 more sessions. The first was Kyle Hailey presenting Performance Insights, a performance tool for Amazon RDS working with Aurora, PostgreSQL, MySQL and Oracle. SQL Server support is planned and will be next on the list. It's very promising even if it's currently missing SQL execution plans. The strength of the tool is that it provides a consistent GUI across the different engines.

AWS Performance Insights

As an Oracle DBA, I have the feeling that the tool is inspired a lot by what was done quite some time ago when ASH was implemented directly in the database. The same AAS metric drives the analysis and it's based on a 1-second sampling interval. That's good news: most DBAs that are already working this way with Oracle will be able to quickly understand their load when moving to Amazon RDS. The fun part was a demo from the company Slalom, which developed an Alexa connector to Performance Insights: now you can speak to your database.

I finished the day with a workshop about CI/CD: Continuous Integration / Continuous Deployment or Delivery, depending on the level you reach. Most often we prefer, or we have to (validated environments), keep an approval step before moving to Production, so it's only continuous delivery. I can't put in all the details, but the workshop was well organized, with people able to answer questions and the infrastructure running without issues. It was an occasion to work a little bit with the OpsWorks service for a managed Puppet server and to discover the CodeBuild / CodePipeline services.

On Thursday night it's the re:Play party, where nobody told people that they can't play anymore because they are too old :-) There will be a lot of activities besides 2 different stages: 1 for live music and 1 with DJs. But on Friday morning there will be the last sessions and it will still be interesting, stay tuned.

This article AWS re:invent 2018 – Day 4 appeared first on Blog dbi services.

Extend a Oracle Access Manager 12.2.1.3 with User Messaging Service

Fri, 2018-11-30 07:43

After upgrading an Oracle Access Manager from 11.1.2.3 to 12.2.1.3, one of my customers asked me to configure the Forget Password feature. This feature requires the User Messaging Service that was part of the SOA software in previous versions. Now it is part of the Oracle Access Manager software and the WebLogic Domain can be extended with the UMS services.

Extend WebLogic Domain

[oracle@vm03 ~]$ cd /u01/app/fmw_oim_12213/
[oracle@vm03 fmw_oim_12213]$ ls
cfgtoollogs coherence domain-registry.xml em idm inventory OPatch oracle_common oraInst.loc oui user_projects wlserver
[oracle@vm03 fmw_oim_12213]$ cd oracle_common/common/bin/
[oracle@vm03 bin]$ ./config.sh

Conf1
In the next wizard, we will select the Oracle User Messaging Service Basic template that is a quick start template that defines the managed server, ums_server1, and targets all UMS components to that server.
Conf2
We keep the file-based persistence store, but it should be placed on a shared drive if it is planned to use a UMS WebLogic cluster.
Conf3
The OPSS was already configured in the OAM WebLogic Domain, so there is nothing to change in the next two wizards.
Conf4
conf5
In the next wizards we get the JDBC connections from the RCU configuration schema.
conf6
conf7
The UMS repository schema was already installed. The JDBC connections showed up correctly and the JDBC connection test went smoothly.
conf8
conf9
We had no keystore configured
conf10
We had to go through the topology section to configure the UMS server and the UMS cluster.
conf11
In the next wizard, the new ums_server1 appeared, listening on port 7003. This is the default for all new WebLogic Servers in the configuration wizard. It is best to change it to the default SOA port, 8001 (this can help avoid trouble with hard-coded ports in some applications).
conf12
As this OAM WebLogic Domain was configured with clusters, I had to create a UMS cluster named ums_cluster.
conf13
I used neither server templates nor dynamic servers. Nothing was changed in the next two wizards.
conf14
conf15
I assigned ums_server1 to the ums_cluster by moving it from the right panel to the left panel under the ums_cluster.
conf16
The Coherence cluster and the machine already existed in the OAM WebLogic Domain. No changes were made in the next two wizards.
conf17
conf18
The ums_server1 was attached to the already existing machine by moving it from the left panel to the right panel under that machine.
conf19
The deployments and the services were not changed. No action was applied in the next two wizards.
conf20
conf21
We kept the persistence file store file name as it was proposed by default.
conf22
We checked the summary and clicked the Update button to extend the OAM WebLogic Domain.
conf23
All went smoothly without errors.
conf24
After the domain had been extended, it was started. The first time, the ums_server1 had to be started manually using the startManagedWebLogic.sh script.
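For reference, that first manual start looked roughly like this (domain directory, server name and AdminServer URL are examples to adapt to your environment):

# Sketch: first manual start of the new UMS managed server
cd /u01/app/fmw_oim_12213/user_projects/domains/oam_domain/bin
nohup ./startManagedWebLogic.sh ums_server1 t3://vm03:7001 > /tmp/ums_server1.out 2>&1 &
tail -f /tmp/ums_server1.out        # wait until the server reaches the RUNNING state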

Configure the UMS email driver.

We decided to use email to send the One Time Pin token to the users requesting a password reset.

This configuration has to be done through the Fusion Middleware Enterprise Manager Console.
EM1
Select usermessagingserver (ums_server1)
EM2
EM3
At the bottom of the page, look for the Email driver and click the Configure button. The next page allows you to create a new Email driver and then to configure it.
Em4
EM5
Here are the parameters that need to be set for the UMS to be able to send mails.

  • Name: MailServer
  • Capability: Send
  • Sender Address:
    Check ‘Use Default Sender Address’ and set the default email address: EMAIL:admin@dbi-workshop.com
  • Outgoing Mail Server: “to be filled with your email server name”
Usage

Using the Oracle Access Manager One Time Pin REST API, messages are pushed to the User Messaging Service JMS queue and sent by mail using the email driver.

My next blog will explain how to configure the Oracle Access Manager Forget Password feature using the One Time Pin sent by mail.

This article Extend a Oracle Access Manager 12.2.1.3 with User Messaging Service appeared first on Blog dbi services.

AWS re:invent 2018 – Day 3

Thu, 2018-11-29 13:11

Wednesday morning was the big day of the AWS re:Invent conference. The day started with a 3-hour keynote from Andy Jassy, CEO of AWS. The keynote covered a lot of different topics: databases, blockchain, machine learning… Like most keynotes, it started with some numbers, and AWS is still a strong leader in the public Cloud if you look at the market shares, which was also a moment to tackle some well-known competitors a bit.

AWS market share

It is also the occasion for lots of announcements and new service launches. If I have to make a selection, here are a few:

  • AWS Timestream: a new database service focusing on time series data, designed for IoT data that may come from sensors
  • AWS Quantum Ledger Database (QLDB): a new database service to answer needs for which customers were looking at blockchain
  • AWS Glacier Deep Archive: the goal is to forget about backups to tape, with a very competitive price of $1 per TB ($0.00099 per GB)
  • AWS Security Hub: to manage security and compliance across an AWS environment

Andy Jassy also announced a competition for machine learning developers, the AWS DeepRacer League. The goal is to create an autonomous driving model so that the AWS DeepRacer (a small electric car) finishes a lap of a test track as fast as possible. It is reinforcement learning with trial and error, where the model improves itself using the feedback from previous iterations. For this year developers will have 1 day, and the 3 best will compete in a final before Thursday morning's keynote.

After the keynote, I continued to pick sessions around databases and AWS introductory sessions about the infrastructure.

The one about matching the workload with the right database was very interesting. Basically, Rick Houlihan was explaining that you have to know your workload before picking the engine you will use. Most often, the engine is picked because it's already licensed or because there is already an operational team knowing how to operate it. It also happens that some people go for the latest trend because of the buzz. According to him, relational databases do not scale very well and are optimized for storage. It's true that if you fully normalize, data is not supposed to be duplicated. On the other hand, NoSQL databases can scale a lot but are not very well suited to ad-hoc queries.

Databases workload

On the architecture part, I focused again on VPC, and there are some changes in how you can build your network to work on AWS alongside your on-premises datacenter.

VPC sharing now allows a VPC to span different AWS accounts within the same organization. It means you can have several billing accounts, with the VPC managed by a VPC owner that is responsible for routing tables and so on. It can simplify maintenance and help network engineers.

The other big point I noted is the new transit gateway (TGW). A transit gateway can be attached to several VPCs and to AWS VPN to link with your datacenter. This is a central hub where you can maintain routing tables for inter-VPC traffic without maintaining a virtual private gateway per VPC.
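To give a rough idea of how that looks from the AWS CLI (the IDs below are placeholders and the available options may vary with the CLI version):

# Sketch: create a transit gateway and attach an existing VPC to it
aws ec2 create-transit-gateway --description "central hub"
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0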

VPC before TGW VPC with TGW

On Thursday, there will be another keynote from Dr Werner Vogels, CTO of Amazon.com. Let's see if there will be other announcements.

This article AWS re:invent 2018 – Day 3 appeared first on Blog dbi services.

AWS re:invent 2018 – Day 2

Wed, 2018-11-28 14:43

Second day in Las Vegas for the AWS re:Invent conference. It was time to travel a little bit around the campus to attend some sessions at the Aria. The planning was still around databases and a bit of AWS knowledge.

The shuttle service worked perfectly in both directions with a reasonable travel time between the 2 hotels. But with such travel times, you can't always be in all the sessions you would like to see.

I started with a session about DevOps strategy where the speaker, Ajit Zadgaonkar, explained some rules to succeed with a DevOps strategy. Even if you start small, moving to DevOps is a movement that the whole company should be aware of. It's about teaching not only within the DevOps team but also letting other teams and businesses know about your work.

Then I saw 2 different interesting sessions about Aurora running on Amazon RDS. Aurora runs on the same platform as the other proposed engines (Oracle, SQL Server, PostgreSQL, MySQL and MariaDB). It means Aurora is fully managed by AWS.

The interesting part is that Aurora supports 2 different engines, MySQL or Postgres, and in both cases AWS claims that performance is a lot better in Aurora than in the community edition because it has been designed for the Cloud. One of the 2 sessions was a deep dive focusing on the Postgres side, and the storage layer of Aurora is totally different.

AWS Aurora Postgres storage

AWS is using shared storage across a region (like Frankfurt) and “replicates” pages to 6 different locations. According to them, this provides great resilience/durability/availability. To prevent a write performance bottleneck, a write is valid once 4 out of the 6 blocks have been written. In addition, Aurora is kind of redo-log based and doesn't send full pages/blocks to the storage, which reduces the amount of written data a lot. Below is a slide of a benchmark using pgbench.

Aurora Postgres benchmark

To continue my journey, I also went to basic sessions about the AWS infrastructure itself, and it's interesting to note that they think in advance about how to power their datacenters: 50% of the energy used by AWS datacenters comes from renewable sources like wind or solar. I followed this session remotely thanks to the overflow areas where you can attend a session currently ongoing in another hotel. You get a video stream of the session with the slides and a headset for the sound.

Invent overflow session

There are also 5 new regions planned in the near future, including 2 new locations in Europe: Milan, Bahrain, Stockholm, Hong Kong and Cape Town.

Even though there were already some announcements, on Wednesday morning we will have the keynote with Andy Jassy, CEO of AWS. I'm looking forward to this keynote.

This article AWS re:invent 2018 – Day 2 appeared first on Blog dbi services.

No more recovery.conf in PostgreSQL 12

Wed, 2018-11-28 13:41

Traditionally, everything related to recovery in PostgreSQL goes into recovery.conf. This is not only true for recovery settings but also for turning an instance into a replica which follows a master. Some days ago this commit landed in the PostgreSQL git repository. What that effectively means is that there will be no more recovery.conf starting with PostgreSQL 12. How does that work then? Let's do some tests.

Obviously you need the latest development version of PostgreSQL (if you are not sure how to do that, check here and here):

postgres@pgbox:/home/postgres/ [PGDEV] psql -X postgres
psql (12devel)
Type "help" for help.

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Let's look at the replica case first. When you do a pg_basebackup you can tell it to write a recovery.conf file (at least you could up to PostgreSQL 11). So what changed here?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write configuration for replication

When you compare that to a version before 12 you’ll notice the difference in wording:

postgres@pgbox:/home/postgres/ [PGDEV] /u01/app/postgres/product/10/db_4/bin/pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write recovery.conf for replication

The word “recovery.conf” is gone and it is now a more general statement about replication configuration. What does pg_basebackup do in PostgreSQL 12 when we ask it to write the configuration for recovery?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup -R -D /var/tmp/pg12s/

We do not have a recovery.conf file:

postgres@pgbox:/home/postgres/ [PGDEV] ls -la /var/tmp/pg12s/
total 64
drwxr-xr-x. 20 postgres postgres  4096 Nov 27 20:19 .
drwxrwxrwt.  6 root     root       256 Nov 27 20:19 ..
-rw-------.  1 postgres postgres   224 Nov 27 20:19 backup_label
drwx------.  5 postgres postgres    41 Nov 27 20:19 base
-rw-------.  1 postgres postgres    33 Nov 27 20:19 current_logfiles
drwx------.  2 postgres postgres  4096 Nov 27 20:19 global
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_commit_ts
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_dynshmem
-rw-------.  1 postgres postgres  4513 Nov 27 20:19 pg_hba.conf
-rw-------.  1 postgres postgres  1636 Nov 27 20:19 pg_ident.conf
drwxr-xr-x.  2 postgres postgres    32 Nov 27 20:19 pg_log
drwx------.  4 postgres postgres    68 Nov 27 20:19 pg_logical
drwx------.  4 postgres postgres    36 Nov 27 20:19 pg_multixact
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_notify
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_replslot
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_serial
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_snapshots
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat_tmp
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_subtrans
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_tblspc
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_twophase
-rw-------.  1 postgres postgres     3 Nov 27 20:19 PG_VERSION
drwx------.  3 postgres postgres    60 Nov 27 20:19 pg_wal
drwx------.  2 postgres postgres    18 Nov 27 20:19 pg_xact
-rw-------.  1 postgres postgres   390 Nov 27 20:19 postgresql.auto.conf
-rw-------.  1 postgres postgres 26000 Nov 27 20:19 postgresql.conf
-rw-------.  1 postgres postgres     0 Nov 27 20:19 standby.signal

Replica related configuration is appended to postgresql.auto.conf:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
primary_conninfo = 'user=postgres passfile=''/home/postgres/.pgpass'' port=5433 sslmode=prefer sslcompression=0 target_session_attrs=any'

But what about timeline and all the other settings? All these have been merged into the normal postgresql.[auto.]conf file as well:

postgres=# select name,setting from pg_settings where name like '%recovery%';
           name            | setting 
---------------------------+---------
 recovery_end_command      | 
 recovery_min_apply_delay  | 0
 recovery_target           | 
 recovery_target_action    | pause
 recovery_target_inclusive | on
 recovery_target_lsn       | 
 recovery_target_name      | 
 recovery_target_time      | 
 recovery_target_timeline  | 
 recovery_target_xid       | 
 trace_recovery_messages   | log
(11 rows)

So all the settings can now be set in one file. The remaining question is: how does the instance then know when it needs to go into recovery? Before PostgreSQL 12, the presence of the recovery.conf file told the instance to go into recovery. Now that the file is gone, there must be a new mechanism, and that is the “standby.signal” file in the case of a replica:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/standby.signal 
postgres@pgbox:/home/postgres/ [PGDEV] 

That file is empty and just tells PostgreSQL to go into recovery and then process the recovery related parameters which are now in postgresql.[auto.]conf. The same is true when a recovery is requested: The signal file in that case is “recovery.signal”.
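A minimal sketch of what a point-in-time recovery then looks like in PostgreSQL 12 (data directory, archive location and target time are placeholders):

# Sketch: point-in-time recovery without a recovery.conf
PGDATA=/var/tmp/pg12r                          # the restored base backup
cat >> $PGDATA/postgresql.conf <<'EOF'
restore_command = 'cp /var/tmp/archive/%f %p'
recovery_target_time = '2018-11-28 03:00:00'
recovery_target_action = 'promote'
EOF
touch $PGDATA/recovery.signal                  # ask for a targeted recovery instead of standby mode
pg_ctl -D $PGDATA start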

All in all that means there is one configuration file less to take care of, and that is good. The question will be how fast all the third-party tools catch up with that change.

This article No more recovery.conf in PostgreSQL 12 appeared first on Blog dbi services.
