Yann Neuhaus

dbi services technical blog

AWS re:invent 2018 – Day 2

Wed, 2018-11-28 14:43

Second day in Las Vegas for the AWS re:Invent conference. It was time to travel a little bit around the campus to attend some sessions at the Aria. My schedule was still centered on databases, with a bit of general AWS knowledge.

The shuttle service worked perfectly in both directions, with a reasonable travel time between the 2 hotels. But with such travel times, you can’t always be in all the sessions you would like to see.

I started with a session about DevOps strategy where the speaker, Ajit Zadgaonkar, explained some rules to succeed in a DevOps strategy. Even if you start small, moving to DevOps is a movement the whole company should be aware of. It’s about teaching not only within the DevOps team but also letting other teams and the business know about your work.

Then I saw 2 different interesting sessions about Aurora running on Amazon RDS. Aurora runs on the same platform as the other proposed engines (Oracle, SQL Server, PostgreSQL, MySQL and MariaDB). It means Aurora is fully managed by AWS.

The interesting part is that Aurora supports 2 different engines, MySQL or PostgreSQL, and in both cases AWS claims that performance is a lot better in Aurora than in the community edition because it has been designed for the cloud. One of the 2 sessions was a deep dive focusing on the PostgreSQL flavor, where the storage layer of Aurora is totally different.

AWS Aurora Postgres storage

AWS is using a shared storage layer across a region (like Frankfurt) and “replicates” pages to 6 different locations. According to them, this provides great resilience/durability/availability. To prevent a write performance bottleneck, a write is valid once 4 out of the 6 blocks have been written. In addition, Aurora is redo-log based and doesn’t send full pages/blocks to the storage, which greatly reduces the amount of written data. Below is a slide of a benchmark using pgbench.

Aurora Postgres benchmark
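For reference, a pgbench run similar to the ones behind such benchmarks could look like this (scale factor, client count and duration are my own illustrative values, not the ones AWS used):

pgbench -i -s 100 bench              # initialize the test database with scale factor 100
pgbench -c 16 -j 4 -T 300 bench      # 16 clients, 4 worker threads, 5 minutes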

To continue my journey, I also went to basic sessions about the AWS infrastructure itself. It’s interesting to note that they think ahead about how to power their datacenters: 50% of the energy used by AWS datacenters comes from renewable sources like wind or solar. I followed this session remotely thanks to the overflow areas, where you can attend a session currently ongoing in another hotel. You get a video stream of the session with the slides and a headset for the sound.

re:Invent overflow session

There are also 5 new regions planned in the near future (Milan, Bahrain, Stockholm, Hong Kong and Cape Town), including 2 new locations in Europe.

Even if there were already some announcements, on Wednesday morning we will have the keynote by Andy Jassy, CEO of AWS. I’m looking forward to it.


No more recovery.conf in PostgreSQL 12

Wed, 2018-11-28 13:41

Traditionally, everything related to recovery in PostgreSQL goes into recovery.conf. This is not only true for recovery settings but also for turning an instance into a replica which follows a master. Some days ago this commit landed in the PostgreSQL git repository. What that effectively means is that there will be no more recovery.conf starting with PostgreSQL 12. How does that work then? Let’s do some tests.

Obviously you need the latest development version of PostgreSQL (if you are not sure on how to do that check here and here):

postgres@pgbox:/home/postgres/ [PGDEV] psql -X postgres
psql (12devel)
Type "help" for help.

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Let’s look at the replica case first. When you do a pg_basebackup you can tell it to write a recovery.conf file (at least you could up to PostgreSQL 11). So what changed here:

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write configuration for replication

When you compare that to a version before 12 you’ll notice the difference in wording:

postgres@pgbox:/home/postgres/ [PGDEV] /u01/app/postgres/product/10/db_4/bin/pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write recovery.conf for replication

The word “recovery.conf” is gone and it is a more general statement about replication configuration now. What does pg_basebackup do in PostgreSQL 12 when we ask it to write the configuration for recovery:

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup -R -D /var/tmp/pg12s/

We do not have a recovery.conf file:

postgres@pgbox:/home/postgres/ [PGDEV] ls -la /var/tmp/pg12s/
total 64
drwxr-xr-x. 20 postgres postgres  4096 Nov 27 20:19 .
drwxrwxrwt.  6 root     root       256 Nov 27 20:19 ..
-rw-------.  1 postgres postgres   224 Nov 27 20:19 backup_label
drwx------.  5 postgres postgres    41 Nov 27 20:19 base
-rw-------.  1 postgres postgres    33 Nov 27 20:19 current_logfiles
drwx------.  2 postgres postgres  4096 Nov 27 20:19 global
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_commit_ts
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_dynshmem
-rw-------.  1 postgres postgres  4513 Nov 27 20:19 pg_hba.conf
-rw-------.  1 postgres postgres  1636 Nov 27 20:19 pg_ident.conf
drwxr-xr-x.  2 postgres postgres    32 Nov 27 20:19 pg_log
drwx------.  4 postgres postgres    68 Nov 27 20:19 pg_logical
drwx------.  4 postgres postgres    36 Nov 27 20:19 pg_multixact
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_notify
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_replslot
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_serial
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_snapshots
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat_tmp
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_subtrans
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_tblspc
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_twophase
-rw-------.  1 postgres postgres     3 Nov 27 20:19 PG_VERSION
drwx------.  3 postgres postgres    60 Nov 27 20:19 pg_wal
drwx------.  2 postgres postgres    18 Nov 27 20:19 pg_xact
-rw-------.  1 postgres postgres   390 Nov 27 20:19 postgresql.auto.conf
-rw-------.  1 postgres postgres 26000 Nov 27 20:19 postgresql.conf
-rw-------.  1 postgres postgres     0 Nov 27 20:19 standby.signal

Replica-related configuration is appended to postgresql.auto.conf:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
primary_conninfo = 'user=postgres passfile=''/home/postgres/.pgpass'' port=5433 sslmode=prefer sslcompression=0 target_session_attrs=any'

But what about the timeline and all the other settings? All of these have been merged into the normal postgresql.[auto.]conf file as well:

postgres=# select name,setting from pg_settings where name like '%recovery%';
           name            | setting 
---------------------------+---------
 recovery_end_command      | 
 recovery_min_apply_delay  | 0
 recovery_target           | 
 recovery_target_action    | pause
 recovery_target_inclusive | on
 recovery_target_lsn       | 
 recovery_target_name      | 
 recovery_target_time      | 
 recovery_target_timeline  | 
 recovery_target_xid       | 
 trace_recovery_messages   | log
(11 rows)

So all the settings can now be set in one file. The remaining question is: how does the instance know when it needs to go into recovery? Before PostgreSQL 12 the presence of the recovery.conf file told the instance to go into recovery. Now that the file is gone there must be a new mechanism, and that is the “standby.signal” file in the case of a replica:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/standby.signal 
postgres@pgbox:/home/postgres/ [PGDEV] 

That file is empty and just tells PostgreSQL to go into recovery and then process the recovery-related parameters which are now in postgresql.[auto.]conf. The same is true when a recovery is requested: the signal file in that case is “recovery.signal”.
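As a minimal sketch (the restore_command, the target time and the use of $PGDATA are illustrative assumptions), a point-in-time recovery in PostgreSQL 12 would therefore look like this:

echo "restore_command = 'cp /path/to/archive/%f %p'" >> $PGDATA/postgresql.conf
echo "recovery_target_time = '2018-11-27 20:00:00'" >> $PGDATA/postgresql.conf
touch $PGDATA/recovery.signal     # replaces the old recovery.conf as the trigger
pg_ctl -D $PGDATA start           # recovers to the target, then pauses (recovery_target_action)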

All in all that means there is one configuration file less to take care of, and that is good. The question will be how fast all the third-party tools catch up with that change.


No Fear of Container Technology (DOAG 2018)

Wed, 2018-11-28 01:46

I have been working in the IT industry for 30 years and have dealt with Oracle RDBMS systems time and again. For almost 4 years now, as a consultant at dbi services, I have been in close contact with Oracle databases, hence my visit to DOAG 2018.

With great interest I traveled to the DOAG in Nuremberg, having planned to attend various sessions on the topics of OpenShift and containers.

Why OpenShift? For some time now we have been seeing projects (PoCs) in this area at our customers. Red Hat offers a complete solution that includes all the components, so a start is possible much faster.

Interestingly, there is not only euphoria; there are also critical voices on this topic. But it reminds me of the time when hardware virtualization came up. Critical questions were asked back then too: Will this work? It is far too complex! This technology is only suitable for service providers, cloud providers, etc.

Reason enough, then, to listen to first experience reports at the DOAG and to attend talks on this topic.
 

So what is changing here?

After hardware virtualization, the next virtualization step follows: Docker (the use of containers).
The difference between hardware virtualization and Docker is best shown with a schematic diagram.

Schematic representation of hardware virtualization

Server virtualization
 

The difference in architecture between hardware virtualization and containers

The biggest difference: with hardware virtualization, every virtual server has its own complete operating system. With the container architecture this part largely disappears, which makes the individual container significantly smaller and above all more portable. Fewer resources are needed on the infrastructure, or significantly more containers can be run on the same infrastructure.
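A tiny illustration of that difference (just a sketch; the image and container names are arbitrary): several containers share the host kernel, so each one only carries its application and its libraries:

docker run -d --name web1 nginx:alpine   # first container: a few MB, no guest OS
docker run -d --name web2 nginx:alpine   # second container on the same kernel
docker stats --no-stream                 # shows the small per-container footprint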
 

OpenShift, the Red Hat solution for Docker, has the following architecture:
 

What can we expect from OpenShift, and what do we definitely have to consider?

– The next step in virtualization -> containers
– A complex infrastructure; with Red Hat everything comes from a single source (incl. Kubernetes)
– The start into the container world must be very well prepared
– The technology is still very young; a lot will still change here
– If possible, run a PoC, and do not wait too long
– Concepts and processes are absolutely required
 

My conclusion

My first visit to the DOAG provided me with very valuable information and insights on the two topics OpenShift and containers. I will be looking into Red Hat’s solution in particular in the near future. I am sure that we are once again at a very interesting technological turning point: the start into container infrastructures with complete solutions like OpenShift from Red Hat. Despite all the euphoria, however, the start into this technology should be planned and controlled; in particular, experience should first be gathered in a PoC. The step of running an OpenShift infrastructure in a production environment must, based on that experience, be well planned and controlled in order not to run into the same problems as with hardware virtualization back then!

[Image: container chaos]

Production operation requires security, stability and continuity, and all components should be kept up to date as well. Monitoring and backup/restore are also topics you have to deal with before going live. This technology certainly enables more speed, but it needs rules and processes so that after a while the container world does not suddenly look like the picture above!


AWS re:invent 2018 – Day 1

Tue, 2018-11-27 10:20

Yesterday was my first day at the AWS re:Invent conference. The venue is quite impressive: the conference is split between 6 hotels where you can attend different types of sessions, including chalk talks, keynotes, hands-on labs and workshops. For my first day, I stayed in the same area in The Venetian to make it easy.


The walking distance between some places is quite big, so you have to plan the day carefully to be able to see what you want to see. Fortunately there is a shuttle service, and I’ll move a bit more between hotels tomorrow. You also need to reserve your seat and be there in advance to be sure to enter the room.

In my own case, I wanted to start the week with a chalk talk about Oracle licensing in the cloud. As I was not able to reserve a seat I had to wait in the walk-up line. The session was full: Oracle still interests lots of people, and licensing, besides performance, is still a concern for lots of customers when they start planning a move to the public cloud.

I have been working with AWS services for a bit more than a year at a customer, but there is still a lot to learn and understand about AWS. That’s why I also attended an introductory session about VPC (Virtual Private Cloud) to better understand the network options when going to AWS. To make it simple, a VPC allows you to have a private network configured as you wish inside AWS. You control the IP range you would like to use and you can configure the routing tables and so on.
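As a hedged sketch of what that looks like with the AWS CLI (the CIDR blocks and the vpc id are illustrative placeholders):

aws ec2 create-vpc --cidr-block 10.0.0.0/16                           # choose your own IP range
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24   # carve out a subnet
aws ec2 create-route-table --vpc-id vpc-0abc123                       # routing stays under your control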

I also tried to attend a workshop about running Oracle on Amazon RDS, the AWS managed database service, and especially about migrating from Oracle to the Amazon Aurora database using its PostgreSQL compatibility. The goal was to use 2 AWS products to run the migration: the AWS Schema Conversion Tool and the AWS Database Migration Service. Unfortunately, some issues with the WiFi constantly changing my IP, and a limitation on my brand new AWS account that required additional checks from Amazon, prevented me from getting to the end of the workshop. But I got some credits to try it by myself a bit later, so I’ll most probably try the Schema Conversion Tool.

Some DBAs may worry about managed database services or the announcements from Oracle about the autonomous database, but I agree with the slide below shown by the AWS speaker during the workshop. I personally think that DBAs won’t disappear. Data itself and applications will still be around for quite a long time; the job may evolve and we will spend more time on the application/data side than before.

DBA role in the Cloud

Today is another day; let’s forget a bit about the DBA part and try to see more about DevOps…


SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command?

Tue, 2018-11-27 07:08

Did you ever use the famous DBCC PAGE command? Folks interested in digging further into SQL Server storage have been using it for a while. We also use it during our SQL Server performance workshop, by the way. But the usage of such a command may sometimes go beyond that, and it may be used in some troubleshooting scenarios. For instance, last week I had to investigate a locking contention scenario where the related pages (resource type) were the only way to identify which objects were involved. SQL Server 2019 provides the sys.dm_db_page_info system function, which can be useful in this kind of scenario.


To simulate locks, let’s start updating some rows in the dbo.bigTransactionHistory table as follows:

USE AdventureWorks_dbi;
GO

BEGIN TRAN;

UPDATE TOP (1000) dbo.bigTransactionHistory
SET Quantity = Quantity + 1

 

Now let’s take a look at sys.dm_tran_locks to get a picture of the locks held by the above query:

SELECT 
	resource_type,
	COUNT(*) AS nb_locks
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
GROUP BY
	resource_type

 

[Image: query output showing the number of locks per resource type]

Referring to my customer scenario, let’s say I wanted to investigate the locks and the objects involved. For the simplicity of the demo I focused only on the sys.dm_tran_locks DMV, but generally speaking you would probably add other ones such as sys.dm_exec_requests, sys.dm_exec_sessions etc …

SELECT 
	tl.resource_database_id,
	SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
	SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
	AND tl.resource_type = 'PAGE'

 

[Image: query output showing the database_id, file_id and page_id of the PAGE locks]

The sys.dm_tran_locks DMV contains the resource_description column, which provides contextual information about the resource locked by my query. The resource_description column therefore contains [file_id:page_id] when the resource_type is PAGE.

SQL Server 2019 will probably send the DBCC PAGE command back to the stone age for some tasks, but let’s start with this old command as follows:

DBCC PAGE (5, 1, 403636, 3) WITH TABLERESULTS;

 

[Image: DBCC PAGE output showing the page header with Metadata: ObjectId]

The DBCC PAGE did the job and provided an output that includes the page header section where the Metadata: ObjectId is stored. We may then use it with the OBJECT_NAME() function to get the corresponding table name.

SELECT OBJECT_NAME(695673526)

 

[Image: OBJECT_NAME() output showing the corresponding table name]

But let’s say that using this command may be slightly controversial, because it is still an undocumented command, and there is no need to explain here how dangerous it can be to use it in production. Honestly, I never encountered a situation where DBCC PAGE was an issue, but I cannot provide a full guarantee and it is obviously at your own risk. In addition, applying DBCC PAGE to all the rows returned from my previous query can be a little bit tricky, and this is where the new sys.dm_db_page_info comes into play.

;WITH tran_locks
AS
(
	SELECT 
		tl.resource_database_id,
		SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
		SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
	FROM 
		sys.dm_tran_locks AS tl
	WHERE 
		tl.request_session_id = 52
		AND tl.resource_type = 'PAGE'
)
SELECT 
	OBJECT_NAME(page_info.object_id) AS table_name,
	page_info.*
FROM 
	tran_locks AS t
CROSS APPLY 
	sys.dm_db_page_info(t.resource_database_id, t.file_id, t.page_id,DEFAULT) AS page_info

 

This system function provides plenty of information, mainly coming from the page header, in tabular format, and makes my previous requirement easier to address, as shown below.

[Image: sys.dm_db_page_info output joined with the table name]

The good news is that this function is officially documented, but un/fortunately (depending on your point of view) for deep-dive studies you will still have to rely on DBCC PAGE.

Happy troubleshooting!

 

 


Strange behavior when patching GI/ASM

Mon, 2018-11-26 12:45

I tried to apply a patch to my 18.3.0 GI/ASM two-node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys had to be exchanged for root as well.
So I connected as root and did an ssh from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further, and the statement with the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don’t know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange from the root user to the same host.
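For reference, this is roughly the key exchange that finally made opatchauto happy (using ssh-keyscan here is my shortcut, an assumption rather than what I originally typed; interactively accepting the keys with a manual ssh works just as well):

# as root on dbserver01: add the host keys of dbserver02 AND of dbserver01 itself
ssh-keyscan dbserver01 dbserver02 >> /root/.ssh/known_hosts
# as root on dbserver02: add the host key of dbserver01
ssh-keyscan dbserver01 >> /root/.ssh/known_hosts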

Did you get the same problem, or do you know a better way to do that? Write me a comment!


DOAG 2018: OVM or KVM on ODA?

Mon, 2018-11-26 03:51

The DOAG 2018 is over; for me the most important topics were in the field of licensing. The uncertainty among users is great. Let’s take virtualization on the ODA as an example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses, uses Data Guard as disaster protection on 2 ODA X7-2M systems and wants to virtualize; he also has 2 application servers that are to be virtualized as well.

Sure, if I use the HA variant of the ODA or Standard Edition, this does not concern me: there, OVM is used as the hypervisor, and it allows hard partitioning. The database system (ODA_BASE) automatically gets its own CPU pool in a virtualized deployment; additional VMs can be distributed over the rest of the CPUs.

On the small and medium models only KVM is available as a hypervisor. This has some limitations: on the one hand there is no virtualized deployment of the ODA 2S/2M systems, on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare metal system and the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the systems in bare metal mode, activate 2 cores on each system, set up the database and set up Data Guard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notices that 4 cores are needed per application server. Of the 36 cores (per system) only 2 are activated, so he activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.

But: the customer has thereby also changed his Oracle EE licensing, namely from 1 EE CPU license to 3 per ODA, so overall he has to buy 6 CPU licenses (about $285k according to the price list)!

Now Oracle promotes KVM as the hypervisor of choice for the future. However, this will not work without hard partitioning under KVM or support for databases in KVM machines.

Tammy Bednar (Oracle’s Oracle Database Appliance Product Manager) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the “small” systems much more attractive for consolidation. It would also help to simplify the “license jungle” a bit and give customers a bit more certainty. I am curious what will come.


AWS re:invent 2018 warm up

Mon, 2018-11-26 03:07

The cloud is now part of our job, so we have to take a deeper look at the available services to understand them and take best advantage of them. The annual AWS conference re:Invent started tonight in The Venetian in Las Vegas and will last until Friday.


Today was a bit special because there were no sessions yet, but instead I was able to take part in a ride to Red Rock Canyon on a Harley Davidson motorbike.

It’s a 56-mile ride and you can enjoy beautiful landscapes, very different from the city and the lights of the casinos. We were a small group of around 13 bikes, and even if it was a bit cold it was a really nice tour. I really recommend people in Vegas escape the city for a few hours to discover places like Red Rock or the Valley of Fire.

Harley Davidson ride to Red Rock Canyon

 

Then the conference opened with Midnight Madness and an attempt to beat the world record for ensemble air drumming. I don’t know yet if we achieved the goal, but I tried to help and participated in the challenge.

re:Invent Midnight Madness

The 1st launch of the week also happened this evening: a new service called AWS RoboMaker. You can now use the AWS cloud to develop new robotics applications and use other services like Lex or Polly to allow your robot to understand voice commands and answer them, for example.

Tomorrow the real thing begins with hands-on labs and some sessions, stay tuned.


Flashback to the DOAG conference 2018

Sat, 2018-11-24 14:40

Each year since the company’s creation in 2010, dbi services has attended the DOAG conference in Nürnberg. Since 2013 we even have a booth.

The primary goal of participating in the DOAG conference is to get an overview of the main trends in the Oracle business. Furthermore, this conference and our booth allow us to welcome our Swiss and German customers and thank them for their trust. They’re always pleased to receive some nice Swiss chocolate produced in Delémont (Switzerland), the city of our headquarters.

But those are not the only reasons why we attend this event. The DOAG conference is also a way to promote our expertise through our speakers and to thank our consultants for their performance throughout the year. We consider the conference a way to train people and improve their skills.

Finally, some nice social evenings take place: first of all the Swiss Oracle User Group (SOUG) “Schweizer Abend” on Tuesday evening, and secondly the “DOAG party” on Wednesday evening. dbi services being active in the Swiss Oracle User Group, we always keep a deep link to the Oracle community.

As Chief Sales Officer I tried to get an overview of the main technical “Oracle trends” through the success of our sessions (9 in total) all over the conference, “success” being measured in terms of the number of participants in those sessions.

At first glance I observed a kind of “stagnation” of the interest in cloud topics. I can provide several pieces of evidence and explanations for that. First of all, the keynote on the first day, presenting a study of German customers concerning cloud adoption, didn’t reveal any useful information in my opinion. Cloud adoption increases, however there are still some limitations in the deployment of cloud solutions because of security issues, in particular the CLOUD Act.

Another possible reason for the “small” interest in cloud topics during the conference, in my opinion, is that the cloud has become a kind of “commodity”. Furthermore, we all have to admit that Oracle definitely does not have a leadership position in this business. Amazon, Azure and Google are clearly the leaders, and Oracle remains a “small” challenger.

Our session by Thomas Rein did not have that many attendees, even though we presented a concrete use case about Oracle Cloud usage and adoption. The DOAG conference is a user group conference; techies mostly attend it, and techies have to deal with concrete issues, and currently the Oracle Cloud is not one of them.

So what were the “main topics”, according to what I could observe?

Open source was a huge success for us: both the MySQL and the two PostgreSQL sessions were very, very successful, thanks to Elisa Usai and Daniel Westermann.

Some general topics like an “introduction to Blockchain” also had huge success, thanks to Alain Lacour for this successful session.

Finally, the “classics”, like DB tuning on the old-fashioned “on prem” architectures, also had huge success, thanks to our technology leader Clemens Bleile and to Jérôme Witt, who explained all about I/O internals (which are of course deeply linked with performance issues).

Thanks to our other speakers: Pascal Brand (Implement SAML 2.0 SSO in WLS and IDM Federation Services) and our CEO David Hueber (ODA HA: What about VMs and backup?), who presented some more “focused” topics.

I use this blog post to also thank the Scope Alliance, and in particular Esentri, for the very nice party on Wednesday evening; beside hard work, a hard party is also necessary :-)

Daniel Westermann with our customer “die Mobiliar” on the stage, full room.


My DOAG Debut

Fri, 2018-11-23 08:50

Unbelievable! After more than 10 years working in the Oracle database environment, this year was my first participation in the DOAG Conference + Exhibition.

After a relaxed trip to Nürnberg with all the power our small car could provide on the German Autobahn, we arrived at the Messezentrum.
With the combined power of our dbi services team, the booth was ready in no time, so we could switch to the more relaxed part of the day and ended up in our hotel’s bar with other DOAG participants.

The next few days were a firework of valuable sessions, stimulating discussions and some after-hours parties, which made me think about my life decisions and led me to the question: why did it take me so long to participate in the DOAG Conference + Exhibition?

It would make this post unreadably long and boring if I summed up all the sessions I attended.
So I will just mention a few highlights with the links to the presentations:


Boxing Gloves Vectors by Creativology.pk

And of course, what must be mentioned is The Battle: Oracle vs. Postgres: Jan Karremans vs. Daniel Westermann

The red boxing glove (for Oracle) was worn by Daniel Westermann, an Oracle expert for many, many years who now is the Open Infrastructure Technology Leader @ dbi services, while Jan Karremans, Senior Sales Engineer at EnterpriseDB, put on the blue glove (for Postgres). The room was fully packed with over 200 people, who had more sympathy for Oracle.


The Battle: Oracle vs. Postgres

Knowing how much Daniel loves the open source database, it was inspiring to see how eloquently he defended the Oracle system and brought Jan into trouble multiple times.
It was a good and brave fight between the opponents, in which Daniel had the better arguments and won on points.
Next time I would like to see Daniel on the other side, defending Postgres, because I am sure he could take down almost every opponent.

In the end, this DOAG was a wonderful experience and I am sure it won’t take another 10 years until I come back.

PS: I could write about the after party, but as you know, what happens at the after party stays at the after party, except the headache; this little b… stays a little bit longer.

PPS: On the last day I got a nice little present from virtual7 for winning the F1 grand prix challenge. I know exactly at which dbi event we will open this bottle, stay tuned…


DOAG 2018: Key word: “Docker”

Fri, 2018-11-23 06:34


In my blog about the DOAG last year I said that I saw a growing interest in automatic deployment tools and Docker containers. This year confirmed that interest. There were a lot of presentations about Docker containers, Kubernetes and OpenShift, in the database stream, the DevOps stream and also the middleware one. I counted more than 25 sessions where the keyword Docker appeared in the abstract.

Despite my best intentions, I was not able to attend all of them. There were simply too many.

One of the interesting presentations that caught my attention was the following: “Management von Docker Containern mit Openshift & Kubernetes” by Heiko Stein. He gave us a very good overview of the services of Kubernetes and OpenShift and showed us how they can be complementary.

Another one was about monitoring and diagnosing the performance of a Java application (OpenJDK 11) running in a Docker container:
“Monitoring of JVM in Docker to Diagnose Performance Issues”. This one was interesting on several levels, as it talked about Docker containers, OpenJDK 11 and the tools delivered with it. Monitoring applications and diagnosing issues are always interesting subjects to follow, to get some hints from someone else’s experience.

The last one I will list, but not the least, was “MS Docker: 42 Tips & Tricks for Working with Containers“. In summary, this one was all you ever wanted to know about Docker. But a 45-minute session is really too short to get everything from it :-(.

Those presentations just made my interest in those technologies grow faster.


My first presentation at the DOAG – “MySQL 8.0 Community: Ready for GDPR?”

Fri, 2018-11-23 02:35

This year I participated for the first time in the DOAG, the conference which takes place in November in Nuremberg. Here are some key figures about this event: Oracle and other technologies, 2000 visitors, more than 400 sessions, more than 800 abstracts submitted, exhibitors…
And for me everything started when, in June, I decided to send an abstract for a MySQL session.

Preparation

I’ve been working on MySQL for several years. At the beginning of this year, I started testing the new 8.0 version. We live in an age where security is more important than ever; GDPR and other regulations force us to review subjects such as privacy and data policies. MySQL introduced lots of security improvements in this latest version.
So my session proposal for the DOAG was the following:

MySQL 8.0 Community – Ready for GDPR?
One of the most topical subjects today is security.
The new MySQL 8.0 version introduces several improvements in this area, such as:
– Encryption of undo and redo logs, which enriches the existing datafile encryption
– A password rotation policy, to prevent users from always using the same passwords
– The new caching_sha2_password plugin, which lets you manage authentication in a faster and more secure way
– SQL roles, to simplify user access rights management
So… let’s have a look!

When I received the e-mail telling me that my abstract had been accepted, I was happy and stressed at the same time.
I directly started testing and studying these new features more and more, writing my slides and preparing some demos and my speech in English. I know, for most of you this is simple, but – hey – this would be my first session ever! ;)
Working at dbi services also means the possibility to present a session to colleagues, to test it and get some feedback during our internal events, before presenting the same session at external events. So in September I presented my session a first time, and this helped me feel more comfortable about presenting. Time passed and November was suddenly there…

Arriving at the DOAG

So on 19th November I caught my flight, and at 7pm I was in Nuremberg. The day after, I arrived at the conference center.
My session was planned for 3pm, so I had some time in the morning to visit some booths and meet the people I wanted to see (Oracle MySQL, Quest, EDB, and my colleagues at the dbi services booth).
And I also got some useful tips from my colleagues to calm my stress and better manage my session: take a few seconds before starting to talk to catch the visual attention of the audience, breathe correctly, visit the room beforehand, and so on (thank you guys for your support during the last weeks!).

My session

The expected moment came: my VMs were running for the demos, my slides were ready, and some people arrived in the room.
I started my talk with a little introduction to GDPR, explaining the importance of having privacy and data policies in our hyper-connected world. This aspect let me make the link to the fact that MySQL 8.0 came out with lots of improvements in terms of security.
So I could finally go deeper into the technical part to explain these important new features:

- SQL Roles:
Thanks to roles, user administration is faster and grant handling is managed in a centralized way. During the session I did a demo to explain how roles are created and activated in MySQL, and I used the yEd desktop application to generate a diagram of the whole role structure from a graphml file.
For more details about roles, read my previous blog and the MySQL Documentation.
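As a minimal sketch of the syntax (role, schema and user names are of course illustrative):

CREATE ROLE 'app_read';
GRANT SELECT ON app_db.* TO 'app_read';
CREATE USER 'elisa'@'%' IDENTIFIED BY 'S3cure!Pwd';
GRANT 'app_read' TO 'elisa'@'%';
SET DEFAULT ROLE 'app_read' TO 'elisa'@'%';   -- activated automatically at login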

- Password Reuse Policy:
It prevents users from reusing previous passwords. This can be activated based on the number of password changes (with the password_history system variable) or on the time elapsed (password_reuse_interval), and it does not apply to privileged accounts.
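A small example (the values are illustrative):

-- globally: keep the last 5 passwords and block reuse for 365 days
SET PERSIST password_history = 5;
SET PERSIST password_reuse_interval = 365;
-- or per account:
ALTER USER 'elisa'@'%' PASSWORD HISTORY 5 PASSWORD REUSE INTERVAL 365 DAY;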

- Password Verification Policy:
If this feature is activated, attempts to change an account password require the current password to be specified first.
For more details about password verification policy, read my previous blog and the MySQL Documentation.
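For example (the account name is illustrative):

ALTER USER 'elisa'@'%' PASSWORD REQUIRE CURRENT;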

- Validate Password Component:
It was already there in previous versions, but now it is a component instead of a plugin. For statements like ALTER|CREATE USER, GRANT and SET PASSWORD, it checks the password of a user account against the policy we defined (LOW, MEDIUM or HIGH) and rejects the password if it’s too weak.
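Installing the component and choosing a policy looks like this (MEDIUM is just an illustrative choice):

INSTALL COMPONENT 'file://component_validate_password';
SET PERSIST validate_password.policy = 'MEDIUM';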

- InnoDB Tablespace Encryption:
It’s a 2-tier encryption architecture, based on a master key and tablespace keys. When a table is encrypted, a tablespace key is encrypted and stored in the tablespace header. When a user wants to access his data, the master key is used to decrypt the tablespace key. So during the session I explained how it works, what the requirements are and how we can set up this feature. I also did a demo to show how we can extract some clear-text data without connecting to the MySQL server, which is not possible anymore once encryption is activated.
This feature has been there since MySQL 5.7.11, but it helped me introduce the next chapter.
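As a hedged sketch of the setup (with the file-based keyring as an example; schema and table names are illustrative):

-- in my.cnf, a keyring plugin must be loaded before InnoDB starts:
--   early-plugin-load = keyring_file.so
CREATE TABLE app_db.customers (id INT PRIMARY KEY, name VARCHAR(100)) ENCRYPTION='Y';
ALTER TABLE app_db.orders ENCRYPTION='Y';   -- encrypt an existing table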

- InnoDB Redo/Undo Log Encryption:
Redo log data is encrypted/decrypted with the tablespace encryption key, which is stored in the header of ib_logfile0. Through a demo I explained what the requirements are, how to set it up and what we have to think about before activating this option. And I showed how we can extract some sensitive data from the redo log files if encryption is turned off.
The same goes for the encryption of the InnoDB undo log files, which can be activated with the innodb_undo_log_encrypt system variable.
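Both settings are dynamic in MySQL 8.0, so a sketch of enabling them is simply:

SET PERSIST innodb_redo_log_encrypt = ON;
SET PERSIST innodb_undo_log_encrypt = ON;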

- caching_sha2_password Plugin:
In MySQL 8.0 the new caching_sha2_password plugin makes authentication as strong as its predecessor (it still uses the SHA-256 password hashing method) but at the same time faster: a cache on the server side lets user accounts that have already connected once bypass the full authentication.
Here is the schema through which I explained the whole authentication process using RSA key pairs:
[Image: caching_sha2_password authentication schema]

A little conclusion

Participating in the DOAG and presenting there has been a very important professional, human and social experience for me. I went beyond my limits, learned lots of new things thanks to the other speakers’ sessions, met new people working in IT, and had fun with colleagues during some spare time. This was my first participation in a conference; it will not be the last one. Why didn’t I start that before? ;)


Technical and non-technical sessions at the DOAG 2018

Thu, 2018-11-22 19:03

The amazing DOAG 2018 conference is over now. As every year we saw great technical as well as great non-technical sessions. What impressed me was the non-technical presentation “Zurück an die Arbeit – Wie aus Business-Theatern wieder echte Unternehmen werden” (back to work – how business theatres become real businesses again) given by Lars Vollmer. It was very funny, but also thought-provoking. Lars started with the provocative statement that people work too little. Not in terms of time, but in terms of what people do. I.e. lots of things they do look like work, but actually are not: meetings, yearly talks, reports, presentations, etc. At one point Lars started talking about dabbawalas in India to show that people may work with the highest quality and totally independently, without any hierarchical structure. It’s a very old concept: as lunch is too expensive in some Indian cities, there is a service to bring lunch in boxes from home to work. The people delivering the lunch boxes work totally independently, they have no boss and often are not able to read. But still, they are able to hand over the lunch boxes from dabbawala to dabbawala until they finally arrive at the destination. The independent workers provide the service with an unbelievably high quality, i.e. out of 6 million lunch boxes only 1 is not delivered to the correct target. That’s impressive. See e.g. Wikipedia Dabbawala for it.

One of the technical sessions which I appreciated being able to attend was about “Oracle’s kernel debug, diagnostics & tracing infrastructure”, given by Stefan Köhler.
That brought back to mind my early days at Oracle Support, with statements like


alter system set events '<EVENT_NUMBER> trace name context forever, level <X>';

However, the time of setting numeric events (e.g. 10046) is over as Oracle uses the UTS (Unified Tracing Service) for new events only and already maps some numbers to an Event++.

An example for UTS is:


alter session set events 'trace[RDBMS.SQL_Compiler.*][SQL: 869cv4hgb868z] disk=highest';

I.e. the section within the second pair of brackets is the scope. In the example above that means: create a 10053 trace for SQL_ID 869cv4hgb868z once it is parsed.

To make sure that events are propagated from the SGA to all sessions’ PGAs, Oracle introduced a new parameter “_evt_system_event_propagation” in 11g. Unfortunately that feature was broken in Oracle 12.2 (bugs #25989066 & #25994378) and fixed in Oracle 18c. See also the comments in the blog Enable 10046 tracing for a specific SQL.

It’s sad that the DOAG 2018 conference is over, but we are looking forward to an interesting event in 2019. The world is changing towards more open source software used in businesses, and it will be interesting to see how Oracle (and also the DOAG) will react to that.


DOAG 2018 – Fazit: it’s not only about Oracle anymore

Thu, 2018-11-22 05:38

Amazing conference, amazing people and an awesome party like every year :mrgreen: and of course great networking.
This year, I was quite impressed by the number of different technologies. Talks were not only centered around the big red O. As already summarized by my colleague Alain yesterday:

DOAG 2018 – Not only database – Docker and Kubernetes rule the world

Open source RDBMS, containers and microservices are rising much more quickly than cloud computing, even if in some cases both are linked/mixed together.

Of course, a lot of Oracle RDBMS presentations were really good and worthwhile for partners as well as for customers. Some were centered around new features and the new release model, but surprisingly, in spite of the cloud computing trend, all sessions about Oracle and Linux basics were very popular. It makes me think that we are at the dawn of a change in how database administrators integrate the RDBMS into the arising DevOps organization within their companies; DevOps requires stuff running fast, without any long-running, complicated process.
It sounds like the opposite of the database purpose, which implies stability, performance, availability, etc.
So what? If the developers are constantly looking for new technologies, why shall we (DBAs) not look for alternative RDBMS? Open source RDBMS?

In my opinion, this is quite a good summary of what I felt during the conference and what I have felt every day at my customers as a consultant over the past five years.
Interestingly, Paolo Kreth (Schweizerische Mobiliar) had a talk titled “Hilfe, die Open Source DBs kommen!” (Help, the open source DBs are coming!) which described exactly that “feeling” from the opposite, customer side, with a real-life customer case.


Thanks to Paolo for sharing his mindset and the spirit of IT @ die Mobiliar. As a consultant, I fully agree and can confirm that the concepts he introduced match reality, with more or less success.

Basically, the more people communicate and share expertise (whatever the budget), the more successful actions will be. This applies of course to DevOps, but also at the enterprise level. It indeed matches our company values, which is also why we have held the “Great Place To Work” label for the second year in a row.


If you share the same mindset, feel free to send us your CV here.

 

Last but not least, see you next year (again) :-)


DOAG 2018 – What to learn from a battle on IT technologies?

Wed, 2018-11-21 23:54

This year marks my 6th participation with dbi services at the DOAG Conference + Exhibition in Nuremberg (as a “non-techie” attendee, needless to say), but it was my very first battle on IT technologies. And it was fun!

On the DOAG 2018 Conference + Exhibition

DOAG 2018 Conference + Exhibition is taking place November 20 – 23, 2018 in Nuremberg. Participants have the opportunity to attend a three-day lecture program with more than 400 talks and international top speakers, plus a wide choice of workshops and community activities. This is a great opportunity to expand your knowledge and benefit from the know-how of the Oracle community.


On dbi services at the DOAG 2018

dbi services has attended the DOAG with sessions and a booth every year since 2013. Our consultants share their knowledge within the German-speaking Oracle community, this year with 9 technical sessions. Customers and contacts are welcome during the sessions and at booth number 242 on the 2nd floor. Every day a prize draw takes place at the booth, with a Swiss watch and many other things to win, which guarantees a relaxed and fun atmosphere.


On battles at community events

Usually, techies present specific topics on one specific technology, i.e. Oracle new features, new versions, interesting findings, tools and practices. However, it may happen that some speakers prefer to perform as a “team”, probably because they are too shy to come on stage alone… especially in big rooms with 1 or more huge screens behind the speaker, where he/she spends the 45-minute presentation feeling his/her “maxi-me” at his/her back!

mini-me and maxi-me

Anyway, Jan Karremans, Senior Sales Engineer for EnterpriseDB / EDB Postgres, and our Daniel Westermann, Senior Consultant and Technology Leader Open Infrastructure at dbi services, decided to have a battle on a funny topic: Oracle vs. PostgreSQL. And some situations were really funny indeed. The opponents went through critical topics like budget, scalability, security, performance and administration. Daniel was chosen to represent the Oracle DB technologies, while Jan represented PostgreSQL. No need to say to which side the balance tilted regarding budget considerations. But in general I would say that the battle turned a little in favor of Oracle.


What does the battle tell us about IT technologies?

The battle could have been even bloodier. Some of us certainly expected it. Daniel started the show with an arrogant “something that costs nothing is useless” that brought fire to the stage and lots of laughs from the spectators. But both counterparts remained quite fair in the end. And the battle became a real opportunity to compare, more or less, two different DB technologies and approaches.

In the end, the interest was huge: more than 250 people attended the session, in a room that contains only 200 seats!


Conclusion

The level of attendance was huge, and not for nothing. The fun of attending a battle was one of the reasons, for sure. But having the chance to get a clear picture of which IT solution suits best is critical. So what about getting a neutral and independent view for evaluating IT technologies and designing future projects?

See you soon for more battles and feasibility studies ;-)

 


DOAG 2018 – Not only database – Docker and Kubernetes rule the world

Wed, 2018-11-21 11:12

The DOAG Konferenz is about everything around the Oracle database and Oracle technologies.
But this year, as stated in the blog of Daniel Westermann, more and more open source as well as “hype” subjects showed up, among which Docker and Kubernetes are very popular.
Looking at the program we can see about 20 presentations on those 2 technologies:

  • DevOps mit der Oracle Datenbank
  • Oracle, PostgreSQL, Docker und Kubernetes bei der Mobiliar
  • Docker und die Oracle Fusion Middleware im BPM- und Forms-Bereich
  • Pimp your DevOps with Docker: An Oracle BI Example
  • Docker Security
  • Dockerize It – Mit APEX in die Amazon Cloud
  • Using Vagrant and Docker For APEX Development
  • Cloud Perspective: Kubernetes is like an app server, but more cloudy
  • Management von Docker Containern mit Openshift & Kubernetes
  • DevOps Supercharged with Docker on Exadata
  • Einführung in Kubernetes
  • Docker for Database Developers
  • Oracle Container Workloads mit Kubernetes auf AWS?
  • Practical Guide to Oracle Virtual Environments
  • Orchestrierung & Docker für DBAs
  • Monitoring of JVM in Docker to Diagnose Performance Issues
  • Container-native Entwicklung und Deployment
  • Alternativen des Betriebs von Weblogic mit Kubernetes/Docker
  • MS Docker: 42 Tips & Tricks for Working with Containers

Most of the ones my colleagues or I had a chance to attend were crowded.
Docker has been growing for years, now replacing the usage of full-blown VMs step by step.
Kubernetes is capitalizing on that success by providing complementary tools.
This is not only the future, it’s today…
… so have a deeper look into them and follow up.


Funny session Oracle vs PostgreSQL battle at the #DOAG2018

Wed, 2018-11-21 10:14

As every year, dbi services is present at the DOAG with a nice booth and many presentations. Today my colleague Daniel Westermann had a very funny session with Jan Karremans: the “Oracle vs PostgreSQL” battle. Who would defend which technology was not defined beforehand :-) and, without surprise, Daniel got the Oracle side. Yet since last year Daniel has been convinced by the PostgreSQL technology.


Daniel started the battle with an aggressive opening towards Jan, “Was gratis ist taugt nichts!” (what is free is worth nothing!), and so the battle was on!

During the battle many topics were addressed, such as license and support costs, where Oracle was compared to community PostgreSQL. For me this was not fair, because the cost of the Oracle monster and of community PostgreSQL are simply not comparable. I would have preferred them to compare Oracle to EDB right from the beginning of the battle.

What I will take away from this battle:

First, many people are interested in this “Oracle vs PostgreSQL” topic. The room was completely full and more than 50 people were standing, so around 250 people attended this session, which is very impressive!

Second, the participants also took an active part in the battle; they wanted information and answers on some topics. This shows that lots of companies are currently asking themselves this question.

Third, the battle for PostgreSQL without including the features from EDB was not well balanced. Therefore I would suggest redoing the battle between Oracle and EDB.

Last but not least, I trust Daniel more when he speaks about PostgreSQL than when he talks about Oracle, because when he says “with the Oracle Autonomous Database you don’t need DBAs anymore” I can’t believe him :-!


DOAG 2018, more and more open source

Wed, 2018-11-21 04:35

We have been speaking about the increasing interest in open source technologies for several years, and now, in 2018, you even feel it at the DOAG. There are sessions about PostgreSQL, MariaDB, Docker, Kubernetes and much more. As usual we had to do our hard preparation work before the conference opened to the public and set up our booth, and Michael almost reached his limits (and maybe he was not even sure what he was doing there :) ).


Jerome kicked off the DOAG 2018 for us with one of the very first sessions and talked about “Back to the roots: Oracle Datenbanken I/O Management”. That is still an interesting topic, and even though it was a session early in the morning the room was almost full.

Elisa continued in the afternoon with her session about MySQL and GDPR, deep diving into MySQL data and redo log files.
A very well prepared and interesting talk.

David followed with his favorite topic, which is the ODA, with the session “ODA HA, what about VM backups?”.

During the sessions not much is happening at our booth, so there was time for internal discussions.

Finally, in the late afternoon, it was Hans’ (from die Mobiliar) and my turn to talk about Oracle, PostgreSQL, Docker and Kubernetes, and we were quite surprised by how many people we had in the room. The room just filled up.

It seems this topic is everywhere.

And now: day two at the DOAG, let’s see what happens today.


ODA and CIS / GDPR features

Tue, 2018-11-20 10:50

We all know that security is becoming… sorry, is one of the hottest topics when setting up an IT environment. One foundation for that is being compliant with regulations or standards such as GDPR or CIS. What is less well known is that the ODA already integrates tooling to support you with that.

During this first day at DOAG 2018 I followed an interesting session by Tammy Bednar, Senior Director of Product Management for the ODA, about ODA and security.

Besides the traditional points about the integrated stack of the ODA, the SUDO configuration and the Oracle Database security options, I also heard about a nice script, available on the ODA since version 12.2.1.3, to check ODA compliance against the CIS standards.

As a reminder, the CIS (Center for Internet Security) produces security guidelines for components such as Linux, databases and much more. As a member of the CIS, dbi services offers security audits based on these guidelines (https://www.dbi-services.com/offering/services/it-security-services/).

On the ODA there is now, out of the box, a “small” Python script which allows you to check the CIS “status” at OS level for your ODA.

To do so you can simply go to /opt/oracle/oak/bin and run the cis.py script, for example:
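A minimal sketch of the invocation, assuming a default ODA installation (prompt and host name are illustrative; the script prints its compliance results to the terminal):

[root@oda ~]# cd /opt/oracle/oak/bin
[root@oda bin]# ./cis.py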

[Photo: output of the cis.py script]

Sorry, as I couldn’t take my ODA with me to Nürnberg, I only have a picture of the script so far ;-)

There are two pieces of good news when running this script on a brand new installed ODA:

  1. Out of the box, the ODA is already 41% CIS compliant, which is not bad at all.
  2. The ODA is only 41% CIS compliant, which means there is still room for improvement and some work for sysadmins like me ;-)

More seriously, a real added value of this tool is that, besides doing the compliance check, it provides a feature to fix some or all of the findings. The advantage over manual changes is that it makes sure not to touch anything the ODA relies on, so nothing breaks.

What about the database?

Of course the ODA is not only an operating system; in the end there are databases running on it. So the question is: if cis.py performs checks at OS level, what can I do at the database level?

For this, Oracle released a free (yes, free) tool called DBSAT, which stands for Database Security Assessment Tool:
https://www.oracle.com/database/technologies/security/dbsat.html

This tool runs against your database and performs CIS but also some GDPR compliance checks, producing a report. The report can be exported as JSON for activities such as cross-database checks.
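As a rough sketch, a DBSAT run consists of a collect phase and a report phase (the user, connect string and file names below are placeholders):

# 1) collect security-relevant data from the target database
./dbsat collect dbsat_user@MYDB mydb_findings

# 2) generate the report (including a JSON export) from the collected data
./dbsat report mydb_findings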

More blog posts about these tools will follow once I am back from the DOAG… but now it’s slowly time for the traditional Schweizer Abend and some partying ;-)

Cet article ODA and CIS / GDPR features est apparu en premier sur Blog dbi services.

Power BI Report Server Kerberos Setup

Mon, 2018-11-19 10:46
If you have the following configuration and requirements:

Your Power BI, paginated, mobile and KPI reports are published on your on-premises Power BI Report Server (named e.g. SRV-PBIRS), their data source is an Analysis Services instance located on another server (named e.g. SRV-SSASTAB\INST01, INST01 being the named instance), and you want to track/monitor who is accessing the data on Analysis Services, or you have row-level security constraints.

In such a case, if you configured your Analysis Services connection to use Windows integrated authentication, you have to set up Kerberos delegation from the Power BI Report Server to the Analysis Services server. If you don’t, your users will run into the famous “double-hop” issue: either they won’t be able to access the Analysis Services data at all, or you won’t be able to identify who is consuming your data on the Analysis Services side.

To set up the Kerberos delegation, follow the steps below:

1. Make sure you are a Domain Admin or have sufficient permissions to create SPNs and to change service account and/or computer settings in Active Directory.

2. On your Power BI Report Server machine, identify the service account that runs the Power BI Report Server service

(e.g. SvcAcc_PBIRS)

[Screenshot: Power BI Report Server service account in the Services console]

Note: if you did not use a domain service account, you will have to use the server (computer) name instead in the following steps.

While you are on the server, first make a backup of the rsreportserver.config configuration file and then edit it (for a default installation it is located in C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer). Add the <RSWindowsNegotiate/> element to the <AuthenticationTypes> XML node.
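The relevant part of the file should then look similar to this sketch (only the elements of interest are shown; leave the others as they are):

<Authentication>
	<AuthenticationTypes>
		<RSWindowsNegotiate/>
		<RSWindowsNTLM/>
	</AuthenticationTypes>
	...
</Authentication>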

[Screenshot: edited rsreportserver.config]

Save and close the file.

3. On your Analysis Services server, identify the service account that runs the Analysis Services service

(e.g. SvcAcc_SSASTab)

[Screenshot: Analysis Services service account in the Services console]

Note: if you did not use a domain service account, you will have to use the server (computer) name instead in the following steps.

4. Open a PowerShell console on any domain computer as your domain admin user.

Execute the following command to list the SPNs associated with your Power BI Report Server service account:

SetSpn -l SvcAcc_PBIRS

If you do not see the following entries:

HTTP/SRV-PBIRS.Domain
HTTP/SRV-PBIRS

Execute the following commands to register the HTTP SPNs for your server FQDN and NetBIOS names:

SetSpn -a http/SRV-PBIRS.Domain SvcAcc_PBIRS
SetSpn -a http/SRV-PBIRS SvcAcc_PBIRS

Note that you have to replace SRV-PBIRS.Domain with the host name of your Power BI Report Server site URL (without the virtual directory) if you defined a custom URL or an HTTPS URL with a certificate, for instance:
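A sketch with a hypothetical custom URL such as https://reports.mycompany.com/reports (host name and account name are placeholders):

SetSpn -a http/reports.mycompany.com SvcAcc_PBIRS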

Afterwards, check again that the SPNs are correctly registered (SetSpn -l SvcAcc_PBIRS).

5. In your PowerShell session, execute the following command to list the SPNs registered for your Analysis Services service account:
SetSpn -l SvcAcc_SSASTab

You should see the following entries, meaning your Analysis Services SPNs have been registered:

MSOLAPSVC.3/SRV-SSASTAB:INST01
MSOLAPSVC.3/SRV-SSASTAB.domain:INST01

If not, run the following commands:

SetSpn -a MSOLAPSVC.3/SRV-SSASTAB:INST01 SvcAcc_SSASTab
SetSpn -a MSOLAPSVC.3/SRV-SSASTAB.domain:INST01 SvcAcc_SSASTab

Furthermore, if you installed your Analysis Services as a named instance (in my example INST01), check whether SPNs have been registered for the SQL Server Browser service (the server name is used in that case because the SQL Server Browser is started with a local service account):

SetSpn -l SRV-SSASTAB

You should see the following entries:

MSOLAPDisco.3/SRV-SSASTAB
MSOLAPDisco.3/SRV-SSASTAB.domain

If not, run the following commands:

SetSpn -a MSOLAPDisco.3/SRV-SSASTAB SRV-SSASTAB
SetSpn -a MSOLAPDisco.3/SRV-SSASTAB.domain SRV-SSASTAB

6. For the next step, open the Active Directory administration console (Active Directory Users and Computers).

Open the properties of your Power BI Report Server service account. In the Account tab, uncheck “Account is sensitive and cannot be delegated”.

[Screenshot: Account tab of the service account properties]

Then, in the Delegation tab, select “Trust this user for delegation to any service”. If you have security constraints around delegation, it is recommended to use the third option (constrained delegation) and to select only the services you registered in step 5.

[Screenshot: Delegation tab of the service account properties]

7. Finally, restart your Power BI Report Server service.
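To verify that Kerberos is actually used, here is a quick sketch on a client machine with the standard Windows klist tool (SPN names as registered above):

# clear the client's Kerberos ticket cache
klist purge

# open a report in the browser, then list the cached tickets
klist

# a ticket for HTTP/SRV-PBIRS should now appear in the output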

Cet article Power BI Report Server Kerberos Setup est apparu en premier sur Blog dbi services.
