Yann Neuhaus

dbi services technical blog

What I will take away from the PGCONF.EU 2018 Lisbon

Mon, 2018-10-29 03:05

You like non-technical blogs? This blog on the pgconf.eu 2018 Lisbon is for you ;-)

On PostgreSQL

Well, you’ve already heard about PostgreSQL, haven’t you?
This is all about Relational Database Management Systems. In other words, the invisible part of IT.

PostgreSQL, or simply postgres, is a powerful, open source RDBMS with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. And you have probably already met the PostgreSQL blue elephant at least once in your life, maybe after a few beers, haven’t you? I did!


On pgconf.eu

PostgreSQL Conference Europe, PGCONF.EU, is the largest PostgreSQL conference in Europe. This year’s PGCONF.EU took place in Lisbon and was the 10th edition of this amusing meeting and cooperation event for PostgreSQL users. All editions took place in major European cities to make it easy for as many people as possible to come to the conference:

2008 ‐ Prato, Italy
2009 ‐ Paris, France
2010 ‐ Stuttgart, Germany
2011 ‐ Amsterdam, Netherlands
2012 ‐ Prague, Czech Republic
2013 ‐ Dublin, Ireland
2014 ‐ Madrid, Spain
2015 ‐ Vienna, Austria
2016 ‐ Tallinn, Estonia
2017 ‐ Warsaw, Poland

It won’t be a surprise if I tell you that the number of attendees has increased over these 10 years. But interestingly, this year’s edition reached around 450 speakers and attendees, and the registration process had to be stopped at the end of September 2018 because PGCONF.EU was sold out.

On Techies

Now you may ask: “who are the typical users of Postgres?”. “PostgreSQL users come from companies of all sizes, all over the world. Postgres is global”, says Marc Linster of EnterpriseDB (EDB). More than 200 papers were submitted for the pgconf.eu 2018 event, whereas only around 50 sessions took place, so the selection was very competitive this year.

[Photo: Marc Linster, EnterpriseDB]

At this point I have to tell you that pgconf.eu is a real techie event. Non-techies are more or less welcome ;-) The techies went as far as sending a bird into the engine of my EasyJet flight to prevent me from attending: the flight was first delayed and then cancelled on Monday night. But I really wanted to attend this major techie event, so I took a TAP flight the next morning and arrived alive! We’ll see how my journey back goes…

On dbi services at pgconf.eu

Once arrived at the Marriott Hotel Lisbon, our three-man team installed the booth and the live-demo table in Room Mediterranean on level 0, to welcome people interested in the services of the Swiss PostgreSQL specialist dbi services.

[Photo: booth and live-demo table]

Unfortunately, all sessions took place on level -1, where coffee, catering, lunch and even access to the hotel park were available. So only about a third of the attendees went upstairs and visited the sponsors’ booths. This should be better organized next time.

Apart from this issue, the organization was awesome: the welcome given to attendees and partners, the sessions, the location and the catering were great, as were the social events proposed by PostgreSQL Europe. During the event we also had the opportunity to spend some time and have excellent discussions with our partner EnterpriseDB.



Also Daniel had the opportunity to demonstrate the dbi OpenDB Appliance live to many interested people.


Not to be forgotten, we enjoyed the very nice atmosphere in the city of the great explorers (Magellan, Vasco da Gama, and many others)…

[Photo: Praça do Comércio, Lisbon]

… and had another kind of “live demo” + a drink at Museu da Farmacia


Now it is time to say thank you and goodbye pgconf.eu !


For me, there were many lessons learned at the pgconf.eu #10. One of them was how to write a blog. Many thanks to my mentor, Daniel Westermann! Also, many thanks to Hervé Schweitzer especially for the night-walk through Lisbon and the great non-alcohol cocktail!

For you, well… at least you’ve got a more precise idea of what a non technical guy attending a technical event will take away from such events. And even if this is not the best blog you’ve ever read, not to write this blog would have been… bullshit!

See you ;-)

This article, What I will take away from the PGCONF.EU 2018 Lisbon, first appeared on Blog dbi services.

SQL Server 2019 availability groups – Introduction to R/W redirection

Sun, 2018-10-28 13:02

This is a second write-up about SQL Server 2019 CTP 2.0 and the new availability group features. This time the interesting one is the READ/WRITE redirection capability. A couple of months ago, I wrote about SQL Server 2017, the new read-scale capabilities and the listener management challenges regarding the operating system. Indeed, there are some scenarios where including a listener will not be as easy as it is with common ones on top of the Windows operating system. A list of these scenarios is given in the BOL.

Without a listener, I would say that read-only connections are not a big deal, because they are supposed to work regardless of the replica’s role, either PRIMARY or SECONDARY. The game is not the same with read-write connections: they will trigger errors if they attempt to run write queries against a secondary and, guess what, this issue is addressed by the new READ/WRITE redirection capabilities of AGs in SQL Server 2019.
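To make the decision concrete, here is a small, purely illustrative Python model (not SQL Server code) of where a session ends up depending on its ApplicationIntent and on whether the primary exposes a READ_WRITE_ROUTING_URL:

```python
# Illustrative model only -- the real routing is done by the AG engine.
def route(contacted_role, intent="ReadWrite", rw_url_configured=False):
    """Role of the replica that finally serves the session."""
    if contacted_role == "SECONDARY" and intent == "ReadWrite":
        # SQL Server 2019: redirect to the primary if it exposes a
        # READ_WRITE_ROUTING_URL; otherwise write statements would fail
        # on the read-only secondary.
        return "PRIMARY" if rw_url_configured else "WRITES_FAIL"
    return contacted_role

print(route("SECONDARY", "ReadWrite", rw_url_configured=True))  # PRIMARY
print(route("SECONDARY", "ReadOnly"))                           # SECONDARY
```

The function names and return values are my own shorthand for the behaviour described in this post, not an API.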

Let’s set the context of my lab environment:

[Figure: lab environment with R/W redirection]


This is a pretty simple environment that includes 2 replicas in synchronous mode. Automatic failover is obviously not available in read scale topology.

My first test was to attempt a R/W client connection to the secondary to see if the redirection to the primary applied. My configuration script includes the new READ_WRITE_ROUTING_URL required for the redirection to the primary.

-- Reconstructed snippet: the original listing was truncated. The AG name,
-- CLUSTER_TYPE and availability/failover options are assumptions based on the text.
CREATE AVAILABILITY GROUP [AG2019]
WITH (CLUSTER_TYPE = NONE)
FOR DATABASE [AdventureWorks2016]
REPLICA ON
'WIN20161\SQL2019CTP2' WITH (
	ENDPOINT_URL = 'TCP://WIN20161.dbi-services.test:5026',
	AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
	-- Secondary role => we need to allow all connection types (R/O + R/W)
	SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL, READ_ONLY_ROUTING_URL = 'TCP://WIN20161.dbi-services.test:1459'),
	PRIMARY_ROLE (READ_WRITE_ROUTING_URL = 'TCP://WIN20161.dbi-services.test:1459')
),
'WIN20162\SQL2019CTP2' WITH (
	ENDPOINT_URL = 'TCP://WIN20162.dbi-services.test:5026',
	AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
	SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL, READ_ONLY_ROUTING_URL = 'TCP://WIN20162.dbi-services.test:1459'),
	PRIMARY_ROLE (READ_WRITE_ROUTING_URL = 'TCP://WIN20162.dbi-services.test:1459')
);

My AG current state is as follows and meets all R/W redirection prerequisites: the WIN20162\SQL2019CTP2 secondary replica is online and the primary replica includes READ_WRITE_ROUTING_URL as well.

[Figure: AG configuration]

We may notice new columns in the sys.availability_replicas view shipped with this SQL Server version, including the R/W URL settings and the primary connection mode.

Let’s try a R/W connection to the secondary replica WIN20162\SQL2019CTP2 with the following connection string. The ApplicationIntent parameter is not specified, meaning the connection will use the default ReadWrite intent.

$connectionString = "Server=WIN20162\SQL2019CTP2; Integrated Security=False; uid=sa; pwd=xxxx; Initial Catalog=AdventureWorks2016;"


I got the expected result: my connection is transparently redirected to the primary replica WIN20161\SQL2019CTP2, as shown below:

[Figure: R/W redirection test result]

Let’s switch my connection intent to read-only (connection parameter ApplicationIntent=ReadOnly):

$connectionString = "Server=WIN20162\SQL2019CTP2; Integrated Security=False; uid=sa; pwd=xxxx; Initial Catalog=AdventureWorks2016; ApplicationIntent=ReadOnly"


The connection stays on the read-only secondary, in accordance with my AG configuration:

[Figure: R/O connection staying on the secondary]

This new capability did the expected job. Obviously, if you miss some prerequisites the redirection will not occur, but at the time of this write-up I didn’t find any “obvious” extended event to troubleshoot R/W routing events, as we may already use for R/O routing. Probably in the next CTP…

In a nutshell, this new feature concerns R/W redirection from a secondary to a primary replica. But out of curiosity, I tried to perform the same test for R/O redirection with ApplicationIntent=ReadOnly, without luck.

$connectionString = "Server=WIN20161\SQL2019CTP2; Integrated Security=False; uid=sa; pwd=xxxx; Initial Catalog=AdventureWorks2016; ApplicationIntent=ReadOnly"

[Figure: R/O redirection attempt without a listener]

To confirm my R/O topology is well configured, I implemented an AG listener as I did in a previous blog post about AG read-scale scenarios. For this test, I replaced the server value with the listener name in the following connection string. My AG listener is listening on port 1459…

$connectionString = "Server=lst-ag2019,1459; Integrated Security=False; uid=sa; pwd=xxxx; Initial Catalog=AdventureWorks2016; ApplicationIntent=ReadOnly"


… and it ran successfully, as shown below. My connection is correctly redirected from the primary WIN20161\SQL2019CTP2 to the secondary WIN20162\SQL2019CTP2:

[Figure: R/O redirection through the listener]

Well, it seems that R/O redirection without a listener is not implemented yet, but it is probably out of the scope of the SQL Server 2019 AG redirection capabilities.

See you!


Reimaging an old X3-2/X4-2 ODA

Fri, 2018-10-26 10:30

X3-2 and X4-2 ODAs are still very capable pieces of hardware in 2018. With 256GB of RAM and at least 16 cores per node, and with 18TB of RAW disk capacity as a standard, these appliances are far from obsolete, even if you probably no longer have hardware support from Oracle.
If you own several ODAs of this kind, hardware support may not really be a problem. If something fails, you can use the other ODA for spare parts.

You probably missed some patches on your old ODAs. Why? Maybe because it’s not so easy to patch, and it’s even more difficult if you don’t patch regularly. Or maybe just because you don’t want to add more tasks to your job (applying every patch feels like never-ending patching).

So if you want to give a second life to your ODA, you’d better reimage it.

Reimaging: how to do it?

Reimaging is the best way to clean up your ODA. Current deployment packages are certified for all ODAs except the V1 (the first generation, before the X3-2).

You first have to download all the needed files from MOS. Make sure you download the deployment packages for the OAKCLI stack, because ODACLI is limited to the lite and newer ODAs.

Assuming you’re using a bare-metal configuration and you want to deploy the latest ODA version, you will need the following files:

  • 12999313: ISO for reimaging
  • 28216780: patch for the OAKCLI stack (because reimaging actually does not update BIOSes and firmwares)
  • 12978712: appliance server for the OAKCLI stack
  • 17770873, 19520042 and 27449599: rdbms clones for the databases

The network and disk configuration didn’t change: you still need to provide all the IPs, VIPs, DNS and so on for the network, and the disk configuration is still not so clear: choosing external backup means that you will get an 85/15 split between DATA and RECO instead of the default 40/60 split. Don’t forget that you can change the redundancy level for each ASM diskgroup: DATA, RECO and REDO can use high redundancy, but normal redundancy will give you 50% more free space (18TB RAW is 9TB usable in normal redundancy and 6TB usable in high redundancy).
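The capacity figures above can be checked quickly: ASM normal redundancy stores 2 copies of each extent and high redundancy stores 3, so:

```python
# ASM usable capacity: normal redundancy stores 2 copies of each extent,
# high redundancy stores 3.
raw_tb = 18
usable_normal = raw_tb / 2  # 9.0 TB
usable_high = raw_tb / 3    # 6.0 TB

# normal redundancy gives 50% more usable space than high redundancy
print(usable_normal, usable_high, usable_normal / usable_high - 1)  # 9.0 6.0 0.5
```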

Step 1 – Connect the ISO as a CDROM through ILOM interface and reimage the servers

I won’t give you the extensive procedure for this part: nothing has changed regarding ODA reimaging during the last years.

The first step is to connect to the ILOM and virtually plug the ISO image into the server. Then select the CDROM as the next boot device and power cycle the server. You’ll have to repeat this on the other node too. Reimaging lasts about 1h and is fully automatic. The last step is still the longest one (the post-installation procedure). Once the reimaging is done, each node should have a different default name: oak1 for node 0 and oak2 for node 1 (weird). If both nodes are named oak1, please check the cables connected to the shared storage: they must be connected according to the setup poster.

Step 2 – Configure basic network settings

Reimaging always ends with a reboot and, depending on the appliance, it will ask you which kind of network you plan to use: Copper or Fiber. Then, through the ILOM, you need to launch the configure firstnet script:

/opt/oracle/oak/bin/oakcli configure firstnet

Repeat this configuration step on the second node. Now your nodes are visible through the network.

Step 3 – Deploy, cleanup, deploy…

Reimaging was so easy… but from now on it will be a little more tricky. You now need to deploy the appliance, that is: configure the complete network settings, install the whole Oracle stack with Grid Infrastructure, ASM and the latest database engine, and eventually create a first database. You will need a graphical interface to configure all these parameters and launch the deployment. So, from the ILOM session, let’s unpack the necessary files, start a graphical Linux session and launch the deployment GUI.

oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_1of2.zip
oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_2of2.zip
oakcli unpack -package /opt/dbi/p27449599_122140_Linux-x86-64.zip
oakcli deploy

The graphical interface will help you configure all the parameters, but don’t deploy straight away. Back up the configuration file and then edit it:

vi /opt/dbi/deploy_oda01

Review all the parameters and adjust them to perfectly match your needs (most of these parameters cannot be changed afterwards).

Now you can launch the real deployment and select your configuration file in the graphical interface:

oakcli deploy

The first try will fail, and it’s normal behaviour. The failure is caused by the ASM headers: they are still written on the disks in the storage shelf, as reimaging did nothing on these disks. Already having configured ASM disks makes the deployment process fail. Now you can exit the deployment and do a cleanup of the failed attempt.


Unfortunately you cannot do the cleanup if nothing is already deployed, so you need this first failing attempt. Alternatively, you can do the cleanup before reimaging, or manually clean all the disk headers and partitions on the 20 disks before trying to deploy (with a dd), but it probably won’t be faster.
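As a sketch of that manual dd-style cleanup (illustrative only: the device paths on a real ODA differ, you would loop over the 20 shared disks as root, and this is destructive), zeroing the first megabytes of a disk removes the old ASM header. The demo below runs against a scratch file instead of a real device:

```python
import os

def wipe_header(path, megabytes=100):
    """Zero the first <megabytes> MB of <path>, like
    dd if=/dev/zero of=<path> bs=1M count=<megabytes>."""
    zeros = b"\x00" * (1024 * 1024)
    with open(path, "r+b") as disk:
        for _ in range(megabytes):
            disk.write(zeros)

# Demo on a scratch file instead of a real /dev device:
demo = "/tmp/fake_disk"
with open(demo, "wb") as f:
    f.write(b"ORCLDISK" + os.urandom(4096))  # pretend ASM disk header
wipe_header(demo, megabytes=1)
with open(demo, "rb") as f:
    print(f.read(8))  # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```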

When the cleanup is done, the ODA will reboot and you’ll have to configure the firstnet again from the ILOM on both nodes.

/opt/oracle/oak/bin/oakcli configure firstnet

Finally, with a new graphical session, you can restart the deployment and, this time, if your parameter file is OK, it will be successful. Yes!

oakcli deploy

Step 4 – Patch the server

It seems weird, but reimaging actually doesn’t update the firmware, BIOS or ILOM of the servers, nor the firmware of the disks in the storage shelf: reimaging is only a software reimaging of the nodes. This is an example of an ODA X4-2 configuration just after reimaging and deploying the appliance:

oakcli show version -detail
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
Controller_INT                 Up-to-date
Controller_EXT                 Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A720                      A7E0
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
ILOM             r101689 r123795
BIOS                      25030100                  25060300
IPMI                              Up-to-date
HMP                             Up-to-date
OAK                             Up-to-date
OL                        6.9                       Up-to-date
GI_HOME               Up-to-date
DB_HOME               Up-to-date

Fortunately, you can apply the patch even if your ODA is already in the same software version as the patch. Well done Oracle.

So let’s register the patch files and do the patching of the servers (the server will probably reboot):

oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_1of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_2of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_3of3.zip
oakcli update -patch --server

oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
Controller_INT                 Up-to-date
Controller_EXT                 Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
ILOM             r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                              Up-to-date
HMP                             Up-to-date
OAK                             Up-to-date
OL                        6.9                       Up-to-date
GI_HOME               Up-to-date
DB_HOME               Up-to-date

Great, our servers are now up-to-date. But storage is still not OK.

Step 5 – Patch the storage

Patching the storage is quite easy (the server will probably reboot):

oakcli update -patch --storage

oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
Controller_INT                 Up-to-date
Controller_EXT                 Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
HDD_SHARED                A7E0                      Up-to-date
ILOM             r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                              Up-to-date
HMP                             Up-to-date
OAK                             Up-to-date
OL                        6.9                       Up-to-date
GI_HOME               Up-to-date
DB_HOME               Up-to-date

Everything is OK now!

Conclusion – A few more things
  • When redeploying, consider changing the redundancy of the diskgroups and the partitioning of the disks if needed. This can only be configured during deployment. The disk parameters are located in the deployment file (DISKGROUPREDUNDANCYS and DBBackupType)
  • Always check that all the components are up-to-date to keep your ODA in a consistent state. Check on both nodes, because local patching is also possible, and it makes no sense to run the nodes at different patch levels
  • Don’t forget to check/apply your licenses on your ODA, because using Oracle software is for sure not free
  • You have to know that a freshly redeployed ODA will have 12.2 database compatibility on its diskgroups, making the use of acfs mandatory for your old databases. For me it’s a real drawback, considering that acfs adds useless complexity to ASM
  • Don’t forget to deploy the other dbhomes according to your needs


Short summary of my PGCONF.EU 2018 conference

Fri, 2018-10-26 10:20

So it’s nearly the end of the conference, where I met lots of people from the Postgres community, customers, our EDB partner and also contributors/developers of Postgres. During the 3 days I also had the chance to follow many technical sessions, and I will give you a short feedback on my preferred ones.
If you want the details of the sessions, go to the conference website: https://www.postgresql.eu/events/pgconfeu2018/schedule


Tour de Data Types: VARCHAR2 or CHAR(255)?

Very interesting session by Andreas Scherbaum, where he presented the most interesting data types with use cases and examples.

What’s new in PostgreSQL 11

Magnus is a well-known presenter, and he made a good and funny presentation of the Postgres 11 new features.

CREATE STATISTICS – what is it for?

For me it was one of the best presentations: not only did it deliver a lot of information, the quality of the presentation was just perfect. I directly downloaded the slides and added them to my knowledge book: https://github.com/tvondra/create-statistics-talk

Ace it with ACID: PostgreSQL transactions for fun and profit

The presentation was very good, especially if you want to trust your database.

An answer to PostgreSQL bloat woes

Thanks Amit for the presentation of the new storage engine ZHEAP and for sharing the information. Directly after the session, Daniel and I started to test this new storage engine. If you want to know more about this topic, see my blog about ZHEAP.


It’s nice to know which new features in optimizer tuning will come out over the next years (FEEDBACK, ADVISOR, AUTONOMOUS, etc.). I’m waiting impatiently to get access to the project, to be able to test these new features.

See you next year !


How to change oam-config.xml to adapt to Oracle Access Manager configuration change

Fri, 2018-10-26 10:18

After upgrading the Oracle Access Manager, I extended the WebLogic Domain with the User Messaging Services to be able to use the Forgot Password feature.
The Oracle Support note ID 2302623.1 gives a good example of how to do it:
Oracle Access Manager 12cps3 OAM Sample Application Demonstrates Forgot Password Flow Using Multi Factor Authentication REST APIs (Doc ID 2302623.1).

But the OAM AdaptiveAuthenticatorPlugin was missing the UmsClientUrl property, and without it there is no way to send mails or SMS with the security token to the user requesting the reset of his password.

I decided to modify the oam-config.xml file with the missing property.
During the modification, I updated the Version in the oam-config.xml to inform OAM about the configuration change.
What was my surprise when I saw that my modifications were reset after the Administration Server was restarted.

This has changed in this OAM release: the oam-config is now stored in the metadata database and needs to be exported,
modified and re-imported back to the database. The steps are described in the documentation (here).

I then had to follow the steps shown below.
First, set JAVA_HOME and PATH to point to the right Java version:

$ export JAVA_HOME=/u00/app/oracle/product/Java/jdk
$ export PATH=$JAVA_HOME/bin:$PATH

The export requires a properties file defining the connection to the OAM repository:

$ more dbschema.properties
[oracle@vm03 OAM_CONFIG]$

Export the oam-config.xml file:

$ java -cp /u01/app/fmw_oim_12213/idm/oam/server/tools/config-utility/config-utility.jar:/u01/app/fmw_oim_12213/oracle_common/modules/oracle.jdbc/ojdbc8.jar oracle.security.am.migrate.main.ConfigCommand /u02/app/config/domains/idm_domain/ export dbschema.properties
Oct 15, 2018 6:40:44 PM oracle.security.am.migrate.main.command.CommandFactory getCommand
INFO: executable operation: export

This exports the oam-config.xml file into the local directory. Modify this file and import it back to the DB:

$ java -cp /u01/app/fmw_oim_12213/idm/oam/server/tools/config-utility/config-utility.jar:/u01/app/fmw_oim_12213/oracle_common/modules/oracle.jdbc/ojdbc8.jar oracle.security.am.migrate.main.ConfigCommand /u02/app/config/domains/idm_domain/ import dbschema.properties
Oct 15, 2018 6:43:25 PM oracle.security.am.migrate.main.command.CommandFactory getCommand
INFO: executable operation: import
Oct 15, 2018 6:43:27 PM oracle.security.am.migrate.util.ConfigFileUtil replaceValue
INFO: 191 will be replaced by 192
Oct 15, 2018 6:43:28 PM oracle.security.am.migrate.operation.ImportConfigOperation invoke
INFO: imported config file version to database:192

During the import, the version is incremented automatically. Take care not to have typos in the oam-config.xml file you import: I’m not sure there is a validation before the import, and the OAM schema could get corrupted.


Node Manager not starting after using unpack to install a WebLogic domain on a remote machine.

Fri, 2018-10-26 10:14

I created a domain using the config.sh script, spanning several hosts. The pack and unpack commands have been used to install the domain on the remote servers.
The pack command:

cd $FMW_HOME/oracle_common/common/bin
./pack.sh -domain=/u02/config/domains/workshop_domain \
-template=/home/weblogic/workshop/lab_DomainCreation/workshop_template.jar \
-template_name=workshop_template

The newly created jar file was copied to the remote server and the unpack command run:

cd $FMW_HOME/oracle_common/common/bin
./unpack.sh -domain=/u02/config/domains/workshop_domain
Starting the node manager failed because the DemoIdentity Java KeyStore file was missing. This file was always generated in previous WebLogic software versions.

Errors from the node manager log file:

weblogic.nodemanager.common.ConfigException: Identity key store file not found: /u02/config/domains/workshop_domain/security/DemoIdentity.jks
at weblogic.nodemanager.server.SSLConfig.loadKeyStoreConfig(SSLConfig.java:225)
at weblogic.nodemanager.server.SSLConfig.access$000(SSLConfig.java:33)
at weblogic.nodemanager.server.SSLConfig$1.run(SSLConfig.java:118)
at java.security.AccessController.doPrivileged(Native Method)
at weblogic.nodemanager.server.SSLConfig.(SSLConfig.java:115)
at weblogic.nodemanager.server.NMServer.(NMServer.java:169)
at weblogic.nodemanager.server.NMServer.getInstance(NMServer.java:134)
at weblogic.nodemanager.server.NMServer.main(NMServer.java:589)
at weblogic.NodeManager.main(NodeManager.java:31)

Starting from this WebLogic version, the unpack command no longer generates the DemoIdentity.jks keystore file. The DemoIdentity JKS file needs to be created manually, or the Node Manager changed to non-SSL. After making sure the java in the PATH is the one used by the WebLogic Domain, run:

cd /u02/config/domains/workshop_domain/security
java utils.CertGen -certfile democert -keyfile demokey -keyfilepass DemoIdentityPassPhrase -noskid
java utils.ImportPrivateKey -certfile democert.pem -keyfile demokey.pem -keyfilepass DemoIdentityPassPhrase -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -alias demoidentity
After this the Node Manager can be started successfully.


Deep dive Postgres at the #pgconfeu conference

Fri, 2018-10-26 03:37

Today I followed many good technical sessions at the European Postgres conference. The Postgres conferences are really technically oriented; you will find no marketing sessions there, and you learn a lot of things.
As promised yesterday, today I wrote my first blog about the new Postgres storage engine ZHEAP/UNDO, which is a very interesting feature with very interesting results.

Before you continue reading this blog: if you didn’t read my blog from yesterday, read it first :-) link

First test : table creation

We create 2 tables, one with the default Postgres storage engine HEAP, and one with the new storage engine ZHEAP.

PSQL> create table heap2 as select a.*, md5(a::varchar), now() from generate_series(1,5000000) a;
SELECT 5000000
Time: 12819.369 ms (00:12.819)

PSQL> create table zheap2  with (storage_engine='zheap') as select a.*, md5(a::varchar), now() from generate_series(1,5000000) a;
SELECT 5000000
Time: 19155.004 ms (00:19.155)

You noticed that with Postgres you can choose your storage engine at table level :-). The table creation with ZHEAP is slower, but this is normal, because now the UNDO segments have to be created as well.

Second test : Size of the both tables

Before starting the tests, we will check the size of the HEAP and ZHEAP tables. As announced yesterday, the ZHEAP table should be smaller, because we have less header information.

PSQL>  select pg_size_pretty(pg_relation_size('heap2'));
 365 MB
PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 289 MB

The ZHEAP table is smaller; this is exactly what Amit explained to us yesterday: the block header with ZHEAP is smaller. If you want to learn more, read his presentation from yesterday. Again, the link is in my blog from yesterday.
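A back-of-the-envelope calculation makes the difference plausible. Assuming the commonly quoted per-tuple header sizes (about 24 bytes for heap and about 5 bytes for zheap; these figures are an assumption taken from the zheap design discussions, not from this test):

```python
rows = 5_000_000
heap_header, zheap_header = 24, 5  # bytes per tuple header (assumed figures)

saved_mb = rows * (heap_header - zheap_header) / 1024**2
print(round(saved_mb))  # 91
```

That is the same order of magnitude as the 365 MB vs 289 MB measured here; alignment padding and per-page overhead probably account for the rest of the gap.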

Third test : Update on the table

To see the bloat effect on the HEAP table, we will now update the full table and see what happens.

PSQL> update heap2 set a=a+12222222;
UPDATE 5000000
Time: 19834.911 ms (00:19.835)

PSQL> update zheap2 set a=a+12222222;
UPDATE 5000000
Time: 26956.043 ms (00:26.956)

PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 289 MB
PSQL> vacuum heap2;
PSQL> select pg_size_pretty(pg_relation_size('heap2'));
 730 MB

As for the creation, the update is a bit slower, but the update with ZHEAP writes a lot of information into the log file about creating undo segments. We should test this update again with that logging disabled.
But as you can see, the most important information here is that the ZHEAP table doesn’t bloat like the HEAP table: the HEAP table is now 2 times bigger, even though I executed a VACUUM.

Fourth test: test of the ROLLBACK

To test the ROLLBACK, we first have to open a transaction with BEGIN;

PSQL>  begin;
PSQL>* update heap2 set a=a+12222222;
UPDATE 5000000
Time: 22071.462 ms (00:22.071)
PSQL> * rollback;
Time: 1.437 ms

PSQL> begin;
PSQL> * update zheap2 set a=a+12222222;
UPDATE 5000000
Time: 28210.845 ms (00:28.211)
PSQL> * rollback;
Time: 0.567 ms

This is the part that surprised me the most: the ROLLBACK for ZHEAP is as fast as for HEAP, and I can’t explain that. I will leave my colleague Daniel Westermann to make deeper tests :-), because with ZHEAP the engine has to apply the undo records, whereas with HEAP tables the transaction is only marked as aborted.
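One possible explanation (an assumption on my side, not a statement about the actual implementation): an undo-based engine can acknowledge the ROLLBACK immediately and apply the undo records later in the background, just as the heap engine only flips the transaction status and leaves dead tuples to VACUUM. A toy cost model:

```python
def rollback_cost(modified_rows, engine="heap", background_undo=True):
    """Work done before ROLLBACK returns, in 'operations' (toy model)."""
    if engine == "heap":
        return 1  # mark the transaction aborted; VACUUM cleans up later
    # undo-based engine (e.g. zheap): either apply the undo now, or hand
    # it to a background worker and return immediately
    return 1 if background_undo else modified_rows

print(rollback_cost(5_000_000, "heap"))                          # 1
print(rollback_cost(5_000_000, "zheap"))                         # 1
print(rollback_cost(5_000_000, "zheap", background_undo=False))  # 5000000
```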

Fifth tests : Check of the query performances

For this test we first flush the filesystem cache and restart the database, to be sure that nothing is cached.

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] pgstop 
waiting for server to shut down.... done
server stopped

postgres@dbi-pg-tun:/home/postgres/ [ZHEAP] sudo sync
postgres@dbi-pg-tun:/home/postgres/ [ZHEAP] echo 3 | sudo tee /proc/sys/vm/drop_caches

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] pgstart
waiting for server to start.... done
server started

Now we are ready for the last test

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] sqh
PSQL> select count(*) from heap2;
Time: 3444.869 ms (00:03.445)

PSQL> select count(*) from zheap2;
Time: 593.894 ms

As you can see, the query performance is improved significantly for full table scans :-), because the table didn’t bloat like the HEAP table. For your information, I additionally ran the full update 2 more times before restarting the database, and the HEAP table is now 3 times bigger.

PSQL> select pg_size_pretty(pg_relation_size('heap2'));
 1095 MB

Time: 0.508 ms
PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 289 MB
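From the timings measured above, the full-scan speedup on the non-bloated zheap table can be quantified:

```python
# Timings from the count(*) runs above, in milliseconds
heap_ms, zheap_ms = 3444.869, 593.894

print(round(heap_ms / zheap_ms, 1))  # 5.8
```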
Conclusion of these tests
  • Postgres allows choosing, at the table level, whether a table uses UNDO or not
  • We are surprised how fast the ROLLBACKs are; this must be tested again, as I don’t understand it yet
  • Select performance is improved significantly for full table scans :-)
  • The storage will not bloat anymore with ZHEAP
  • Finally, only the updates are a little bit slower

It will be interesting to follow the discussions around this feature on the mailing list.

The post Deep dive Postgres at the #pgconfeu conference appeared first on Blog dbi services.

Deploy DC/OS using Ansible (Part 2) – Playbooks

Fri, 2018-10-26 02:02

Finally, after all the configuration stuff is done, we can run the playbooks

Create SSH Access

First, the SSH access on all nodes needs to be created. For this the access-onprem.yml playbook is used.
Be careful: I use CentOS on my systems, so I commented out the apt-get (Debian-based) tasks.
If you want to run the playbook on another operating system, adjust it carefully.

# This playbook enables access to all ansible targets via ssh

- name: setup the ansible requirements on all nodes
  hosts: all:!localhost
  #hosts: all
  serial: 20
  remote_user: "{{ initial_remote_user | default('root') }}"
  become: true

  tasks:
#    - name: attempt to update apt's cache
#      raw: test -e /usr/bin/apt-get && apt-get update
#      ignore_errors: yes

#    - name: attempt to install Python on Debian-based systems
#      raw: test -e /usr/bin/apt-get && apt-get -y install python-simplejson python
#      ignore_errors: yes

    - name: attempt to install Python on CentOS-based systems
      raw: test -e /usr/bin/yum && yum -y install python-simplejson python
      ignore_errors: yes

    - name: Create admin user group
      group:
        name: admin
        system: yes
        state: present

    - name: Ensure sudo is installed
      package:
        name: sudo
        state: present

    - name: Remove user centos
      user:
        name: centos
        state: absent
        remove: yes

    - name: Create Ansible user
      user:
        name: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        shell: /bin/bash
        comment: "Ansible management user"
        home: "/home/{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        createhome: yes
        password: "admin123"

    - name: Add Ansible user to admin group
      user:
        name: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        groups: admin
        append: yes

    - name: Add authorized key
      authorized_key:
        user: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        state: present
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/ansible-dcos.pub') }}"

    - name: Copy sudoers file
      command: cp -f /etc/sudoers /etc/sudoers.tmp

    - name: Backup sudoers file
      command: cp -f /etc/sudoers /etc/sudoers.bak

    - name: Ensure admin group can sudo
      lineinfile:
        dest: /etc/sudoers.tmp
        state: present
        regexp: '^%admin'
        line: '%admin ALL=(ALL) NOPASSWD: ALL'
      when: ansible_os_family == 'Debian'

    - name: Ensure admin group can sudo
      lineinfile:
        dest: /etc/sudoers.tmp
        state: present
        regexp: '^%admin'
        insertafter: '^root'
        line: '%admin ALL=(ALL) NOPASSWD: ALL'
      when: ansible_os_family == 'RedHat'

    - name: Replace sudoers file
      shell: visudo -q -c -f /etc/sudoers.tmp && cp -f /etc/sudoers.tmp /etc/sudoers

    - name: Test Ansible user's access
      local_action: "shell ssh {{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}@{{ ansible_host }} 'sudo echo success'"
      become: False
      register: ansible_success

    - name: Remove Ansible SSH key from bootstrap user's authorized keys
      lineinfile:
        path: "{{ ansible_env.HOME }}/.ssh/authorized_keys"
        state: absent
        regexp: '^ssh-rsa AAAAB3N'
      when: ansible_success.stdout == "success"
Start the Playbook for the SSH access
[root@dcos-ansible ansible-dcos]# pwd

[root@dcos-ansible ansible-dcos]# ansible-playbook plays/access-onprem.yml
PLAY [setup the ansible requirements on all nodes] *********************

TASK [Gathering Facts] *************************************************
ok: [<node-1>]
ok: [<node-2>]
ok: [<node-3>]
ok: [<node-4>]
ok: [<node-5>]

PLAY RECAP *************************************************************
<node-1>              : ok=14   changed=6    unreachable=0    failed=0
<node-2>              : ok=14   changed=6    unreachable=0    failed=0
<node-3>              : ok=14   changed=6    unreachable=0    failed=0
<node-4>              : ok=14   changed=6    unreachable=0    failed=0
<node-5>              : ok=14   changed=6    unreachable=0    failed=0

This is not the whole output of the playbook. Important to know: during the "TASK [Test Ansible user's access]" I had to enter the Ansible password five times, once per node. After that the playbook finished successfully.

Ping the servers using Ansible

After the playbook has finished successfully, do a test ping:

[root@dcos-ansible ansible-dcos]# ansible all -m ping
<node-1> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
<node-2> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
<node-3> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
<node-4> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
<node-5> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

In case of trouble it is really helpful to use the “-vvv” option.
It is also possible to ping only one server using

ansible <hostname> -m ping
Rollout the DC/OS installation
[root@dcos-ansible ansible-dcos]# pwd
[root@dcos-ansible ansible-dcos]# cat plays/install.yml
- name: setup the system requirements on all nodes
  hosts: all
  serial: 20
  become: true
  roles:
    - common
    - docker

- name: generate the DC/OS configuration
  hosts: bootstraps
  serial: 1
  become: true
  roles:
    - bootstrap

- name: deploy nodes
  hosts: [ masters, agents, agent_publics ]
  serial: 20
  become: true
  roles:
    - node-install

[root@dcos-ansible ansible-dcos]# pwd
[root@dcos-ansible ansible-dcos]# ansible-playbook plays/install.yml

PLAY [setup the system requirements on all nodes]

TASK [Gathering Facts]
ok: [<node-1>]
ok: [<node-2>]
ok: [<node-3>]
ok: [<node-4>]

In case some installation steps fail, Ansible will skip that server and give you the opportunity to rerun the playbook on the failed servers only:

ansible-playbook plays/install.yml --limit @/root/ansible-dcos/plays/install.retry

If you cannot connect to your master via browser, check /var/log/messages for error messages. In my case the master searched for the eth0 interface, which isn't available on my VM.
Just change the detect_ip script as follows, according to your network interface. The same step is needed on all agent nodes as well.

[root@dcos-master bin]# cat /opt/mesosphere/bin/detect_ip
#!/usr/bin/env bash
set -o nounset -o errexit
export PATH=/usr/sbin:/usr/bin:$PATH
echo $(ip addr show enp0s8 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
Install the CLI

For those of you who prefer a CLI, just install it on the master.

[root@dcos-master ~]#  [ -d /usr/local/bin ] || sudo mkdir -p /usr/local/bin
[root@dcos-master ~]# curl https://downloads.dcos.io/binaries/cli/linux/x86-64/dcos-1.11/dcos -o dcos
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.9M  100 13.9M    0     0  1313k      0  0:00:10  0:00:10 --:--:-- 3920k
[root@dcos-master ~]# sudo mv dcos /usr/local/bin
[root@dcos-master ~]# chmod +x /usr/local/bin/dcos
[root@dcos-master ~]# dcos cluster setup
If your browser didn't open, please go to the following link:

Enter OpenID Connect ID Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik9UQkVOakZFTWtWQ09VRTRPRVpGTlRNMFJrWXlRa015Tnprd1JrSkVRemRCTWpBM1FqYzVOZyJ9.eyJlbWFpbCI6Imp1bGlhLmd1Z2VsQGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwczovL2Rjb3MuYXV0aDAuY29tLyIsInN1YiI6Imdvb2dsZS1vYXV0aDJ8MTA2NTU2OTI5OTM1NTc2MzQ1OTEyIiwiYXVkIjoiM3lGNVRPU3pkbEk0NVExeHNweHplb0dCZTlmTnhtOW0iLCJpYXQiOjE1NDA0NTA4MTcsImV4cCI6MTU0MDg4MjgxN30.M8d6dT4QNsBmUXbAH8B58K6Q2XvnCKnEd_yziiijBXHdW18P2OnJEYrKa9ewvOfFhyisvLa7XMU3xeBUhoqX5T6mGkQo_XUlxXM82Ohv3zNCdqyNCwPwoniX4vU7R736blcLRx1aB8TJnydNb0H0IzEAVzaYBQ1CRV-4a9KsiMXKBBPlskOSvek4b_FRghA6hsjMA2eO-G5r3B6UgHo6CCwdwVrhsOygvJ5NwDC0xiFrnkW-SjZRZztCN8cRj7b40VH43uY6R2ibxJfE7SaGpbWzLyp7juUJ766WXar3O7ww42bYIqLnAx6YmWG5kFeJnmJGT-Rdmhl2JuvdABoozA

That's it, now you can configure and use your DC/OS. Always keep in mind: the ntpd service is really essential for a working DC/OS node. Also use /var/log/messages, it really helps!
One little thing I have to mention at the end: don't put too much trust in the official documentation and the troubleshooting guide, they did not help as much as expected…

The post Deploy DC/OS using Ansible (Part 2) – Playbooks appeared first on Blog dbi services.

Deploy DC/OS using Ansible (Part 1) – Getting Started

Fri, 2018-10-26 02:01

To get started with this topic I want to briefly explain some basics. Afterwards I will show you how to prepare the configuration files.


Ansible is an open source automation utility. It is used for orchestration and configuration as well as for the administration of PCs and servers. You could say: okay, but we have Puppet, SaltStack or another framework, why should I use Ansible? Ansible differs! It needs no agent installed on the managed systems, just a working SSH connection and a Python installation. For deploying changes you simply write an Ansible playbook, a plain YAML file. For further information about Ansible just visit the Ansible homepage.


Mesosphere’s DC/OS is a distributed operating system based on Apache Mesos (read more). It gives you the possibility to manage multiple machines as if they were one. Resource management, process placement scheduling, simplified installations and the management of distributed services can be automated using DC/OS. DC/OS comes with a web interface as well as a command-line interface, both of which can be used for monitoring and remote management.
DC/OS can be used as a cluster manager, a container platform and an operating system. Quite a mighty tool. Explaining all its functionality would go too far here.

To set up a minimal DC/OS using Ansible you need at least six servers:
– one Ansible
– one Bootstrap
– one Master
– two private Agents
– one public Agent

Bootstrap Node

In general, the bootstrap node is the essential one when you spin up the cluster. It is used as the staging location for the software installation and stores the DC/OS configuration as well as the DC/OS bootstrap files.

Master Node

The DC/OS master manages "the rest" of the cluster. It is possible to run one or more master nodes. They contain most of the DC/OS components, including the Mesos master process. The master also serves the web interface, which gives a nice graphical view of the DC/OS cluster.

Private Agent Node

The private agents do not allow access from outside the cluster. They provide resources to the cluster.

Public Agent Node

The public agent node is a node on the network that allows access from outside of the DC/OS cluster. The public agent is primarily used as a kind of load balancer, to reduce the surface that could be reached by attackers.
In a cluster you need fewer public agent nodes than private agent nodes, as a few of them can handle multiple agent services.

I just described the node components used for the installation. If you want more and deeper insights into DC/OS and its architecture, you can find detailed documentation on the Mesosphere homepage.


For the minimal installation of this DC/OS you need six servers,
each of them with a public and a private IP, except the Ansible server.


To install DC/OS using Ansible I used the playbooks from GitHub. But as usual there is some specific stuff to take care of when you test it in your own environment.

Prepare the Ansible server
Install git and get ansible-dcos from GitHub
[root@dcos-ansible ~]# yum install git -y

[root@dcos-ansible ~]# git clone https://github.com/dcos-labs/ansible-dcos
Cloning into 'ansible-dcos'...
remote: Enumerating objects: 69, done.
remote: Counting objects: 100% (69/69), done.
remote: Compressing objects: 100% (48/48), done.
remote: Total 1957 (delta 25), reused 42 (delta 15), pack-reused 1888
Receiving objects: 100% (1957/1957), 312.95 KiB | 0 bytes/s, done.
Resolving deltas: 100% (982/982), done.

[root@dcos-ansible ~]# cd ansible-dcos/
[root@dcos-ansible ansible-dcos]# git tag
[root@dcos-ansible ansible-dcos]# git checkout v0.7.0-dcos-1.11
Note: checking out 'v0.7.0-dcos-1.11'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 1f2cf7d... Prepare version v0.7.0-dcos-1.11
Install ansible
[root@dcos-ansible ansible-dcos]# yum install ansible
Prepare the hosts.yaml file
[root@dcos-ansible ansible-dcos]# pwd
[root@dcos-ansible ansible-dcos]# cp hosts.example.yaml hosts.yaml

[root@dcos-ansible ansible-dcos]# cat hosts.yaml
# Example for an ansible inventory file
        # Public IP Address of the Bootstrap Node
        # Public IP Addresses for the Master Nodes
        # Public IP Addresses for the Agent Nodes
        # Public IP Addresses for the Public Agent Nodes
    # IaaS target for DC/OS deployment
    # options: aws, gcp, azure or onprem
    dcos_iaas_target: 'onprem'

    # Choose the IP Detect Script
    # options: eth0, eth1, ... (or other device name for existing network interface)
    dcos_ip_detect_interface: 'eth0'

    # (internal/private) IP Address of the Bootstrap Node
    dcos_bootstrap_ip: ''

    # (internal/private) IP Addresses for the Master Nodes

    # DNS Resolvers

    # DNS Search Domain
    dcos_dns_search: 'None'

    # Internal Loadbalancer DNS for Masters (only needed for exhibitor: aws_s3)
    dcos_exhibitor_address: 'masterlb.internal'

    # External Loadbalancer DNS for Masters or
    # (external/public) Master Node IP Address (only needed for cli setup)
    dcos_master_address: 'masterlb.external'
Create the setup variables for DC/OS
[root@dcos-ansible ansible-dcos]# pwd
cp group_vars/all.example group_vars/all
enable SSH access on nodes with Ansible

In case of authentication problems when running the Ansible playbooks, repeat the "exec /usr/bin/ssh-agent $SHELL" and "ssh-add" steps below.

ssh-keygen -t rsa -b 4096 -C "admin@it.dbi-services.com" -f ~/.ssh/ansible-dcos
[root@dcos-ansible ansible-dcos]# exec /usr/bin/ssh-agent $SHELL
[root@dcos-ansible ansible-dcos]# ssh-add ~/.ssh/ansible-dcos
Enter passphrase for /root/.ssh/ansible-dcos:
Identity added: /root/.ssh/ansible-dcos (/root/.ssh/ansible-dcos)
Enter lines for initial SSH access on all nodes with ansible in group_vars/all

All systems must have the same username/password combination, otherwise Ansible runs into failures.
In this step you only have to change the last four lines of group_vars/all.

[root@dcos-ansible ansible-dcos]# cat group_vars/all
# Install latest operating system updates
os_system_updates: False

# DC/OS cluster version
dcos_version: '1.11.4'

# If planning to upgrade a previous deployed DC/OS Cluster,
# uncomment the following variable
#dcos_upgrade_from_version: '1.11.3'

# Download URL for DC/OS
dcos_download: "https://downloads.dcos.io/dcos/stable/{{ dcos_version }}/dcos_generate_config.sh"

# Name of the DC/OS Cluster
dcos_cluster_name: 'demo'

# Deploy Mesosphere Enterprise DC/OS or DC/OS OSS?
dcos_deploy_ee_package: False

# Optional if dcos_iaas_target := aws
#dcos_exhibitor: 'aws_s3'
#dcos_aws_access_key_id: '******'
#dcos_aws_secret_access_key: '******'
#dcos_aws_region: 'us-west-2'
#dcos_s3_bucket: 'bucket-name'

# Optional if dcos_iaas_target := azure
#dcos_exhibitor: 'azure'
#dcos_exhibitor_azure_account_name: 'name'
#dcos_exhibitor_azure_account_key: '******'

# Only required when deploying Mesosphere Enterprise DC/OS
dcos_ee_security: 'permissive'
dcos_ee_license_key_contents: '******'
dcos_ee_superuser_username: admin
# Default password:= admin
dcos_ee_superuser_password_hash: "$6$rounds=656000$8CXbMqwuglDt3Yai$ZkLEj8zS.GmPGWt.dhwAv0.XsjYXwVHuS9aHh3DMcfGaz45OpGxC5oQPXUUpFLMkqlXCfhXMloIzE0Xh8VwHJ."

# Configure rexray to enable support of external volumes (only for Mesosphere Enterprise DC/OS)
# Note: Set rexray_config_method: file and edit ./roles/bootstrap/templates/rexray.yaml.j2 for a custom rexray configuration
# options: empty, file
dcos_ee_rexray_config_method: empty

#For initial SSH access on nodes with Ansible
ansible_password: "password"
ansible_become_pass: "password"
#initial_remote_user: root
Change ansible configuration file
inventory = hosts.yaml
host_key_checking = False
remote_user = ansible
roles_path = ./roles
Insert the servers to Ansible hosts file

As most of this file is commented out, I just post the section I added

[root@dcos-ansible ansible-dcos]# cat /etc/ansible/hosts

Finally we can start with the playbooks – Read part 2 for this

The post Deploy DC/OS using Ansible (Part 1) – Getting Started appeared first on Blog dbi services.

What’s OpenDB Appliance ?

Thu, 2018-10-25 10:07

Thanks to the OpenDB Appliance, the "click, hope and pray" approach is a thing of the past. Use the tool developed by dbi services' specialists to make your work easier.

OpenDB Appliance makes life easier, discover why.

Get further information about the OpenDB Appliance.

The post What’s OpenDB Appliance ? appeared first on Blog dbi services.

Some psql features you are maybe not aware of

Thu, 2018-10-25 04:36

It is the time of The 10th Annual PostgreSQL Conference Europe, so this is the perfect time to blog about some tips and tricks around psql you’ll love. psql is such a powerful tool that you really should use it every day. It saves you so much work and is packed with features that makes your life so much easier. In this post we’ll look at some features you maybe didn’t know before.

Let's start with something very simple: you probably know the "\l" shortcut to display all the databases:

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

Did you know you also can pass the shortcuts from your shell directly into psql?

postgres@pgbox:/home/postgres/ [PGDEV] psql -c '\l' postgres
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

But there is even a faster way for retrieving that information:

postgres@pgbox:/home/postgres/ [PGDEV] psql -l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

Did you know you can log the complete psql session to a logfile?

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -L /var/tmp/log postgres
psql (12devel)
Type "help" for help.

postgres=# select 1;
(1 row)

postgres=# select 2;
(1 row)

postgres=# \! cat /var/tmp/log
********* QUERY **********
select 1;

(1 row)

********* QUERY **********
select 2;


You probably know that copy is the fastest way to get data into and out of PostgreSQL. Did you know you can copy from a program?

postgres=# create table lottery ( draw_date date, winning_numbers text, mega_ball integer, multiplier integer );
postgres=# copy lottery from 
                program 'curl https://data.ny.gov/api/views/5xaw-6ayf/rows.csv?accessType=DOWNLOAD' 
                with (header true, delimiter ',', format csv);
COPY 1713
postgres=# select * from lottery limit 5;
 draw_date  | winning_numbers | mega_ball | multiplier 
 2002-05-17 | 15 18 25 33 47  |        30 |           
 2002-05-21 | 04 28 39 41 44  |         9 |           
 2002-05-24 | 02 04 32 44 52  |        36 |           
 2002-05-28 | 06 21 22 29 32  |        24 |           
 2002-05-31 | 12 28 45 46 52  |        47 |           
(5 rows)

That basically means: whatever "program" you use, as long as its output is something COPY understands, you can load it.
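COPY works in the other direction as well: with TO PROGRAM the server pipes the query result into a command. A small sketch, reusing the lottery table from above (the output path is just an example):

```sql
-- Compress the table to a CSV file on the fly, on the server side
copy lottery to program 'gzip > /var/tmp/lottery.csv.gz'
     with (format csv, header true);
```

Keep in mind that both FROM PROGRAM and TO PROGRAM execute the command on the database server, not in the psql client, and therefore require superuser rights or, as of PostgreSQL 11, membership in the pg_execute_server_program role.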

How often do you dynamically build SQL statements you want to execute right after? There is a quite effective solution for that in psql:

postgres=# select 'create table t'||i||'( a int )' from generate_series(1,10) i; \gexec
 create table t1( a int )
 create table t2( a int )
 create table t3( a int )
 create table t4( a int )
 create table t5( a int )
 create table t6( a int )
 create table t7( a int )
 create table t8( a int )
 create table t9( a int )
 create table t10( a int )
(10 rows)


Did you know you can store the result of a query into a variable and use that later in other statements?

postgres=# select 3 as var; \gset
(1 row)

postgres=# \echo :var
postgres=# select * from lottery where multiplier = :var;
 draw_date  | winning_numbers | mega_ball | multiplier 
 2011-02-18 | 05 06 07 30 45  |        42 |          3
 2011-03-01 | 01 12 19 20 47  |        25 |          3
 2011-04-01 | 13 14 35 36 53  |        19 |          3
 2011-04-08 | 06 40 45 50 56  |        11 |          3
 2011-04-15 | 22 23 33 39 48  |        29 |          3
 2011-04-22 | 03 18 46 51 53  |        17 |          3
 2011-04-26 | 19 29 32 38 55  |        15 |          3
 2011-05-06 | 06 18 26 37 41  |         9 |          3
 2011-05-24 | 09 12 21 42 43  |        42 |          3
 2011-05-31 | 28 30 31 37 55  |        13 |          3
 2011-06-03 | 20 23 41 49 53  |        31 |          3
 2011-06-10 | 18 21 27 37 38  |         7 |          3

The last one for today is one of my favorites: As with the Linux watch command you can watch in psql:

postgres=# select now();
 2018-10-23 21:57:17.298083+02
(1 row)

postgres=# \watch
Tue 23 Oct 2018 09:57:19 PM CEST (every 2s)

 2018-10-23 21:57:19.277413+02
(1 row)

Tue 23 Oct 2018 09:57:21 PM CEST (every 2s)

 2018-10-23 21:57:21.364605+02
(1 row)

Btw: You can see that the PostgreSQL Conference Europe is a technical conference when you take a look at the exhibition area during the sessions: Almost empty :)

The post Some psql features you are maybe not aware of appeared first on Blog dbi services.

Getting in touch with shareplex

Thu, 2018-10-25 02:39

Tuesday this week I had the opportunity to get in touch with SharePlex, Quest's replication solution. This product does not rely on Oracle licenses, so it can also be used with Standard Edition. It is a competitor of Oracle's GoldenGate and is used for asynchronous replication, too.
An interesting feature is that not only committed transactions can be replicated, which is an advantage with big transactions. With this feature, replication latencies within seconds can be achieved. Migrations of big databases across operating system, character set and database release boundaries are also possible with this solution. Quest mentions that its costs are lower than those of comparable Oracle products.

It can also be used for reporting, high availability, distributed processing and load sharing.

The post Getting in touch with shareplex appeared first on Blog dbi services.

Oracle OpenWorld 2018: Day 3

Wed, 2018-10-24 23:42

Today my first session was about Data Security in the GDPR Era. It was presented by
Joao Nunes, IT Senior Manager, NOS
Tiago Rocha, Database Administrator, “Nos Comunicaões, Sa.”
Eric Lybeck, Director, PwC
The speakers started by presenting what GDPR is: a new law protecting the data of European citizens. Afterwards they explained what this new law changes for companies.
They talked about the GDPR articles related to Oracle Database Security.
And they concluded by underlining that the technical measures are not the most important part of becoming GDPR compliant: above all, companies must have well-documented processes.
My second session was Oracle Database Security Assessment Tool: Know Your Security Posture Before Hackers Do presented by
Pedro Lopes, DBSAT and EMEA Field Product Manager, Oracle
Marella Folgori, Oracle
Riccardo D’Agostini, Responsabile Progettazione Data Security, Intesa Sanpaolo
This session was about the new Oracle Database Security Assessment Tool (DBSAT), which can help to discover sensitive personal data, identify database users and their entitlements, and understand the configuration and operational security risks. This tool is free for Oracle customers.
I think this picture will help to better understand DBSAT.
They also presented the new features of the upcoming version. Note that currently only CIS rules are included.
My last session was Multitenant Security Features Clarify DBA Role in DevOps Cloud, presented by
Franck Pachot, Database Engineer, CERN
Pieter Van Puymbroeck, Database Administrator, Exitas NV
Franck Pachot needs no introduction, and as usual the session was exciting. It was about security in a multitenant environment.
The speakers explained how privileges can be restricted using lockdown profiles in a multitenant environment.
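As a rough illustration of the idea (my own simplified sketch, not taken verbatim from the talk): a lockdown profile is created in the CDB root and then activated for a PDB, for example to forbid ALTER SYSTEM inside the PDB:

```sql
-- In the CDB root: create the profile and disable a statement
CREATE LOCKDOWN PROFILE app_lockdown;
ALTER LOCKDOWN PROFILE app_lockdown DISABLE STATEMENT = ('ALTER SYSTEM');

-- Inside the PDB: activate the profile
ALTER SYSTEM SET PDB_LOCKDOWN = app_lockdown;
```

The profile name app_lockdown is just an example; lockdown profiles are available since Oracle 12.2.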
And to finish, a beautiful picture.

The post Oracle OpenWorld 2018: Day 3 appeared first on Blog dbi services.

Oracle Open World 2018 D3: Replication what’s new in MySQL 8

Wed, 2018-10-24 20:08

For this last day (25.10.2018) at Oracle Open World, my first technical session was "Replication: what's new in MySQL 8". This session was given by Nuno Carvalho (Oracle MySQL Principal Software Engineer), Vitor Oliveira (Oracle MySQL Performance Architect) and Louis Soares (Oracle MySQL Software Development Director). You can find the list of new features linked with MySQL InnoDB Cluster here.

MySQL - high availabiltiy

They introduced the session with the challenges a database has to face today:

  • We share lots of data
  • All things are distributed
  • We are not sharing a few KB anymore, but MB
  • Going green requires dynamic and adaptive behavior
  • Moving, transforming and processing data quicker than anyone else means having an edge over competitors
  • We expect the service to always be available, even during a migration/upgrade
  • etc…

Some years ago we solved availability concerns with replication, but replication alone is no longer able to solve today's challenges. Replication was perfect for generating and maintaining multiple copies of data at one or more sites. The MySQL replication technology has evolved since version 3.23, where replication was asynchronous. Since version 5.5, thanks to a semi-synchronous replication plugin, we have semi-synchronous replication, and since versions 5.7.17 and 8.0.1 we have group replication.
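To illustrate the middle step of this evolution: semi-synchronous replication is not built in but loaded as a plugin, roughly like this (a sketch; the plugin file extension differs per platform):

```sql
-- On the master (5.5+): load and enable the semi-synchronous plugin
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each replica the counterpart is needed:
-- INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
-- SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

Group replication, in contrast, ships as the group_replication plugin and is what MySQL InnoDB Cluster builds on.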

MySQL Replication evolution

In order to answer today's challenges, the solution must meet these requirements:

  • Replicate: The number of servers should grow or shrink dynamically with as little pain as possible
  • Automate: The primary/secondary role assignment has to be automatic. A new primary has to be elected automatically on primary failures. The read/write modes on the primary and secondaries have to be set up automatically. A consistent view of which server is the primary has to be provided.
  • Integrate: MySQL has to fit with other technologies such as Hadoop, Kafka, Solr, Lucene and so on
  • Scale: Replicate between clusters for disaster recovery. For read scale-out, asynchronous read replicas can be connected to the cluster
  • Enhance: Group Replication for higher availability. Asynchronous replication for read scale-out. A one-stop shell to deploy and manage the cluster. Seamlessly and automatically route the workload to the proper database server in the cluster (in case of failure). Hide failures from the application.

MySQL role change

Enhancements in MySQL 8 (and 5.7)

The following has been enhanced in versions 8.0 (and 5.7):

  • Binary log enhancements, thanks to new metadata: it is easy to decode what is in the binary log. This further facilitates connecting MySQL to other systems using the binary log stream. Capturing data changes through the binary log is simplified. There are also more stats showing where the data is/was at a certain point in time.
  • Operations: Preventing updates on replicas that leave the cluster (automatic protection against involuntary tainting of offline replicas). Primary election weights: choose the next primary by assigning election weights to the candidates. Trigger primary election online: the user tells the current primary to give up its role and assign it to another server (new in 8.0.13). Relaxed member eviction: the user controls the amount of time to wait until the others decide to evict a member from the group.
  • Performance: Highly efficient replication applier (write-set parallelization). Fast Group Replication recovery: replicas come back online quickly by using WRITESET. High cluster throughput: more transactions per second while sustaining zero lag on any replica. Efficient replication of JSON documents: replicate only the changed fields of documents (partial JSON updates).
  • Monitoring: Monitor lag with microsecond precision, from the immediate master and for each stage of the replication applier process. Global group stats available on every server: version, role and more.
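The monitoring information mentioned above is exposed through performance_schema tables. A quick sketch of what can be queried on any member of a group (MySQL 8.0; column lists shortened):

```sql
-- Who is in the group, and which role/version does each member have?
SELECT member_host, member_port, member_state, member_role, member_version
  FROM performance_schema.replication_group_members;

-- Per-worker applier status, including microsecond-precision apply timestamps
SELECT worker_id, last_applied_transaction,
       last_applied_transaction_end_apply_timestamp
  FROM performance_schema.replication_applier_status_by_worker;
```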

MySQL 8 Group Replication

I finally recommend having a look at the blogs from the engineers, where you will find news, technical information and much more: http://mysqlhighavailability.com


The post Oracle Open World 2018 D3: Replication what’s new in MySQL 8 appeared first on Blog dbi services.

My first day at the #pgconfeu 2018 in Lisbon

Wed, 2018-10-24 16:32

After the first Swiss PostgreSQL community dinner yesterday evening, the conference started this morning. dbi services, as a Gold partner of the 10th European conference in Lisbon, got the opportunity to have a booth to present all our Open Infrastructure Services.


For the occasion we decided to announce this morning our brand new video of our OpenDB Appliance, which is a real success: we already have more than one hundred views, and many attendees of the conference came to our booth to get more information about it.

We are in Lisbon! Come and meet us on our booth at PG Conference Europe and ask @Herveschweitzer, @westermanndanie, @Pybrehier about the OpenDB Appliance – our product that makes your life easier by providing functional virtual DB environments in seconds.#pgconfeu #postgres pic.twitter.com/XzH0kl4Xh7

— dbi services (@dbiservices) October 24, 2018

Today I followed many sessions, but one of them was especially interesting for me: "zheap: An answer to PostgreSQL bloat woes" by Amit Kapila.
This presentation introduced the new Postgres storage engine "ZHEAP", which is currently under development; no availability plan for this storage engine exists yet, I think not before 2020. But I'm excited to test this new feature of PostgreSQL.

First, what is this new ZHEAP storage engine? ZHEAP uses a separate UNDO storage to guarantee rollbacks, which currently in Postgres is done by keeping the old and new rows in the table itself. The problem of keeping both versions in the table is that the table bloats.

The presentation is available here on slideshare : link to the presentation

As an experienced Oracle DBA I wanted to test it. Therefore I asked my colleague Daniel Westermann: how can I test it? He said “it’s easy”. I always hear from the Postgres side that things are easy, so I said: let’s do it right now.

At 17h20 I started to clone the git repository of the project https://github.com/EnterpriseDB/zheap
30 minutes later, after compiling and installing my own build, I was ready for testing.

See below some output of the new running development ZHEAP database.

02:42:34 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i undo postgresql-Wed.log 

2018-10-17 02:41:49.498 CEST - 10 - 6544 -  - @ LOG:  background worker "undo worker launcher" (PID 6553) exited with exit code 1

02:42:42 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i "discard worker" postgresql-Wed.log 

2018-10-17 02:41:52.594 CEST - 1 - 6597 -  - @ LOG:  discard worker started

At startup we see the new “undo worker” and “discard worker” processes in the logfile, exactly the ones Amit Kapila had just talked about.
So now I will create a new table with the storage engine “zheap”:

02:50:39 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] sqh
psql (12devel dbi services zheap build)
Type "help" for help.

PSQL>  create table t_zheap(c1 int, c2 varchar) with (storage_engine='zheap');
Time: 12.433 ms

That’s it :-) my first table using the zheap storage is created, and I can start testing.

Trust me, I came back from the session at 17:20, and less than 30 minutes later I had a running test system using the zheap storage engine. It is very impressive how fast you can get access to a PostgreSQL development platform.

Tomorrow I will write a blog post with some tests using zheap, because now it is time for the pgconf.eu party :-)


Cet article My first day at the #pgconfeu 2018 in Lisbon est apparu en premier sur Blog dbi services.

pgconf.eu finally kicked off

Wed, 2018-10-24 06:30

So, finally it started: Magnus kicked off the 10th annual PostgreSQL Conference Europe this morning in Lisbon. With 450 attendees the conference is even bigger this year than it was last year in Warsaw and it will probably be even bigger next year. One can really feel the increasing interest in PostgreSQL in Europe (and probably around the world as well). Even Tom Lane is attending this year.

Conferences are not only about technical content, social events are important as well. You can meet people, have great discussions, enjoy local food and drinks. And that is exactly what we did yesterday evening when the Swiss PostgreSQL community came together for dinner:

Conferences are not only about fun, sometimes you have to work on your queue. On the other hand, working at conferences gives you the possibility to choose nice working places:

… and of course you have to work hard on preparing the booth:

But once you’ve done all that you are ready for the conference:

… and then the hard part starts: there is such an impressive line-up of speakers, where do you go? Not an easy choice, and you will obviously miss one or the other session. But hey, that’s the PostgreSQL community: everybody is open for questions and discussions, just jump in.

One of the benefits of sponsoring is that you get a big thank you when the conference starts and that you can have your logo on the official t-shirt:

And that brings us to the final thoughts of this post: Why are we doing that? The answer is quite simple: Without sponsoring, organizing such a big community event is impossible. As you know PostgreSQL is a pure community project so it depends on the community not only on the technical but also on the financial level. When you make money with community projects you should give something back and sponsoring is one way of doing that.

Finally, we are committed to open source technologies. You can see that e.g. in the events we are organizing, on our blog and events such as this one. Three days of great content, great discussion and fun ahead.

Cet article pgconf.eu finally kicked off est apparu en premier sur Blog dbi services.

Oracle OpenWorld 2018: Day 2

Wed, 2018-10-24 00:04

Today is my second day of Oracle OpenWorld 2018. I can now get to the Moscone Center without GPS (cool), and I decided to follow a MySQL session (my boss will be happy). So my first session was “Using the MySQL Binary Log as a Change Stream” by Luis Soares, Software Development Director, Oracle.
The speaker explained:

  • What the binary logs are
  • How to enable the binary logs and how to manage them
  • How to inspect them
  • What changed in MySQL 8

He also explained how these binary logs can be combined with other tools in a replication setup.
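As a reminder of the basics covered in the session, here is a minimal sketch of how to check and inspect the binary log on a MySQL 8 server (file names will differ on your instance):

```sql
-- The binary log is enabled by default in MySQL 8
SHOW VARIABLES LIKE 'log_bin';

-- List the current binlog files and their sizes
SHOW BINARY LOGS;

-- Peek at the first events of a given file
SHOW BINLOG EVENTS IN 'binlog.000001' LIMIT 5;
```

From the shell, the same files can be decoded with the mysqlbinlog utility, e.g. `mysqlbinlog --verbose binlog.000001`.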

With ProxySQL

With Gh-ost

Using the binary logs we can also undo some transactions (a feature developed not by Oracle but by the community).

And the conclusion
My second session was DBAs Versus Autonomous Databases. It was a very funny session on a very interesting topic. The speaker started with a recap of the different Oracle versions since the beginning.
Another funny picture
And the famous topic
And still the famous question
So everybody will understand that the session was very exciting with many questions.

After this session I decided to follow a session about Oracle Sharding. The session was presented by
Mark Dilman, Senior Director, Software Development, Oracle
Srinagesh Battula, Sr. Principal Product Manager, Oracle
Gairik Chakraborty, Senior Director,Database Administration, Epsilon

They started by defining what Sharding is, how to set it up, how queries are handled, and so on. You can read this blog to understand what Sharding is. And then they talked about the new features in Oracle 19c.


As you can see, the quality of the pictures could be better, but there are a lot of people and it is not easy to take pictures.
Afterwards I visited some booths, and that was the end of my day.
See you tomorrow for my Day 3.

Cet article Oracle OpenWorld 2018: Day 2 est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report

Tue, 2018-10-23 21:45

As a former MySQL consultant, during this second day (24.10.2018) I could not miss a session given by Peter Zaitsev, founder and CEO of Percona. This session, named MySQL 8 Field Report, is a kind of summary of all the new features in MySQL 8.

Peter Zaitsev

During the first slides, Peter presented performance figures related to utf8mb4, since it is the default character set in version 8. These slides had an Oracle logo at the bottom, which is why I prefer to run some tests myself before commenting on these results. However, according to these slides there is a strong performance increase on OLTP databases, both read-only and read-write, compared to MySQL 5.7.


In terms of security Peter spoke about:

  • Roles
  • Breakdown of Super Privileges
  • Password history
  • Faster cached-SHA2 Authentication
  • skip-grants blocks remote connections
  • Multiple Addresses for bind address (8.0.13)
  • Require Password for Password Change (8.0.13)
  • Redo and Undo Logs are now encrypted if Table Encryption is enabled

In the trend of autonomous databases, MySQL 8 is able to tune the following parameters automatically:

  • innodb_buffer_pool_size
  • innodb_log_file_size
  • innodb_flush_method

if you set innodb_dedicated_server to enable this auto-tuning. However, as explained in the documentation: “Only consider enabling this option if your MySQL instance runs on a dedicated server where the MySQL server is able to consume all available system resources. Enabling this option is not recommended if your MySQL instance shares system resources with other applications.”
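In practice this is a single line in the server configuration; a minimal my.cnf sketch:

```ini
[mysqld]
# Let MySQL derive innodb_buffer_pool_size, innodb_log_file_size and
# innodb_flush_method from the memory available on the (dedicated) server
innodb_dedicated_server = ON
```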

Partial In-Place Update for JSON and Invisible Indexes

In MySQL 8 it is no longer required to do a full rewrite of a JSON field: you can now update elements inside a JSON object in place. Initially only update and removal of elements were supported; fuller support has been added in maintenance releases.

Thanks to invisible indexes you can test the impact of dropping an index before actually dropping it. You can set use_invisible_indexes in the optimizer_switch to keep using invisible indexes in a session.
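A minimal sketch of that workflow (table and index names are made up):

```sql
-- Hide the index from the optimizer without dropping it
ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;

-- Watch execution plans and the workload; if something regresses,
-- the index can still be used selectively in one session:
SET SESSION optimizer_switch = 'use_invisible_indexes=on';

-- Happy? Drop it. Not happy? Make it visible again instantly:
ALTER TABLE orders ALTER INDEX idx_customer VISIBLE;
```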

Improved Optimizer Cost Model

Peter gave us an interesting link regarding the MySQL 8.0 optimizer, the unofficial MySQL 8.0 Optimizer Guide. I really advise you to have a look at this very interesting website.

Performance Schema

Regarding the performance schema, MySQL 8.0 provides the following:

Resource Groups

“MySQL supports creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.” – MySQL Documentation

According to Peter’s slides, MySQL 8.0 is about 100% faster (select and update) with resource groups.

Developer features
  • Instant ADD COLUMN (add a column without rebuilding the table):
ALTER TABLE t1 ADD COLUMN d INT DEFAULT 1000, ALGORITHM=INSTANT;
  • Better handling of hot row contention
  • The descending flag in an index definition is no longer ignored
    • Allows efficient handling of ORDER BY a ASC, b DESC queries
  • JSON to table conversion (Labs)
  • Much better GIS
  • Functions in DEFAULT and functional indexes (8.0.13):
CREATE TABLE t2 (a BINARY(16) DEFAULT (uuid_to_bin(uuid())));
CREATE INDEX idx1 ON t1 ((col1 + col2));
  • MySQL Document Store
    • Full-text indexing
    • GeoJSON support

As a summary, Peter concluded that MySQL 8 looks like a release to be excited about, with a lot of new features both for devs and for ops.



Cet article Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud

Tue, 2018-10-23 18:39

During this second day at Oracle Open World 2018 (24.10.2018) I attended Mark Hurd’s keynote named “Accelerating Growth in the Cloud”. Several famous people participated in this keynote:

Ian Bremmer, who is the president and founder of Eurasia Group, according to Oracle “the leading global political risk research and consulting firm”. Mr Bremmer is also the president and founder of GZERO Media.

Sherry Aaholm who is the Vice President and Chief Information Officer of Cummins Inc. “Cummins Inc. is an American Fortune 500 corporation that designs, manufactures, and distributes engines, filtration, and power generation products” – wikipedia.

Sherry Aaholm with Mark Hurd

Navindra Yadav, Founder of Tetration Analytics. “Cisco Tetration offers holistic workload protection for multicloud data centers by enabling a zero-trust model using segmentation” – Cisco

Navindra Yadav and Mark Hurd

Thaddeus Arroyo, Chief Executive Officer of AT&T Business. Mr Arroyo is responsible for the company’s integrated global business solutions organization, which serves more than 3 million business customers. “AT&T is the world’s largest telecommunications company, the second largest provider of mobile telephone services, and the largest provider of fixed telephone services in the United States through AT&T Communications.” – wikipedia

Thaddeus Arroyo with Mark Hurd

Geopolitical analysis with Ian Bremmer

The session started with a videoconference between Mark Hurd and Ian Bremmer on geopolitical topics. China was mentioned as the biggest economy in the world and a technology superpower. The alignment between Chinese companies and the Chinese government was also underlined. Regarding the U.S., they spoke about investment in physical defense versus investment in virtual defense, where there is still a lot to do compared to some other countries.

Disruption as a constant

Mark Hurd then presented a few slides, starting with a short summary named “With disruption as a constant – technology becomes the differentiator”

  • Data is key asset for business to own, analyze, use and secure
  • Virtual assets will win over physical resources
  • Cyber teams are the new future
  • Cloud and integrated technologies, like AI, help organizations lower costs while driving innovation & improving productivity
Past predictions

He then recapped the predictions he made in 2015/2016 for 2025:

  • 80% of production apps will be in the cloud
  • Two SaaS Suite providers will have 80% market share
  • The number of corporate-owned data centers will have decreased by 80%
  • 80% of IT budgets will be spent on cloud services
  • 80% of IT budgets will be spent on business innovation, and only 20% on system maintenance
  • All enterprise data will be stored in the cloud
  • 100% of application development and testing will be conducted in the cloud
  • Enterprise clouds will be the most secure place for IT processing

and the ones he made in 2017 for 2020:

  • More than 50% of all enterprise data will be managed autonomously and also be more secure
  • Even highly regulated industries will shift 50% of their production workloads to cloud
  • 90% of all enterprise applications will feature integrated AI capabilities
  • The top ERP vendor in the cloud will own more than half of the total ERP market

Then he presented a few predictions that were later echoed by Forbes and Gartner Research, to show that the analysts and the press had arrived at the same predictions…

  • In 15 months, 80% of all IT budgets will be committed to cloud apps and solutions – Forbes, Louis Columbus, “State of Cloud Adoption and Security”, 2017
  • 80% of enterprises will have shut down their traditional data centers by 2025 – Gartner Research, Dave Cappuccio, “The Data Center is Dead”, 2018
  • The Cloud Could Be Your Most Secure Place for Data, Niall Browne CISO, Domo, 2017
  • Oracle, Salesforce, and MSFT together have a 70% share of all SaaS revenue – Forrester Research, 10 Cloud Computing predictions for 2018
  • AI Technologies Will Be in Almost Every New Software Product by 2020 – Gartner Research, Jim Hare, AI development strategies, 2017

Mark Hurd then spoke about AI in a slide named “Business Applications with AI”, where he presented a few statistics to better understand how AI (chatbots, blockchain, and so on) can help businesses. Not to mention that all these technologies will be encapsulated in cloud services.

  • ERP Cloud – 30% of a financial analyst’s time (roughly 1 full day a week) is spent doing manual reports in Excel. Using AI, reports become error-free and more insightful.
  • HCM Cloud – 35% of a job recruiter’s day is spent sourcing and screening candidates. This could be cut in half and result in improved employee talent.
  • SCM Cloud – 65% of managers’ time is spent manually tracking the shipment of goods. With blockchain, this could be automated for improved visibility and trust.
  • CX Cloud – 60% of phone-support time on customer issues could be avoided altogether. With integrated CX and AI, issues could be addressed in a single call or via a chatbot.
Mark Hurd’s predictions by 2025

Finally he spoke about his own predictions for 2025: By 2025, all cloud apps will include AI

  • These Cloud apps will further distance themselves from legacy applications.
  • AI will be pervasive and woven into all business apps and platform services.
  • The same will be true for technologies like blockchain.

According to him, by 2025, 85% of interactions with customers will be automated: customer experience is fundamentally changing (and will dramatically improve) with these emerging technologies:

  • AI-based digital assistants increase productivity and humanize experiences
  • AI-driven analytics helps businesses understand the complexity of all customer needs
  • The Internet of Things brings customers closer to the companies that serve them
New I.T jobs by 2025

Regarding I.T jobs the following has been predicted by Mark Hurd:

  • 60% of the I.T Jobs have not been invented yet (But will be by 2025)

and the new jobs in 2025 will be:

  • Data professional (Analysts, Scientists, Engineers)
  • Robot Supervisor
  • Human to Machine UX specialists
  • Smart City Technology Designers
  • AI-Assisted Healthcare Technician

As a summary he concluded with a slide named “Better Business, Better I.T”:

  • Cloud is irrefutable and foundational
  • Next in cloud is accelerated productivity and innovation
  • AI and other technologies will be integrated features
  • Autonomous database software will reduce cost and reduce risk

Mark Hurd during OOW2018

Cet article Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips

Tue, 2018-10-23 16:52

Yesterday (22.10.2018) I attended Janis Griffin’s session “Top Five Query Tuning Tips” at #OOW2018. Janis is Senior DBA / Performance Evangelist at SolarWinds and an Oracle ACE Director. She is specialized in performance tuning.

Janis Griffin - MySQL Tuning Tips

She introduced her session by speaking about the challenges of tuning: “Tuning takes time.”, “You cannot add enough power if the SQL is inefficient”, “You therefore have to monitor wait time”. It sounds basic to say that it is not worth adding CPU or memory when your SQL statements have bad execution plans or are simply inefficient, but that is a common reflex I have often observed at customers.

But tuning is hard: you do not always know where to start (which statement to tune first). It requires expertise in many areas, technical but also business. Of course tuning takes time, and it is not always the priority of the software vendors. Finally, where do you stop once you start tuning a statement?

Janis Griffin - Total Wait Time

Let’s start with the tips…

1. Monitor wait time and understand the total time a query spends in the database. MySQL helps by providing wait events and thread states. Starting with MySQL 5.6 the performance_schema has been greatly improved, and it gained 32 new tables in version 5.7. You also have access to the SYS schema, which is now provided by default with about 100 views.
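As a quick illustration of wait-time monitoring, the sys schema exposes aggregated wait events; a minimal sketch (MySQL 5.7+):

```sql
-- Top wait events server-wide, ordered by total latency
SELECT * FROM sys.waits_global_by_latency LIMIT 5;

-- What each connected session is doing right now
SELECT * FROM sys.session LIMIT 5;
```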

2. Review the execution plan using “explain”, “explain extended”, “explain FORMAT=JSON”, “Optimizer Trace” or “MySQL Workbench”. She also gave us some tips, such as avoiding table aliases since they do not translate into the plan. The “optimizer trace”, available since version 5.6.3, can be enabled with:

set optimizer_trace = "enabled=on";
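A minimal end-to-end sketch of using the optimizer trace (the traced query is just an example):

```sql
SET optimizer_trace = "enabled=on";

SELECT * FROM t1 WHERE c1 = 42;      -- the statement to analyze

-- The trace of the traced statement is exposed as JSON
SELECT query, trace FROM information_schema.OPTIMIZER_TRACE;

SET optimizer_trace = "enabled=off";
```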

Janis Griffin - Statement

3. Gather object information. Look at the table definition and find out whether it is really a table or a view. Get the size of the table using

mysqlshow --status database {table} {column}

Then examine the columns in the WHERE clause and review the selected columns, especially the usage of ‘*’ and scalar columns. Also look at the existing indexes (for a multi-column index, know the left leading column). Make sure the optimizer can use the index: functions on indexed columns can turn the index off, and watch out for implicit conversions. Her tip is to check keys and constraints, because they help create better execution plans.
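A classic example of an implicit conversion turning an index off (table and column names are made up):

```sql
-- phone is a VARCHAR column with an index on it

SELECT * FROM customers WHERE phone = 5551234;
-- numeric literal: MySQL must convert every stored phone value
-- to a number, so the index on phone cannot be used

SELECT * FROM customers WHERE phone = '5551234';
-- string literal: no conversion needed, the index can be used
```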

4. Find the driving table. You need to know the size of the actual data set at each step:

  • In Joins (Right, Left, Outer)
  • What are the filtering predicates
  • When is each filtering predicate applied

But also compare the size of the final result set with the amount of data examined. The goal is to reduce the rows examined.

You also have to check whether you are using the best indexes. Keep in mind that adding an index is not always the right thing to do, since you also have to consider insert, update and delete operations. Consider also the usage of covering and partial indexes.
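A short sketch of a covering index, where the query can be answered from the index alone (names are made up):

```sql
-- The index covers every column the query touches
CREATE INDEX idx_cover ON orders (customer_id, status, order_date);

-- EXPLAIN shows "Using index" in the Extra column: no table rows are read
EXPLAIN SELECT status, order_date FROM orders WHERE customer_id = 42;
```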

5. Engineer Out the Stupid. Look for performance inhibitors such as:

  • Cursor or row-by-row processing
  • Parallel query processing. Not always bad, but have a look at this blog post by Alex Rubin named “Increasing slow query performance with the parallel query execution”
  • Hard-coded hints
  • Nested views
  • Abuse of wildcards (*) or no WHERE clause
  • Code-based SQL generation (e.g. PHP generators, LINQ, nHibernate)
  • Implicit data conversions
  • Non-sargable / scalar functions (e.g. SELECT … WHERE UPPER(first_name) = ‘JANIS’)
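The last point can often be engineered out by making the predicate sargable; a sketch (in MySQL 8.0.13+ a functional index is another option):

```sql
-- Non-sargable: the function hides first_name from any index on it
SELECT * FROM users WHERE UPPER(first_name) = 'JANIS';

-- Sargable alternative if the column uses a case-insensitive collation:
SELECT * FROM users WHERE first_name = 'Janis';

-- Or create a functional index so the original predicate can use it (8.0.13+):
CREATE INDEX idx_upper_name ON users ((UPPER(first_name)));
```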

Finally, you can have a look at Janis’ best practices with MySQL tuning here.

Cet article Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips est apparu en premier sur Blog dbi services.