Yann Neuhaus

dbi services technical blog

Reimaging an old X3-2/X4-2 ODA

Fri, 2018-10-26 10:30
Introduction

X3-2 and X4-2 ODAs are still very capable pieces of hardware in 2018. With 256GB of RAM and at least 16 cores per node, and 18TB of raw disk capacity as standard, these appliances are far from obsolete, even if you probably no longer have hardware support from Oracle.
If you own several ODAs of this kind, hardware support may not really be a problem. If something fails, you can use the other ODA for spare parts.

You probably missed some patches on your old ODAs. Why? Maybe because it's not so easy to patch, and it's even more difficult if you don't patch regularly. Or maybe just because you don't want to add more tasks to your job (applying every single patch feels like never-ending patching).

So if you want to give a second life to your ODA, you’d better reimage it.

Reimaging: how to do it?

Reimaging is the best way to clean up your ODA. Current deployment packages are certified for all ODAs except the V1 (the first generation, before the X3-2).

You first have to download all the needed files from MOS. Make sure to download the deployment packages for the OAKCLI stack, because ODACLI is limited to ODA Lite and newer models.

Assuming you're using a bare metal configuration and you want to deploy the latest ODA version 12.2.1.4.0, you will need the following files:

  • 12999313: ISO for reimaging
  • 28216780: patch for the OAKCLI stack (because reimaging does not actually update BIOS and firmware)
  • 12978712: appliance server for the OAKCLI stack
  • 17770873, 19520042 and 27449599: RDBMS clones for databases 11.2.0.4, 12.1.0.2 and 12.2.0.1

Network and disk configuration didn't change: you still need to provide all the IPs, VIPs, DNS servers and so on for the network, and the disk configuration options are still not very explicit: choosing external backup means you get an 85/15 split between DATA and RECO instead of the default 40/60 split. Don't forget that you can change the redundancy level for each ASM diskgroup: DATA, RECO and REDO can use high redundancy, but normal redundancy gives you 50% more free space (18TB raw is 9TB usable in normal redundancy and only 6TB usable in high redundancy).
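As a quick sanity check of that capacity math, here is a tiny sketch (assuming the usual X3-2/X4-2 shelf of 20 x 900GB HDDs, which roughly gives the 18TB mentioned above; adjust the raw value to your own configuration):

# rough usable-space estimate per ASM redundancy level (illustrative, not oakcli output)
RAW_TB=18                                     # ~20 disks x 900GB
echo "normal redundancy (2 copies): $((RAW_TB / 2)) TB usable"
echo "high redundancy   (3 copies): $((RAW_TB / 3)) TB usable"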

Step 1 – Connect the ISO as a CDROM through the ILOM interface and reimage the servers

I won't give you the extensive procedure for this part: nothing has changed regarding ODA reimaging in recent years.

The first step is to connect to the ILOM and virtually plug the ISO image into the server. Then select the CDROM as the next boot device and power cycle the server. You'll have to repeat this on the other node too. Reimaging takes about 1 hour and is fully automatic. The last step is still the longest one (the post-installation procedure). Once the reimaging is done, each node should have a different default name: oak1 for node 0 and oak2 for node 1 (weird). If both nodes are named oak1, check the cables connected to the shared storage: they must be connected according to the setup poster.

Step 2 – Configure basic network settings

Reimaging always ends with a reboot and, depending on the appliance, it will ask you which kind of network you plan to use: Copper or Fiber. Then, through the ILOM, you need to launch the configure firstnet script:

/opt/oracle/oak/bin/oakcli configure firstnet

Repeat this configuration step on the second node. Now your nodes are visible through the network.

Step 3 – Deploy, cleanup, deploy…

Reimaging was so easy… But from now on it gets a little more tricky. You now need to deploy the appliance, that is: configure the complete network settings, install the whole Oracle stack with Grid Infrastructure, ASM and the latest database engine, and finally create a first database. You will need a graphical interface to configure all these parameters and launch the deployment. So, from the ILOM session, let's unpack the necessary files, start a graphical Linux session and launch the deployment GUI.

oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_1of2.zip
oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_2of2.zip
oakcli unpack -package /opt/dbi/p27449599_122140_Linux-x86-64.zip
startx
oakcli deploy

The graphical interface will help you configure all the parameters, but don't deploy straight away. Back up the configuration file and then edit it:

vi /opt/dbi/deploy_oda01
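By the way, a simple way to do that backup is to keep a timestamped copy next to the original (the grep just gives a quick look at the disk-related parameters discussed at the end of this post):

cp /opt/dbi/deploy_oda01 /opt/dbi/deploy_oda01.$(date +%Y%m%d_%H%M%S)   # keep a copy of the original
grep -Ei "diskgroupredundancys|dbbackuptype" /opt/dbi/deploy_oda01      # check the disk-related settings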

Review all the parameters and adjust them to perfectly match your needs (most of these parameters cannot be changed afterwards).

Now you can launch the real deployment and select your configuration file in the graphical interface:

oakcli deploy

The first try will fail, and that's normal behaviour. The failure is caused by the ASM headers: they are still written on the disks in the storage shelf, because reimaging did nothing to these disks, and already-configured ASM disks make the deployment process fail. Now you can exit the deployment and clean up the failed attempt.

/opt/oracle/oak/onecmd/cleanupDeploy.pl

Unfortunately you cannot do the cleanup if nothing is already deployed, so you need this first failing attempt. Alternatively, you can do the cleanup before reimaging, or manually clean all the disk headers and partitions on the 20 disks before trying to deploy (with a dd), but it probably won't be faster.
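If you really want to go the manual route, a very rough and purely illustrative sketch of such a wipe could look like the following. The device names are placeholders: verify your own shared-disk device list first (for example with oakcli show disk or multipath -ll), because this is destructive:

# DESTRUCTIVE: wipes the partition table and ASM header area of each shared disk
# the /dev/mapper/... names are hypothetical - replace them with your actual 20 shared-disk devices
for dev in /dev/mapper/HDD_E0_S0{0..9} /dev/mapper/HDD_E0_S1{0..9}; do
    dd if=/dev/zero of=${dev} bs=1M count=100
done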

When the cleanup is done, the ODA will reboot and you'll have to configure the firstnet again from the ILOM on both nodes.

/opt/oracle/oak/bin/oakcli configure firstnet

Finally, with a new graphical session you can restart the deployment and this time, if your parameter file is OK, it will be successful. Yes!

startx
oakcli deploy

Step 4 – Patch the server

It may seem weird, but reimaging actually doesn't update the firmware, BIOS and ILOM of the servers, nor the firmware of the disks in the storage shelf. In other words, reimaging is only a software reimaging of the nodes. This is an example of an ODA X4-2 configuration just after reimaging and deploying the appliance:

oakcli show version -detail
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A720                      A7E0
HDD_SHARED {
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
3,c2d4,c2d5,c2d6,c2d
7,c2d8,c2d9,c2d11,c2
d12,c2d13,c2d14,c2d1
5,c2d16,c2d17,c2d18,
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
}
ILOM                      3.2.4.46.a r101689        4.0.2.27.a r123795
BIOS                      25030100                  25060300
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Fortunately, you can apply the patch even if your ODA is already on the same software version as the patch. Well done Oracle.

So let's register the patch files and patch the servers (the servers will probably reboot):


oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_1of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_2of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_3of3.zip
oakcli update -patch 12.2.1.4.0 --server
...


oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
HDD_SHARED {
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
3,c2d4,c2d5,c2d6,c2d
7,c2d8,c2d9,c2d11,c2
d12,c2d13,c2d14,c2d1
5,c2d16,c2d17,c2d18,
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
}
ILOM                      4.0.2.27.a r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Great, our servers are now up-to-date. But storage is still not OK.

Step 5 – Patch the storage

Patching the storage is quite easy (the servers will probably reboot):

oakcli update -patch 12.2.1.4.0 --storage
...

oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
HDD_SHARED                A7E0                      Up-to-date
ILOM                      4.0.2.27.a r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Everything is OK now!

Conclusion – A few more things
  • When redeploying, consider changing the redundancy of the diskgroups and the partitioning of the disks if needed. This can only be configured during deployment. The disk parameters are located in the deployment file (DISKGROUPREDUNDANCYS and DBBackupType); see the sketch after this list
  • Always check that all the components are up-to-date to keep your ODA in a consistent state. Check both nodes, because local patching is also possible, and it makes no sense for the nodes to run different patch levels
  • Don't forget to check/apply your licenses on your ODA, because using Oracle software is for sure not free
  • Be aware that a freshly redeployed ODA will have 12.2 database compatibility on the diskgroups, making the use of ACFS mandatory for your old databases. For me this is a real drawback, considering that ACFS adds unnecessary complexity on top of ASM
  • Don't forget to deploy the other dbhomes according to your needs
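As a purely illustrative sketch of the first point, the relevant entries in the deployment file could look like the following. The parameter names are the ones mentioned above, but the values and exact syntax are assumptions that depend on your configurator version, so compare with the file generated by the GUI:

# excerpt from the deployment configuration file - illustrative values only
DBBackupType=EXTERNAL            # EXTERNAL => 85/15 split between DATA and RECO
DISKGROUPREDUNDANCYS=NORMAL      # redundancy used for the diskgroups (NORMAL vs HIGH)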


Short summary of my PGCONF.EU 2018 conference

Fri, 2018-10-26 10:20

So it's nearly the end of the conference, where I met lots of people from the Postgres community, customers, EDB partners and also contributors/developers of Postgres. During the 3 days I also had the chance to follow many technical sessions, and I will give you short feedback on my favorite ones.
If you want the details of the sessions, go to the conference website: https://www.postgresql.eu/events/pgconfeu2018/schedule


Tour de Data Types: VARCHAR2 or CHAR(255)?

Very interesting session from Andreas Scherbaum, where he presented the most interesting data types with use cases and examples.

What’s new in PostgreSQL 11

Magnus is a well-known presenter, and he gave a good and funny presentation of the PostgreSQL 11 new features.

CREATE STATISTICS – what is it for?

For me it was one of the best presentations: even if it did not deliver a huge amount of information, the quality of the presentation was just perfect. I directly downloaded the slides and added them to my knowledge book: https://github.com/tvondra/create-statistics-talk

Ace it with ACID: PostgreSQL transactions for fun and profit

The presentation was very good, especially if you want to trust your database.

An answer to PostgreSQL bloat woes

Thanks to Amit for the presentation of the new storage engine ZHEAP, and for sharing the information. Directly after the session, Daniel and I started to test this new storage engine. If you want to know more about this topic, see my blog about ZHEAP.

AUTO PLAN TUNING USING FEEDBACK LOOP

It's nice to know which new optimizer tuning features will come out over the next years (feedback, advisor, autonomous, etc.). I am waiting impatiently to get access to the project to be able to test these new features.

See you next year !


How to change oam-config.xml to adapt to Oracle Access Manager configuration change

Fri, 2018-10-26 10:18

After upgrading Oracle Access Manager from 11.1.2.3 to 12.2.1.3, I extended the WebLogic Domain with User Messaging Services to be able to use the Forgot Password feature.
The Oracle Support note ID 2302623.1 gives a good example of how to do this:
Oracle Access Manager 12cps3 (OAM 12.2.1.3.0) Sample Application Demonstrates Forgot Password Flow Using Multi Factor Authentication REST APIs (Doc ID 2302623.1).

But the OAM AdaptiveAuthenticatorPlugin was missing the UmsClientUrl property, and without it there is no way to send mails or SMS containing the security token to the user requesting a password reset.

I decided to add the missing property to the oam-config.xml file. During the modification, I increased the Version in oam-config.xml to inform OAM about the configuration change.
To my surprise, my modifications were reset after the Administration Server was restarted.

This has changed starting from OAM 12.2.1.3. Now the oam-config is stored in the Metadata Database and needs to be exported,
modified and re-imported back to the database. The steps are described in the documentation (here).

I then had to follow these steps, as shown below.
First set JAVA_HOME and PATH to point to the right Java version:
$ export JAVA_HOME=/u00/app/oracle/product/Java/jdk
$ export PATH=$JAVA_HOME/bin:$PATH

The export requires a properties file defining the connection to the OAM repository:
$ more dbschema.properties
oam.entityStore.ConnectString=jdbc:oracle:thin:@vm02:1522/IDM
oam.entityStore.schemaUser=IAM_OAM
oam.entityStore.schemaPassword=Welcome1
oam.importExportDirPath=/home/oracle/OAM_CONFIG
oam.frontending=params=vm03;14100;http
[oracle@vm03 OAM_CONFIG]$
Export the oam-config.xml file:
$ java -cp /u01/app/fmw_oim_12213/idm/oam/server/tools/config-utility/config-utility.jar:/u01/app/fmw_oim_12213/oracle_common/modules/oracle.jdbc/ojdbc8.jar oracle.security.am.migrate.main.ConfigCommand /u02/app/config/domains/idm_domain/ export dbschema.properties
Oct 15, 2018 6:40:44 PM oracle.security.am.migrate.main.command.CommandFactory getCommand
INFO: executable operation: export
oam.exportDirPath=/home/oracle/OAM_CONFIG
oam.exportedFile=oam-config.xml
oam.operation.time=2654
This exports the oam-config.xml file into the local directory. Modify this file and import it back into the DB:
$ java -cp /u01/app/fmw_oim_12213/idm/oam/server/tools/config-utility/config-utility.jar:/u01/app/fmw_oim_12213/oracle_common/modules/oracle.jdbc/ojdbc8.jar oracle.security.am.migrate.main.ConfigCommand /u02/app/config/domains/idm_domain/ import dbschema.properties
Oct 15, 2018 6:43:25 PM oracle.security.am.migrate.main.command.CommandFactory getCommand
INFO: executable operation: import
Oct 15, 2018 6:43:27 PM oracle.security.am.migrate.util.ConfigFileUtil replaceValue
INFO: 191 will be replaced by 192
Oct 15, 2018 6:43:28 PM oracle.security.am.migrate.operation.ImportConfigOperation invoke
INFO: imported config file version to database:192
oam.importDirPath=/home/oracle/OAM_CONFIG
oam.importedFile=oam-config.xml
oam.importedVersion=192
oam.operation.time=2214
During the import, the version is incremented automatically. Take care not to have typos in the oam-config.xml file you import: I'm not sure there is any validation before the import, and the OAM configuration could get corrupted.
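Since I don't know of any validation before the import, a cheap safety net is to at least check that the modified file is still well-formed XML before importing it (xmllint ships with libxml2 on most Linux distributions; the path matches the oam.importExportDirPath used above):

$ xmllint --noout /home/oracle/OAM_CONFIG/oam-config.xml && echo "well-formed" || echo "broken XML - do not import"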


Node Manager not starting after using unpack to install a WebLogic domain on a remote machine.

Fri, 2018-10-26 10:14

I created a domain using the config.sh script spanning several hosts. The pack and unpack commands were used to install the domain on the remote servers.
The pack command:
cd $FMW_HOME/oracle_common/common/bin
./pack.sh -domain=/u02/config/domains/workshop_domain \
-template=/home/weblogic/workshop/lab_DomainCreation/workshop_template.jar \
-template_name=workshop_template \
-managed=true
The newly created jar file was copied to the remote server and the unpack command was run:
cd $FMW_HOME/oracle_common/common/bin
./unpack.sh -domain=/u02/config/domains/workshop_domain \
-template=/home/weblogic/workshop/lab_DomainCreation/workshop_template.jar
Starting the node manager failed because the DemoIdentity Java KeyStore file was missing. This file was always generated in previous WebLogic software versions.

Errors from the node manager log file:
weblogic.nodemanager.common.ConfigException: Identity key store file not found: /u02/config/domains/workshop_domain/security/DemoIdentity.jks
at weblogic.nodemanager.server.SSLConfig.loadKeyStoreConfig(SSLConfig.java:225)
at weblogic.nodemanager.server.SSLConfig.access$000(SSLConfig.java:33)
at weblogic.nodemanager.server.SSLConfig$1.run(SSLConfig.java:118)
at java.security.AccessController.doPrivileged(Native Method)
at weblogic.nodemanager.server.SSLConfig.(SSLConfig.java:115)
at weblogic.nodemanager.server.NMServer.(NMServer.java:169)
at weblogic.nodemanager.server.NMServer.getInstance(NMServer.java:134)
at weblogic.nodemanager.server.NMServer.main(NMServer.java:589)
at weblogic.NodeManager.main(NodeManager.java:31)

Starting from WebLogic 12.2.1.3, the unpack command does not generate the DemoIdentity.jks keystore file anymore. The DemoIdentity JKS file needs to be created manually, or the Node Manager has to be switched to non-SSL. After making sure the java in your PATH is the one used by the WebLogic Domain, run:
cd /u02/config/domains/workshop_domain/security
java utils.CertGen -certfile democert -keyfile demokey -keyfilepass DemoIdentityPassPhrase -noskid
java utils.ImportPrivateKey -certfile democert.pem -keyfile demokey.pem -keyfilepass DemoIdentityPassPhrase -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -alias demoidentity
After this the Node Manager can be started successfully.
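Alternatively, if you go for the non-SSL option mentioned above instead of generating the demo identity keystore, a minimal sketch is to switch the Node Manager listener to plain text in nodemanager.properties (the domain's machines must then also be configured with the Plain Node Manager type instead of SSL):

# $DOMAIN_HOME/nodemanager/nodemanager.properties
SecureListener=false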


Deep dive Postgres at the #pgconfeu conference

Fri, 2018-10-26 03:37

Today I followed many good technical sessions at the European Postgres conference. The Postgres conferences are really technically oriented: you will find no marketing sessions there, and you learn a lot.
As promised yesterday, today I wrote my first blog about the new Postgres storage engine ZHEAP/UNDO, which is a very interesting feature with very interesting results.

Before you continue reading this blog, if you didn't read my blog from yesterday, read it first :-) link

First test : table creation

We create 2 tables, one with the default Postgres storage engine HEAP, and one with the new storage engine ZHEAP.

PSQL> create table heap2 as select a.*, md5(a::varchar), now() from generate_series(1,5000000) a;
 
SELECT 5000000
Time: 12819.369 ms (00:12.819)

PSQL> create table zheap2  with (storage_engine='zheap') as select a.*, md5(a::varchar), now() from generate_series(1,5000000) a;
SELECT 5000000
Time: 19155.004 ms (00:19.155)

You noticed that with Postgres you can choose your storage engine at table level :-). The table creation with ZHEAP is slower, but it is normal because now the UNDO segment has to be created as well.

Second test : size of both tables

Before starting the tests, we will check the size of the HEAP and ZHEAP tables; as announced yesterday, the ZHEAP table should be smaller, because we have less header information.

PSQL>  select pg_size_pretty(pg_relation_size('heap2'));
 pg_size_pretty 
----------------
 365 MB
PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 pg_size_pretty 
----------------
 289 MB

The ZHEAP table is smaller; that's exactly what Amit explained to us yesterday, because the block header with ZHEAP is smaller. If you want to learn more, read his presentation from yesterday. Again, the link is in my blog from yesterday.

Third test : Update on the table

To get the bloat effect on the HEAP table, we will now update the full table and see what happens.

PSQL> update heap2 set a=a+12222222;
UPDATE 5000000
Time: 19834.911 ms (00:19.835)

PSQL> update zheap2 set a=a+12222222;
UPDATE 5000000
Time: 26956.043 ms (00:26.956)

PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 pg_size_pretty 
----------------
 289 MB
PSQL> vacuum heap2;
PSQL> select pg_size_pretty(pg_relation_size('heap2'));
 pg_size_pretty 
----------------
 730 MB

As for the creation, the update takes a bit longer, but the update with ZHEAP writes a lot of information about creating undo segments into the log file. We should test this update again with that logging disabled.
But as you can see, the most important point here is that the ZHEAP table doesn't bloat like the HEAP table: the HEAP table is now 2 times bigger even though I executed a VACUUM.

Fourth test: test of the ROLLBACK

To test the ROLLBACK we have to open first a transaction with BEGIN;

PSQL>  begin;
BEGIN
PSQL>* update heap2 set a=a+12222222;
UPDATE 5000000
Time: 22071.462 ms (00:22.071)
PSQL> * rollback;
ROLLBACK
Time: 1.437 ms

PSQL> begin;
BEGIN
PSQL> * update zheap2 set a=a+12222222;
UPDATE 5000000
Time: 28210.845 ms (00:28.211)
PSQL> * rollback;
ROLLBACK
Time: 0.567 ms

This is the part where I'm the most surprised: the ROLLBACK for ZHEAP is as fast as for HEAP, which I can't explain, because with ZHEAP the undo blocks have to be applied, whereas HEAP tables only mark the transaction as aborted. I will leave deeper tests to my colleague Daniel Westermann :-).

Fifth test : check of the query performance

For this test we have to first flush the filesystem cache and to restart the database, to be sure that nothing is cached.

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] pgstop 
waiting for server to shut down.... done
server stopped

postgres@dbi-pg-tun:/home/postgres/ [ZHEAP] sudo sync
postgres@dbi-pg-tun:/home/postgres/ [ZHEAP] sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] pgstart
waiting for server to start.... done
server started

Now we are ready for the last test

postgres@dbi-pg-tun:/home/postgres/zheap/ [ZHEAP] sqh
PSQL> select count(*) from heap2;
  count  
---------
 5000000
Time: 3444.869 ms (00:03.445)

PSQL> select count(*) from zheap2;
  count  
---------
 5000000
Time: 593.894 ms

As you can see, the query performance is improved significantly for full table scans :-), because the table didn't bloat like the HEAP table. For your information, I additionally ran the full update 2 more times before restarting the database, and the HEAP table is now 3 times bigger.

PSQL> select pg_size_pretty(pg_relation_size('heap2'));
 pg_size_pretty 
----------------
 1095 MB

Time: 0.508 ms
PSQL> select pg_size_pretty(pg_relation_size('zheap2'));
 pg_size_pretty 
----------------
 289 MB
Conclusion of these tests
  • Postgres allows enabling or disabling the usage of UNDO at the table level
  • We are surprised how fast the ROLLBACKs are; this must be tested again, as I don't understand why
  • SELECT performance is improved significantly for full table scans :-)
  • The storage will not bloat anymore with ZHEAP
  • Finally, only the updates are a little bit slower

It will be interesting to follow the discussions around this feature on the mailing list.


Deploy DC/OS using Ansible (Part 2) – Playbooks

Fri, 2018-10-26 02:02

Finally, after all the configuration stuff is done, we can run the playbooks.

Create SSH Access

First, SSH access on all nodes needs to be set up. For this, the access-onprem.yml playbook is used.
Be careful: I used CentOS on my system, so I commented out the apt-get and Debian-based parts.
If you want to run the playbook on another operating system, adjust it carefully.

---
# This playbook enable access to all ansible targets via ssh

- name: setup the ansible requirements on all nodes
  hosts: all:!localhost
  #hosts: all
  serial: 20
  remote_user: "{{ initial_remote_user | default('root') }}"
  become: true
  tasks:

#    - name: attempt to update apt's cache
#      raw: test -e /usr/bin/apt-get && apt-get update
#      ignore_errors: yes

#    - name: attempt to install Python on Debian-based systems
#      raw: test -e /usr/bin/apt-get && apt-get -y install python-simplejson python
#      ignore_errors: yes

    - name: attempt to install Python on CentOS-based systems
      raw: test -e /usr/bin/yum && yum -y install python-simplejson python
      ignore_errors: yes

    - name: Create admin user group
      group:
        name: admin
        system: yes
        state: present

    - name: Ensure sudo is installed
      package:
        name: sudo
        state: present

    - name: Remove user centos
      user:
        name: centos
        state: absent
        remove: yes

    - name: Create Ansible user
      user:
        name: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        shell: /bin/bash
        comment: "Ansible management user"
        home: "/home/{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        createhome: yes
        password: "admin123"

    - name: Add Ansible user to admin group
      user:
        name: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        groups: admin
        append: yes

    - name: Add authorized key
      authorized_key:
        user: "{{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}"
        state: present
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/ansible-dcos.pub') }}"

    - name: Copy sudoers file
      command: cp -f /etc/sudoers /etc/sudoers.tmp

    - name: Backup sudoers file
      command: cp -f /etc/sudoers /etc/sudoers.bak

    - name: Ensure admin group can sudo
      lineinfile:
        dest: /etc/sudoers.tmp
        state: present
        regexp: '^%admin'
        line: '%admin ALL=(ALL) NOPASSWD: ALL'
      when: ansible_os_family == 'Debian'

    - name: Ensure admin group can sudo
      lineinfile:
        dest: /etc/sudoers.tmp
        state: present
        regexp: '^%admin'
        insertafter: '^root'
        line: '%admin ALL=(ALL) NOPASSWD: ALL'
      when: ansible_os_family == 'RedHat'

    - name: Replace sudoers file
      shell: visudo -q -c -f /etc/sudoers.tmp && cp -f /etc/sudoers.tmp /etc/sudoers

    - name: Test Ansible user's access
      local_action: "shell ssh {{ lookup('ini', 'remote_user section=defaults file=../ansible.cfg') }}@{{ ansible_host }} 'sudo echo success'"
      become: False
      register: ansible_success

    - name: Remove Ansible SSH key from bootstrap user's authorized keys
      lineinfile:
        path: "{{ ansible_env.HOME }}/.ssh/authorized_keys"
        state: absent
        regexp: '^ssh-rsa AAAAB3N'
      when: ansible_success.stdout == "success"
Start the Playbook for the SSH access
[root@dcos-ansible ansible-dcos]# pwd
/root/ansible-dcos

[root@dcos-ansible ansible-dcos]# ansible-playbook plays/access-onprem.yml
PLAY [setup the ansible requirements on all nodes] 
****************************************************************************************
TASK [Gathering Facts]
****************************************************************************************
ok: [192.168.22.103]
ok: [192.168.22.102]
ok: [192.168.22.104]
ok: [192.168.22.101]
ok: [192.168.22.100]

[....]

PLAY RECAP 
**************************************************************************************
192.168.22.100             : ok=14   changed=6    unreachable=0    failed=0
192.168.22.101             : ok=14   changed=6    unreachable=0    failed=0
192.168.22.102             : ok=14   changed=6    unreachable=0    failed=0
192.168.22.103             : ok=14   changed=6    unreachable=0    failed=0
192.168.22.104             : ok=14   changed=6    unreachable=0    failed=0

This is not the whole output of the playbook. Important to know: during "TASK [Test Ansible user's access]" I had to enter the Ansible password 5 times. After that the playbook finished successfully.

Ping the servers using Ansible

After the playbook has finished successfully, do a test ping:

[root@dcos-ansible ansible-dcos]# ansible all -m ping
192.168.22.102 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.22.100 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.22.104 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.22.101 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.22.103 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

In case of trouble it is really helpful to use the “-vvv” option.
It is also possible to ping only one server using

ansible 192.168.22.100 -m ping
Rollout the DC/OS installation
[root@dcos-ansible ansible-dcos]# pwd
/root/ansible-dcos
[root@dcos-ansible ansible-dcos]# cat plays/install.yml
---
- name: setup the system requirements on all nodes
  hosts: all
  serial: 20
  become: true
  roles:
    - common
    - docker

- name: generate the DC/OS configuration
  hosts: bootstraps
  serial: 1
  become: true
  roles:
    - bootstrap

- name: deploy nodes
  hosts: [ masters, agents, agent_publics]
  serial: 20
  become: true
  roles:
    - node-install

[root@dcos-ansible ansible-dcos]# pwd
/root/ansible-dcos
[root@dcos-ansible ansible-dcos]# ansible-playbook plays/install.yml

PLAY [setup the system requirements on all nodes]
*********************************************************************

TASK [Gathering Facts]
*********************************************************************
ok: [192.168.22.102]
ok: [192.168.22.104]
ok: [192.168.22.101]
ok: [192.168.22.100]
[....]

In case some installation steps fail, Ansible will skip that server and give you the opportunity to rerun the playbook on the failed servers.

ansible-playbook plays/install.yml --limit @/root/ansible-dcos/plays/install.retry

If you cannot connect to your master via browser: check /var/log/messages for error messages. In my case the master was looking for the eth0 interface, which isn't available on my VM.
Just change the detect_ip script as follows, according to your network interface. The same step is needed on all agent nodes as well.

[root@dcos-master bin]# cat /opt/mesosphere/bin/detect_ip
#!/usr/bin/env bash
set -o nounset -o errexit
export PATH=/usr/sbin:/usr/bin:$PATH
echo $(ip addr show enp0s8 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
Install the CLI

For those of you who prefer a CLI, just install it on your master.

[root@dcos-master ~]#  [ -d /usr/local/bin ] || sudo mkdir -p /usr/local/bin
[root@dcos-master ~]# curl https://downloads.dcos.io/binaries/cli/linux/x86-64/dcos-1.11/dcos -o dcos
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.9M  100 13.9M    0     0  1313k      0  0:00:10  0:00:10 --:--:-- 3920k
[root@dcos-master ~]# sudo mv dcos /usr/local/bin
[root@dcos-master ~]# chmod +x /usr/local/bin/dcos
[root@dcos-master ~]# dcos cluster setup http://192.168.22.101
If your browser didn't open, please go to the following link:

http://192.168.22.101/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

Enter OpenID Connect ID Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik9UQkVOakZFTWtWQ09VRTRPRVpGTlRNMFJrWXlRa015Tnprd1JrSkVRemRCTWpBM1FqYzVOZyJ9.eyJlbWFpbCI6Imp1bGlhLmd1Z2VsQGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwczovL2Rjb3MuYXV0aDAuY29tLyIsInN1YiI6Imdvb2dsZS1vYXV0aDJ8MTA2NTU2OTI5OTM1NTc2MzQ1OTEyIiwiYXVkIjoiM3lGNVRPU3pkbEk0NVExeHNweHplb0dCZTlmTnhtOW0iLCJpYXQiOjE1NDA0NTA4MTcsImV4cCI6MTU0MDg4MjgxN30.M8d6dT4QNsBmUXbAH8B58K6Q2XvnCKnEd_yziiijBXHdW18P2OnJEYrKa9ewvOfFhyisvLa7XMU3xeBUhoqX5T6mGkQo_XUlxXM82Ohv3zNCdqyNCwPwoniX4vU7R736blcLRx1aB8TJnydNb0H0IzEAVzaYBQ1CRV-4a9KsiMXKBBPlskOSvek4b_FRghA6hsjMA2eO-G5r3B6UgHo6CCwdwVrhsOygvJ5NwDC0xiFrnkW-SjZRZztCN8cRj7b40VH43uY6R2ibxJfE7SaGpbWzLyp7juUJ766WXar3O7ww42bYIqLnAx6YmWG5kFeJnmJGT-Rdmhl2JuvdABoozA

That's it, now you can configure and use your DC/OS. Always keep in mind: the ntpd service is really essential for a working DC/OS node. Also check /var/log/messages, it really helps!
One little thing I have to mention at the end: don't rely too much on the official documentation and the troubleshooting guide, it does not help as much as expected…
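To make sure time synchronization really is in place on every node, a quick check could look like this (assuming ntpd is used, not chronyd):

systemctl enable --now ntpd     # make sure the daemon runs and starts at boot
ntpq -p                         # the peer list should show at least one reachable time source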


Deploy DC/OS using Ansible (Part 1) – Getting Started

Fri, 2018-10-26 02:01

To start with this topic I want to briefly explain some basics. Afterwards I will show you how to prepare the configuration files.

Ansible

Ansible is an open source automation utility. It is used for orchestration and configuration as well as for the administration of PCs/servers. You could say: okay, but we have Puppet or SaltStack or another framework, why should I use Ansible? Ansible differs! It has no agent installed on the systems; it just needs a working SSH connection and a Python installation. For deploying changes, just write an Ansible playbook, a simple YAML file. For further information about Ansible just visit the Ansible homepage.

DC/OS

Mesosphere's DC/OS is a distributed operating system based on Apache Mesos (read more). It gives you the possibility to manage multiple machines as if they were one. Resource management, process placement and scheduling, simplified installation and management of distributed services can be automated using DC/OS. DC/OS comes with a web interface as well as a command-line interface, which can be used for monitoring and remote management.
DC/OS can be used as a cluster manager, container platform and operating system. A quite mighty tool. To explain all the functionalities would go too far.

To set up the minimal DC/OS using Ansible you need at least six servers:
– one Ansible
– one Bootstrap
– one Master
– two private Agents
– one public Agent

Bootstrap Node

In general, the bootstrap node is the essential one when you spin up the cluster. It is used as a staging location for the software installation and stores the DC/OS configuration and the bootstrap files for DC/OS.

Master Node

The DC/OS masters manage "the rest" of the cluster. It's possible to run one or more master nodes. They contain most of the DC/OS components and the Mesos master process. They also provide the web interface, which gives a nice graphical view of the DC/OS cluster.

Private Agent Node

The private agents do not allow access from outside the cluster. They provide resources to the cluster.

Public Agent Node

The public agent node is a node on the network that allows access from outside of DC/OS. The public agent is primarily used as a kind of load balancer and to reduce the surface that could be accessed by attackers.
In a cluster you need fewer public agent nodes than private agent nodes, as they can handle multiple agent services.

I just described the node components used for the installation. If you want more and deeper insights into DC/OS and its architecture, you can find detailed documentation on the Mesosphere homepage.

Architecture

For the minimal installation of this DC/OS you need six servers:
Each of the servers has a public and a private IP, except the Ansible server.


To install DC/OS using Ansible I used the playbooks from GitHub. But as usual there is some specific stuff when you test it in your own environment.

Prepare the Ansible server

Install git and get ansible-dcos from GitHub
[root@dcos-ansible ~]# yum install git -y

[root@dcos-ansible ~]# git clone https://github.com/dcos-labs/ansible-dcos
Cloning into 'ansible-dcos'...
remote: Enumerating objects: 69, done.
remote: Counting objects: 100% (69/69), done.
remote: Compressing objects: 100% (48/48), done.
remote: Total 1957 (delta 25), reused 42 (delta 15), pack-reused 1888
Receiving objects: 100% (1957/1957), 312.95 KiB | 0 bytes/s, done.
Resolving deltas: 100% (982/982), done.

[root@dcos-ansible ~]# cd ansible-dcos/
[root@dcos-ansible ansible-dcos]# git tag
v0.1.0-alpha
v0.2.0-alpha
v0.2.1-alpha
v0.3.0-alpha
v0.4.0-alpha
v0.5.0-dcos-1.10
v0.6.0-dcos-1.11
v0.6.1-dcos-1.11
v0.7.0-dcos-1.11
[root@dcos-ansible ansible-dcos]# git checkout v0.7.0-dcos-1.11
Note: checking out 'v0.7.0-dcos-1.11'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 1f2cf7d... Prepare version v0.7.0-dcos-1.11
Install ansible
[root@dcos-ansible ansible-dcos]# yum install ansible
Prepare the hosts.yaml file
[root@dcos-ansible ansible-dcos]# pwd
/root/ansible-dcos
[root@dcos-ansible ansible-dcos]# cp hosts.example.yaml hosts.yaml

[root@dcos-ansible ansible-dcos]# cat hosts.yaml
---
# Example for an ansible inventory file
all:
  children:
    bootstraps:
      hosts:
        # Public IP Address of the Bootstrap Node
        192.168.22.100:
    masters:
      hosts:
        # Public IP Addresses for the Master Nodes
        192.168.22.101:
    agents:
      hosts:
        # Public IP Addresses for the Agent Nodes
        192.168.22.102:
        192.168.22.103:
    agent_publics:
      hosts:
        # Public IP Addresses for the Public Agent Nodes
        192.168.22.104:
  vars:
    # IaaS target for DC/OS deployment
    # options: aws, gcp, azure or onprem
    dcos_iaas_target: 'onprem'

    # Choose the IP Detect Script
    # options: eth0, eth1, ... (or other device name for existing network interface)
    dcos_ip_detect_interface: 'eth0'

    # (internal/private) IP Address of the Bootstrap Node
    dcos_bootstrap_ip: '10.0.0.1'

    # (internal/private) IP Addresses for the Master Nodes
    dcos_master_list:
      - 10.0.0.2

    # DNS Resolvers
    dcos_resolvers:
      - 8.8.4.4
      - 8.8.8.8

    # DNS Search Domain
    dcos_dns_search: 'None'

    # Internal Loadbalancer DNS for Masters (only needed for exhibitor: aws_s3)
    dcos_exhibitor_address: 'masterlb.internal'

    # External Loadbalancer DNS for Masters or
    # (external/public) Master Node IP Address (only needed for cli setup)
    dcos_master_address: 'masterlb.external'
Create the setup variables for DC/OS
[root@dcos-ansible ansible-dcos]# pwd
/root/ansible-dcos
cp group_vars/all.example group_vars/all
Enable SSH access on nodes with Ansible

In case of authentication problems when using Ansible playbooks, repeat the steps with "exec…" and "ssh-add…".

ssh-keygen -t rsa -b 4096 -C "admin@it.dbi-services.com" -f ~/.ssh/ansible-dcos
[root@dcos-ansible ansible-dcos]# exec /usr/bin/ssh-agent $SHELL
[root@dcos-ansible ansible-dcos]# ssh-add ~/.ssh/ansible-dcos
Enter passphrase for /root/.ssh/ansible-dcos:
Identity added: /root/.ssh/ansible-dcos (/root/.ssh/ansible-dcos)
Enter lines for initial SSH access on all nodes with ansible in group_vars/all

All systems must have the same username/password combination, otherwise Ansible runs into failures.
In this step you only have to change the last 4 lines of group_vars/all.

[root@dcos-ansible ansible-dcos]# cat group_vars/all
---
# Install latest operating system updates
os_system_updates: False

# DC/OS cluster version
dcos_version: '1.11.4'

# If planning to upgrade a previous deployed DC/OS Cluster,
# uncomment the following variable
#dcos_upgrade_from_version: '1.11.3'

# Download URL for DC/OS
dcos_download: "https://downloads.dcos.io/dcos/stable/{{ dcos_version }}/dcos_generate_config.sh"

# Name of the DC/OS Cluster
dcos_cluster_name: 'demo'

# Deploy Mesosphere Enterprise DC/OS or DC/OS OSS?
dcos_deploy_ee_package: False

# Optional if dcos_iaas_target := aws
#dcos_exhibitor: 'aws_s3'
#dcos_aws_access_key_id: '******'
#dcos_aws_secret_access_key: '******'
#dcos_aws_region: 'us-west-2'
#dcos_s3_bucket: 'bucket-name'

# Optional if dcos_iaas_target := azure
#dcos_exhibitor: 'azure'
#dcos_exhibitor_azure_account_name: 'name'
#dcos_exhibitor_azure_account_key: '******'

# Only required when deploying Mesosphere Enterprise DC/OS
dcos_ee_security: 'permissive'
dcos_ee_license_key_contents: '******'
dcos_ee_superuser_username: admin
# Default password:= admin
dcos_ee_superuser_password_hash: "$6$rounds=656000$8CXbMqwuglDt3Yai$ZkLEj8zS.GmPGWt.dhwAv0.XsjYXwVHuS9aHh3DMcfGaz45OpGxC5oQPXUUpFLMkqlXCfhXMloIzE0Xh8VwHJ."

# Configure rexray to enable support of external volumes (only for Mesosphere Enterprise DC/OS)
# Note: Set rexray_config_method: file and edit ./roles/bootstrap/templates/rexray.yaml.j2 for a custom rexray configuration
# options: empty, file
dcos_ee_rexray_config_method: empty

#For initial SSH access on nodes with Ansible
ansible_password: "password"
ansible_become_pass: "password"
#initial_remote_user: root
Change ansible configuration file
[defaults]
inventory = hosts.yaml
host_key_checking = False
remote_user = ansible
roles_path = ./roles
[all:vars]
ansible_connection=ssh
ansible_user=ansible
ansible_ssh_pass=password
Insert the servers into the Ansible hosts file

As most of this file is commented out, I just post the section I added

[root@dcos-ansible ansible-dcos]# cat /etc/ansible/hosts
[dcos_servers]
192.168.22.100
192.168.22.101
192.168.22.102
192.168.22.103
192.168.22.104

Finally we can start with the playbooks – Read part 2 for this


What’s OpenDB Appliance ?

Thu, 2018-10-25 10:07

Thanks to the OpenDB Appliance, the "click, hope and pray" approach is a thing of the past. Use the tool developed by dbi services' specialists to make your work easier.

OpenDB Appliance makes life easier, discover why.

Get further information about the OpenDB Appliance.


Some psql features you are maybe not aware of

Thu, 2018-10-25 04:36

It is the time of the 10th Annual PostgreSQL Conference Europe, so this is the perfect time to blog about some tips and tricks around psql you'll love. psql is such a powerful tool that you really should use it every day. It saves you so much work and is packed with features that make your life so much easier. In this post we'll look at some features you maybe didn't know about before.

Let's start with something very simple: you probably know the "\l" shortcut to display all the databases:

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

Did you know you also can pass the shortcuts from your shell directly into psql?

postgres@pgbox:/home/postgres/ [PGDEV] psql -c '\l' postgres
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

But there is even a faster way for retrieving that information:

postgres@pgbox:/home/postgres/ [PGDEV] psql -l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres

Did you know you can log the complete psql session to a logfile?

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -L /var/tmp/log postgres
psql (12devel)
Type "help" for help.

postgres=# select 1;
 ?column? 
----------
        1
(1 row)

postgres=# select 2;
 ?column? 
----------
        2
(1 row)

postgres=# \! cat /var/tmp/log
********* QUERY **********
select 1;
**************************

 ?column? 
----------
        1
(1 row)

********* QUERY **********
select 2;
**************************

postgres=# 

You probably know that copy is the fastest way to get data into and out of PostgreSQL. Did you know you can copy from a program?

postgres=# create table lottery ( draw_date date, winning_numbers text, mega_ball integer, multiplier integer );
CREATE TABLE
postgres=# copy lottery from 
                program 'curl https://data.ny.gov/api/views/5xaw-6ayf/rows.csv?accessType=DOWNLOAD' 
                with (header true, delimiter ',', format csv);
COPY 1713
postgres=# select * from lottery limit 5;
 draw_date  | winning_numbers | mega_ball | multiplier 
------------+-----------------+-----------+------------
 2002-05-17 | 15 18 25 33 47  |        30 |           
 2002-05-21 | 04 28 39 41 44  |         9 |           
 2002-05-24 | 02 04 32 44 52  |        36 |           
 2002-05-28 | 06 21 22 29 32  |        24 |           
 2002-05-31 | 12 28 45 46 52  |        47 |           
(5 rows)

That basically means: whatever "program" you use, as long as the output is something COPY understands, you can use it.
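The same works in the other direction: COPY ... TO PROGRAM pipes the query result into any command you like. A small sketch reusing the table from above (the target path is chosen arbitrarily, and note that copying from/to a program requires superuser rights or, from PostgreSQL 11 on, the pg_execute_server_program role):

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "copy lottery to program 'gzip > /var/tmp/lottery.csv.gz' with (format csv, header true)" postgres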

How often do you dynamically build SQL statements you want to execute right after? There is a quite effective solution for that in psql:

postgres=# select 'create table t'||i||'( a int )' from generate_series(1,10) i; \gexec
         ?column?          
---------------------------
 create table t1( a int )
 create table t2( a int )
 create table t3( a int )
 create table t4( a int )
 create table t5( a int )
 create table t6( a int )
 create table t7( a int )
 create table t8( a int )
 create table t9( a int )
 create table t10( a int )
(10 rows)

CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE

Did you know you can store the result of a query into a variable and use that later in other statements?

postgres=# select 3 as var; \gset
 var 
-----
   3
(1 row)

postgres=# \echo :var
3
postgres=# select * from lottery where multiplier = :var;
 draw_date  | winning_numbers | mega_ball | multiplier 
------------+-----------------+-----------+------------
 2011-02-18 | 05 06 07 30 45  |        42 |          3
 2011-03-01 | 01 12 19 20 47  |        25 |          3
 2011-04-01 | 13 14 35 36 53  |        19 |          3
 2011-04-08 | 06 40 45 50 56  |        11 |          3
 2011-04-15 | 22 23 33 39 48  |        29 |          3
 2011-04-22 | 03 18 46 51 53  |        17 |          3
 2011-04-26 | 19 29 32 38 55  |        15 |          3
 2011-05-06 | 06 18 26 37 41  |         9 |          3
 2011-05-24 | 09 12 21 42 43  |        42 |          3
 2011-05-31 | 28 30 31 37 55  |        13 |          3
 2011-06-03 | 20 23 41 49 53  |        31 |          3
 2011-06-10 | 18 21 27 37 38  |         7 |          3
...

The last one for today is one of my favorites: As with the Linux watch command you can watch in psql:

postgres=# select now();
              now              
-------------------------------
 2018-10-23 21:57:17.298083+02
(1 row)

postgres=# \watch
Tue 23 Oct 2018 09:57:19 PM CEST (every 2s)

              now              
-------------------------------
 2018-10-23 21:57:19.277413+02
(1 row)

Tue 23 Oct 2018 09:57:21 PM CEST (every 2s)

              now              
-------------------------------
 2018-10-23 21:57:21.364605+02
(1 row)

Btw: You can see that the PostgreSQL Conference Europe is a technical conference when you take a look at the exhibition area during the sessions: Almost empty :)


Getting in touch with shareplex

Thu, 2018-10-25 02:39

Tuesday this week I had the opportunity to get in touch with SharePlex, Quest's replication solution. This product does not rely on Oracle licenses, so it can also be used with Standard Edition. It is a competitor of Oracle's GoldenGate and is used for asynchronous replication, too.
An interesting feature is that not only committed transactions can be replicated, which is an advantage with big transactions. With this feature, replication latencies within seconds can be achieved. Migrations of big databases across operating system, character set and database release boundaries are also possible with this solution. Quest mentions that costs are lower than those of comparable Oracle products.

It can also be used for reporting, high availability, distributed processing and load sharing.


Oracle OpenWorld 2018: Day 3

Wed, 2018-10-24 23:42

Today my first session was about GDPR: Data Security in the GDPR Era. It was presented by:
Joao Nunes, IT Senior Manager, NOS
Tiago Rocha, Database Administrator, “Nos Comunicaões, Sa.”
Eric Lybeck, Director, PwC
The speakers started by presenting what GDPR is: a new law protecting the data of European citizens. Then they explained what this new law changes for companies.
They talked about the GDPR articles related to Oracle Database Security.
And they concluded by underlining that the technical part of being compliant with GDPR is not the most important one, and that companies must have well-documented processes.
My second session was Oracle Database Security Assessment Tool: Know Your Security Posture Before Hackers Do presented by
Pedro Lopes, DBSAT and EMEA Field Product Manager, Oracle
Marella Folgori, Oracle
Riccardo D’Agostini, Responsabile Progettazione Data Security, Intesa Sanpaolo
This session was about the new Oracle Database Security Assessment Tool (DBSAT) which can help to discover sensitive personal data, identify database users and their entitlements, and understand the configuration and operational security risks. This tool is free for Oracle customers.
I think this picture will help to better understand DBSAT:
[Picture: DBSAT overview]
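If you want to try DBSAT yourself, the basic flow is a two-step collect/report run. A minimal sketch (connection string and file names are placeholders; check the README shipped with the tool for the exact options of your version):

# run from the unzipped dbsat directory as the Oracle software owner
./dbsat collect dbsat_user@orcl /tmp/dbsat_orcl      # gathers configuration, users and privileges
./dbsat report /tmp/dbsat_orcl                       # generates the findings report (HTML/XLS/JSON/TXT)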
They also presented new features of the upcoming version. Note that currently only CIS rules are included.
My last session was Multitenant Security Features Clarify DBA Role in DevOps Cloud, presented by:
Franck Pachot, Database Engineer, CERN
Pieter Van Puymbroeck, Database Administrator, Exitas NV
Franck Pachot needs no introduction, and as usual the session was exciting. It was about security in a multitenant environment.
The speakers explained how privileges can be restricted using lockdown profiles in such an environment.
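To give a flavour of what such a restriction looks like, here is a minimal, hypothetical sketch (profile and PDB names are made up, and the statements reflect the standard 12.2 lockdown profile syntax rather than what the speakers actually demoed):

sqlplus / as sysdba <<'EOF'
-- in CDB$ROOT: create a profile and disable a parameter change and a feature bundle
CREATE LOCKDOWN PROFILE app_lockdown;
ALTER LOCKDOWN PROFILE app_lockdown DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('CURSOR_SHARING');
ALTER LOCKDOWN PROFILE app_lockdown DISABLE FEATURE = ('NETWORK_ACCESS');
-- activate the profile for one PDB only
ALTER SESSION SET CONTAINER = mypdb;
ALTER SYSTEM SET pdb_lockdown = 'app_lockdown';
EOF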
And to finish, a beautiful picture:
[Picture]


Oracle Open World 2018 D3: Replication what’s new in MySQL 8

Wed, 2018-10-24 20:08

For this last day (25.10.2018) at Oracle Open World, my first technical session was "Replication: what's new in MySQL 8". This session was given by Nuno Carvalho (Oracle MySQL Principal Software Engineer), Vitor Oliveira (Oracle MySQL Performance Architect) and Louis Soares (Oracle MySQL Software Development Director). You can find the list of new features linked with MySQL InnoDB Cluster here.

[Figure: MySQL high availability]

They introduced the session with the challenges a database has to face today:

  • We share lots of data
  • All things distributed
  • We are no longer sharing a few KB but MB
  • Going green requires dynamic and adaptive behavior
  • Moving, transforming and processing data quicker than anyone else means having an edge over competitors
  • We expect the service to always be available, even in case of a migration/upgrade
  • etc…

Some years ago we solved availability concerns with replication, but replication alone is no longer able to solve today's challenges. Replication was perfect for generating and reproducing multiple copies of data at one or more sites. The MySQL replication technology has evolved since version 3.23, where replication was asynchronous: since version 5.5, thanks to the semi-synchronous replication plugin, we have semi-synchronous replication, and now, since versions 5.7.17 and 8.0.1, we have group replication.

MySQL Replication evolution

In order to answer today's challenges, the solution must meet these requirements:

  • Replicate: The number of servers should grow or shrink dynamically with as little pain as possible
  • Automate: The primary/secondary role assignment has to be automatic. A new primary has to be elected automatically on primary failure. The read/write modes on primary and secondaries have to be set up automatically. A consistent view of which server is the primary has to be provided.
  • Integrate: MySQL has to fit with other technologies such as Hadoop, Kafka, Solr, Lucene and so on
  • Scale: Replicate between clusters for disaster recovery. For read scale-out, asynchronous read replicas can be connected to the cluster
  • Enhance: Group replication for higher availability. Asynchronous replication for read scale-out. One-stop shell to deploy and manage the cluster. Seamlessly and automatically route the workload to the proper database server in the cluster (in case of failure). Hide failures from the application.

[Figure: MySQL role change]

Enhancements in MySQL 8 (and 5.7)

The following has been enhanced in version 8 and 5.7:

  • Binary log enhancements. Thanks to new metadata it is easy to decode what is in the binary log, which further facilitates connecting MySQL to other systems using the binary log stream. Capturing data changes through the binary log is simplified. There are also more stats showing where the data is/was at a certain point in time.
  • Operations: Preventing updates on replicas that leave the cluster – automatic protection against involuntary tainting of offline replicas. Primary election weights – choose the next primary by assigning election weights to the candidates. Trigger primary election online – the user tells the current primary to give up its role and assign it to another server (new in 8.0.13). Relaxed member eviction – the user controls the amount of time to wait until others decide to evict a member from the group.
  • Performance: Highly efficient replication applier – write set parallelization. Fast group replication recovery – replicas are quickly online by using WRITESET. High cluster throughput – more transactions per second while sustaining zero lag on any replica. Efficient replication of JSON documents – replicate only the changed fields of documents (partial JSON updates). A few of the corresponding server switches are sketched after this list.
  • Monitoring: Monitor lag with microsecond precision – from the immediate master and for each stage of the replication applier process. Global group stats available on every server – version, role and more.
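
Several of these features map directly to server options; the following is only a hedged sketch of the switches involved, not a recommended configuration:

-- richer binary log metadata (8.0.1+), easier to decode for tools reading the binlog stream
SET GLOBAL binlog_row_metadata = 'FULL';
-- WRITESET-based dependency tracking for a more parallel replication applier
-- (requires transaction_write_set_extraction = XXHASH64)
SET GLOBAL binlog_transaction_dependency_tracking = 'WRITESET';
-- replicate only the changed parts of JSON documents
SET GLOBAL binlog_row_value_options = 'PARTIAL_JSON';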

MySQL 8 Group Replication

I finally recommend having a look at the engineers' blogs, where you will find news, technical information and much more: http://mysqlhighavailability.com

This article Oracle Open World 2018 D3: Replication what's new in MySQL 8 appeared first on Blog dbi services.

My first day at the #pgconfeu 2018 in Lisbon

Wed, 2018-10-24 16:32

After the first Swiss PostgreSQL community dinner yesterday evening, the conference started this morning. As a Gold partner of the 10th European PostgreSQL conference in Lisbon, dbi services got the opportunity to have a booth to present all our Open Infrastructure Services.


For the occasion we decided to announce this morning our brand new video of our OpenDB Appliance, which is a real success: we already have more than one hundred views, and many attendees of the conference came to our booth to get more information about it.

We are in Lisbon! Come and meet us on our booth at PG Conference Europe and ask @Herveschweitzer, @westermanndanie, @Pybrehier about the OpenDB Appliance – our product that makes your life easier by providing functional virtual DB environments in seconds.#pgconfeu #postgres pic.twitter.com/XzH0kl4Xh7

— dbi services (@dbiservices) October 24, 2018

Today I followed many sessions, but one that was especially interesting for me was "zheap: An answer to PostgreSQL bloat woes" by Amit Kapila.
This presentation introduced the new Postgres storage engine "ZHEAP", which is currently under development; there is currently no availability plan for this storage engine, I think not before 2020. But I'm excited to test this new feature of PostgreSQL.

First, what is this new ZHEAP storage engine? ZHEAP allows the usage of a separate UNDO area to guarantee rollbacks, whereas PostgreSQL currently does this by keeping the old and new rows in the table itself. The problem with keeping both versions in the table is that the table will bloat.

The presentation is available here on slideshare : link to the presentation

As an experienced Oracle DBA I wanted to test it. Therefore I asked my colleague Daniel Westermann: how can I test it? He said "it's easy". I always hear from the Postgres side that it's easy, so I said: let's do it now.

At 17h20 I started to clone the git repository of the project https://github.com/EnterpriseDB/zheap
30 minutes later, after compiling and installing my own build, I was ready for testing.
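
For reference, a minimal sketch of such a source build, assuming the usual PostgreSQL build dependencies are installed (the prefix, paths and options are examples, not the exact commands I used):

git clone https://github.com/EnterpriseDB/zheap
cd zheap
./configure --prefix=/u01/app/postgres/product/zheap
make -j 4 && make install
/u01/app/postgres/product/zheap/bin/initdb -D /u02/pgdata/zheap
/u01/app/postgres/product/zheap/bin/pg_ctl -D /u02/pgdata/zheap -l /tmp/zheap.log start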

See below some output from the newly running ZHEAP development database.

02:42:34 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i undo postgresql-Wed.log 

2018-10-17 02:41:49.498 CEST - 10 - 6544 -  - @ LOG:  background worker "undo worker launcher" (PID 6553) exited with exit code 1

02:42:42 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i "discard worker" postgresql-Wed.log 

2018-10-17 02:41:52.594 CEST - 1 - 6597 -  - @ LOG:  discard worker started

At startup we see the new "undo worker" and "discard worker" processes in the logfile, exactly what Amit Kapila talked about.
So now I will test creating a new table with the storage_engine "ZHEAP":

02:50:39 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] sqh
psql (12devel dbi services zheap build)
Type "help" for help.

PSQL>  create table t_zheap(c1 int, c2 varchar) with (storage_engine='zheap');
CREATE TABLE
Time: 12.433 ms
PSQL> 

That’s it :-) my first table using the zheap storage is created, and I can start testing.

Trust me, I came back from the session at 17:20 and less than 30 minutes later I had a running test system using the ZHEAP storage engine. It is very impressive how fast you can get access to a Postgres development platform.

Tomorrow I will write a blog post where I will run some tests using ZHEAP, because now it's time for the PGconf.eu party :-)


This article My first day at the #pgconfeu 2018 in Lisbon appeared first on Blog dbi services.

pgconf.eu finally kicked off

Wed, 2018-10-24 06:30

So, finally it started: Magnus kicked off the 10th annual PostgreSQL Conference Europe this morning in Lisbon. With 450 attendees the conference is even bigger this year than it was last year in Warsaw and it will probably be even bigger next year. One can really feel the increasing interest in PostgreSQL in Europe (and probably around the world as well). Even Tom Lane is attending this year.

Conferences are not only about technical content, social events are important as well. You can meet people, have great discussions, enjoy local food and drinks. And that is exactly what we did yesterday evening when the Swiss PostgreSQL community came together for dinner:

Conferences are not only about fun, sometimes you have to work on your queue. Working at conferences, on the other hand, gives you the possibility to choose nice working places:

… and of course you have to work hard on preparing the booth:

But once you’ve done all that you are ready for the conference:

… and then the mess starts: There is such an impressive line up of speakers, where do you go? Not an easy choice and you will obviously miss one or the other session. But hey, that’s the PostgreSQL community: Everybody is open for questions and discussions, just jump in.

One of the benefits of sponsoring is that you get a big thank you when the conference starts and that you can have your logo on the official t-shirt:

And that brings us to the final thoughts of this post: Why are we doing that? The answer is quite simple: Without sponsoring, organizing such a big community event is impossible. As you know PostgreSQL is a pure community project so it depends on the community not only on the technical but also on the financial level. When you make money with community projects you should give something back and sponsoring is one way of doing that.

Finally, we are committed to open source technologies. You can see that e.g. in the events we are organizing, on our blog and at conferences such as this one. Three days of great content, great discussions and fun ahead.

This article pgconf.eu finally kicked off appeared first on Blog dbi services.

Oracle OpenWorld 2018: Day 2

Wed, 2018-10-24 00:04

Today is my second day of Oracle OpenWorld 2018. I can now go to the Moscone Center without GPS (cool), and I decided to follow a MySQL session (my boss will be happy). So my first session was Using the MySQL Binary Log as a Change Stream by Luis Soares, Software Development Director, Oracle.
The speaker explained what the binary logs are, how to initialize and manage them, how to inspect them, and what changed in MySQL 8.
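
For reference, a minimal sketch of the kind of commands used to inspect the binary logs (the file name is only an example):

-- from a mysql client session
SHOW BINARY LOGS;
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'binlog.000042' LIMIT 10;
-- from the shell, decode row events in a readable form:
--   mysqlbinlog --base64-output=decode-rows --verbose binlog.000042 | less
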
He also explained how these binary logs can be combined with other tools in a replication context.

With ProxySQL

With Gh-ost

Using the binary logs we can also undo some transactions (a feature developed not by Oracle but by the community).

My second session was DBAs Versus Autonomous Databases. It was a very funny session on a very interesting topic. The speaker started by recapping the different versions of Oracle since the beginning, before getting to the famous topic and the still famous question. So everybody will understand that the session was very exciting, with many questions.

After this session I decided to follow a session about Oracle Sharding. The session was presented by:
Mark Dilman, Senior Director, Software Development, Oracle
Srinagesh Battula, Sr. Principal Product Manager, Oracle
Gairik Chakraborty, Senior Director, Database Administration, Epsilon

They started by defining what sharding is, how to set it up, how to manage queries and so on. You can read this blog to understand what sharding is. Afterwards they talked about the new features in Oracle 19c.


As you can see, the quality of the pictures could be better, but there are a lot of people and it's not easy to take pictures.
Afterwards I visited some stands and that was the end of my day.
See you tomorrow for my Day 3.

This article Oracle OpenWorld 2018: Day 2 appeared first on Blog dbi services.

Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report

Tue, 2018-10-23 21:45

As a former MySQL consultant, during this second day (24.10.2018) I couldn't miss a session given by Peter Zaitsev, founder and CEO of Percona. This session, named MySQL 8 Field Report, is a kind of summary of all the new features in MySQL 8.

Peter Zaitsev

During the first slides, Peter presented performance figures related to utf8mb4, since it's the default character set in version 8. These slides had an Oracle logo at the bottom, that's why I prefer to run some tests myself before speaking about these results. However, according to these slides there is a strong performance increase for OLTP databases, in read-only as well as in read-write workloads, compared to MySQL 5.7.

Security

In terms of security, Peter spoke about the following (a small sketch of roles and password history follows the list):

  • Roles
  • Breakdown of Super Privileges
  • Password history
  • Faster cached-SHA2 Authentication
  • skip-grants blocks remote connections
  • Multiple Addresses for bind address (8.0.13)
  • Require Password for Password Change (8.0.13)
  • Redo and Undo Logs are now encrypted if Table Encryption is enabled
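
A minimal, hedged sketch of two of these items (the user, role and schema names are made up):

-- roles and a breakdown of privileges
CREATE ROLE 'app_read';
GRANT SELECT ON appdb.* TO 'app_read';
-- password history: the last 5 passwords cannot be reused
CREATE USER 'bob'@'%' IDENTIFIED BY 'S3cret#2018' PASSWORD HISTORY 5;
GRANT 'app_read' TO 'bob'@'%';
SET DEFAULT ROLE 'app_read' TO 'bob'@'%';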

In the trend of autonomous databases, MySQL 8 is able to tune the following parameters automatically:

  • innodb_buffer_pool_size
  • innodb_log_file_size
  • innodb_flush_method

if you enable innodb_dedicated_server. However, as explained in the documentation: "Only consider enabling this option if your MySQL instance runs on a dedicated server where the MySQL server is able to consume all available system resources. Enabling this option is not recommended if your MySQL instance shares system resources with other applications."
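
A quick way to verify the effect, assuming innodb_dedicated_server has been enabled in the configuration file (it is not a dynamic variable, so it must be set at startup):

-- values derived automatically from the available memory
SELECT @@innodb_dedicated_server,
       @@innodb_buffer_pool_size,
       @@innodb_log_file_size,
       @@innodb_flush_method;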

Partial In-Place Update for JSON and invisible Index

It’s not anymore required to do a full rewrite of a field in MySQL 8 you can now update field in JSON object. However only update and removal of element is supported. Full support has been added in maintenance releases.

Thanks to invisible indexes, you can test the impact of dropping an index before actually dropping it. Within a session you can set the optimizer_switch flag use_invisible_indexes to still make use of invisible indexes.
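
A short sketch with a hypothetical table t1 and index idx_col1:

-- make the index invisible to the optimizer without dropping it
ALTER TABLE t1 ALTER INDEX idx_col1 INVISIBLE;
-- check whether queries still perform well without it
EXPLAIN SELECT * FROM t1 WHERE col1 = 10;
-- temporarily allow this session to use invisible indexes again
SET SESSION optimizer_switch = 'use_invisible_indexes=on';
-- not happy with the result? make it visible again instead of rebuilding it
ALTER TABLE t1 ALTER INDEX idx_col1 VISIBLE;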

Improved Optimizer Cost Model

Peter gave us an interesting link regarding the MySQL 8.0 optimizer, the unofficial MySQL 8.0 Optimizer Guide. I really advise you to have a look at this very interesting website.

Performance Schema

Peter also mentioned several Performance Schema improvements in MySQL 8.0.

Resource Groups

“MySQL supports creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.” – MySQL Documentation

According to Peter’s slides MYSQL 8.0 is about 100% faster (select and update) with resource groups.

Developer features
  • Instant Add Column (add column without rebuilding table)
Alter table t1 add column d int default 1000, algorithm=instant;
  • Better Handling of Hot Row Contention
  • Descending flag in index definition is no more ignored
    • Allows efficient handling of ORDER BY A ASC, B DESC queries
  • JSON to Table Conversion (Labs)
  • Much Better GIS
  • Functions in DEFAULT (8.0.13)
Create table t2 (a binary(16) default (uuid_to_bin(uuid())));
Create index idx1 ON t1 ((col1+col2));
  • MySQL Document Store
    • Full Text Indexing
    • GeoJSON Support

As a summary, Peter concluded by saying that MySQL 8 looks like a release to be excited about, with a lot of new features both for Devs and Ops.

This article Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report appeared first on Blog dbi services.

Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud

Tue, 2018-10-23 18:39

During this second day at Oracle Open World 2018 (24.10.2018) I attended Mark Hurd's keynote named "Accelerating Growth in the Cloud". Several famous people participated in this keynote, such as:

Ian Bremmer, who is the president and founder of Eurasia Group and, according to Oracle, "the leading global political risk research and consulting firm". Mr Bremmer is also the president and founder of GZERO Media.

Sherry Aaholm who is the Vice President and Chief Information Officer of Cummins Inc. “Cummins Inc. is an American Fortune 500 corporation that designs, manufactures, and distributes engines, filtration, and power generation products” – wikipedia.

Sherry Aaholm with Mark Hurd

Navindra Yadav, Founder of Tetration Analytics. “Cisco Tetration offers holistic workload protection for multicloud data centers by enabling a zero-trust model using segmentation” – Cisco

Navindra Yadav and Mark Hurd

Thaddeus Arroyo, Chief Executive Officer of AT&T Business. Mr Arroyo is responsible for the company's integrated global business solutions organization, which serves more than 3 million business customers. "AT&T is the world's largest telecommunications company, the second largest provider of mobile telephone services, and the largest provider of fixed telephone services in the United States through AT&T Communications." – wikipedia

Thaddeus Arroyo with Mark Hurd

Geopolitical analysis with Ian Bremmer

The session started with a videoconference between Mark Hurd and Ian Bremmer on geopolitical topics. China was mentioned as the biggest economy in the world and a technology superpower. The alignment between Chinese companies and the Chinese government was also underlined. Regarding the U.S., they spoke about investment in physical defense vs. investment in virtual defense, where there is still a lot to do compared to some other countries.

Disruption as a constant

Mark Hurd then presented a few slides, starting with a short summary named "With disruption as a constant – technology becomes the differentiator"

  • Data is key asset for business to own, analyze, use and secure
  • Virtual assets will win over physical resources
  • Cyber teams are the new future
  • Cloud and integrated technologies, like AI, help organizations lower costs while driving innovation & improving productivity
Past predictions

He then recapped the predictions he made in 2015/2016 for 2025

  • 80% of production apps will be in the cloud
  • Two SaaS Suite providers will have 80% market share
  • The number of corporate-owned data centers will have decreased by 80%
  • 80% of IT budgets will be spent on cloud services
  • 80% of IT budgets will be spent on business innovation, and only 20% on system maintenance
  • All enterprise data will be stored in the cloud
  • 100% of application development and testing will be conducted in the cloud
  • Enterprise clouds will be the most secure place for IT processing

and the ones he made in 2017 for 2020

  • More than 50% of all enterprise data will be managed autonomously and also be more secure
  • Even highly regulated industries will shift 50% of their production workloads to cloud
  • 90% of all enterprise applications will feature integrated AI capabilities
  • The top ERP vendor in the cloud will own more than half of the total ERP market

Then he presented a few predictions that were made afterwards by Forbes and Gartner Research, to show that the analysts and the press had come to the same predictions…

  • In 15 months, 80% of all IT budgets will be committed to cloud apps and solutions – Forbes, Louis Columbus, “State of Cloud Adoption and Security”, 2017
  • 80% of enterprises will have shut down their traditional data centers by 2025 – Gartner Research, Dave Cappuccio, "The Data Center is Dead", 2018
  • The Cloud Could Be Your Most Secure Place for Data, Niall Browne CISO, Domo, 2017
  • Oracle, Salesforce, and MSFT together have a 70% share of all SaaS revenue – Forrester Research, 10 Cloud Computing predictions for 2018
  • AI Technologies Will Be in Almost Every New Software Product by 2020 – Gartner Research, Jim Hare, AI development strategies, 2017

Mark Hurd then spoke about AI in a slide named "Business Applications with AI", where he presented a few statistics in order to better understand how AI (chatbots, blockchain, and so on) can help businesses. Not to mention that all these technologies will be encapsulated in cloud services.

  • ERP Cloud – 30% of a Financial Analyst's time (roughly 1 full day a week) is spent doing manual reports in Excel. Using AI, reports become error free and more insightful.
  • HCM Cloud – 35% of job recruiter’s day spent in sourcing and screening candidates. This could be cut in half, and result in improved employee talent.
  • SCM Cloud – 65% of Managers time spent manually tracking the shipment of goods. With Blockchain, this could be automated for improved visibility and trust.
  • CX Cloud – 60% of phone-support time on customer issues could be avoided altogether. With integrated CX and AI, issues could be addressed in a single call or via a chatbot.
Mark Hurd’s predictions by 2025

Finally he spoke about his own predictions for 2025: By 2025, all cloud apps will include AI

  • These Cloud apps will further distance themselves from legacy applications.
  • AI will be pervasive and woven into all business apps and platform services.
  • The same will be true for technologies like blockchain.

According to him by 2025, 85% of interactions with customers will be automated: Customer experience is fundamentally changing (and will dramatically improve) with these emerging technologies:

  • AI-based Digital Assistants increase productivity and humanize experiences
  • AI-driven Analytics helps businesses understand the complexity of all customer needs
  • Internet of Things brings customers closer to companies that serve them
New I.T jobs by 2025

Regarding I.T jobs the following has been predicted by Mark Hurd:

  • 60% of the I.T Jobs have not been invented yet (But will be by 2025)

and the new jobs in 2025 will be:

  • Data professionals (Analysts, Scientists, Engineers)
  • Robot Supervisors
  • Human to Machine UX specialists
  • Smart City Technology Designers
  • AI-Assisted Healthcare Technicians

As a summary he concludes with a slide named “Better Business, Better I.T”

  • Cloud is irrefutable and foundational
  • Next in cloud is accelerated productivity and innovation
  • AI and other technologies will be integrated features
  • Autonomous database software will reduce cost and reduce risk

Mark Hurd during OOW2018

This article Oracle Open World 2018 D2: Mark Hurd's keynote – Accelerating Growth in the Cloud appeared first on Blog dbi services.

Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips

Tue, 2018-10-23 16:52

Yesterday (22.10.2018) I participated in Janis Griffin's session "Top Five Query Tuning Tips" at #OOW2018. Janis is Senior DBA / Performance Evangelist for SolarWinds and an Oracle ACE Director. She is specialized in performance tuning.

Janis Griffin - MySQL Tuning Tips

She introduced her session by speaking about the challenges of tuning: "Tuning takes time", "You cannot add enough power if the SQL is inefficient", "You therefore have to monitor wait time". It sounds basic to say that it isn't worth adding CPU or memory when your SQL statements have a bad execution plan or are simply inefficient, but that's a common reflex I have already observed at customers.

But tuning is hard: you do not always know where to start (which statement to tune first). It requires expertise in many areas, technical but also business. Of course tuning takes time, and it's not always the priority of software vendors. Finally, where do you stop once you start tuning a statement?

Janis Griffin - Total Wait Time

Let’s start with the tips…

1. Monitor wait time and understand the total time a query spends in the database. MySQL helps by providing wait events and thread states. Starting with MySQL 5.6 the Performance Schema has been greatly improved, with 32 new tables in version 5.7. You can also use the SYS schema, which is now provided by default with about 100 views.
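
For example, a quick way to see where the time goes with the sys schema (just a sketch; both views ship with the default sys schema):

-- top wait classes for the whole instance
SELECT * FROM sys.waits_global_by_latency LIMIT 5;
-- statements ranked by total latency
SELECT query, exec_count, total_latency FROM sys.statement_analysis LIMIT 5;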

2. Review the execution plan by using "explain", "explain extended", "explain FORMAT=JSON", the optimizer trace or MySQL Workbench. She also gave us some tips, such as avoiding table aliases since they don't translate in the plan. The optimizer trace, available since version 5.6.3, can be used with:

set optimizer_trace="enabled=on";
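
Once enabled, you run the statement (or an EXPLAIN of it) and read the trace from the information schema; a minimal sketch with a hypothetical orders table:

EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42;
-- the trace of the last traced statement
SELECT trace FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace = "enabled=off";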

Janis Griffin - Statement

3. Gather object information. Have a look at the table definition and find out whether it's really a table or a view. Get the size of the table by using:

mysqlshow --status database {table} {column}

Then examine the columns in the WHERE clause and review the selected columns, especially the usage of '*' and scalar columns. Also have a look at the existing indexes (if multi-column, know the left leading column). Make sure the optimizer can use the index: functions on indexed columns can turn off index usage, and look for implicit conversions as well. Her tip is to check keys and constraints, because they help the optimizer create a better execution plan.
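
A small, hedged illustration with a hypothetical users table: the first predicate wraps the indexed column in a function and cannot use the index, while a functional index (MySQL 8.0.13+) or a rewritten predicate can:

-- the index on first_name cannot be used: the column is wrapped in a function
SELECT * FROM users WHERE UPPER(first_name) = 'JANIS';
-- option 1 (MySQL 8.0.13+): create a functional index matching the expression
CREATE INDEX idx_users_upper_first_name ON users ((UPPER(first_name)));
-- option 2: with a case-insensitive collation, compare the column directly
SELECT * FROM users WHERE first_name = 'Janis';
-- list existing indexes and their leading columns
SHOW INDEX FROM users;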

4. Find the driving table. You need to know the size of the actual data sets of each step:

  • In Joins (Right, Left, Outer)
  • What are the filtering predicates
  • When is each filtering predicate applied

But also compare the size of the final result set with the data examined. The goal is to reduce the rows examined.

You also have to check whether you are using the best indexes. Keep in mind that adding indexes is not always the right thing to do, since you have to consider insert, update and delete operations. Also consider the usage of covering and partial indexes.

5. Engineer Out the Stupid. Look for performance inhibitors such as:

  • Cursor or row by row processing
  • Parallel query processing. Not always bad but have a look on this blog from Alex Rubin named “increasing slow query performance with parallel query execution
  • Hard-coded hints
  • Nested views
  • Abuse of wildcards (*) or no WHERE clause
  • Code-based SQL generation (e.g. PHP generators, LINQ, nHibernate)
  • Implicit data conversions
  • Non-sargable / scalar functions (e.g. SELECT … WHERE upper(first_name) = 'JANIS')

Finally, you can have a look at Janis' best practices for MySQL tuning here.

This article Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips appeared first on Blog dbi services.

SQL Server availability groups, SQL Browser and Shared Memory considerations

Tue, 2018-10-23 13:06

A few weeks ago, my colleagues and I discussed availability groups and network considerations for one of our customers, including disabling the SQL Browser service and the shared memory protocol. The point was that disabling both features may lead to unexpected behaviors when creating availability groups.

blog 145 - 0 - AG network banner

Let’s start with the SQL Browser service. It is not uncommon to disable this service at customer shops and to use directly SQL Server listen ports instead. But if you go trough the availability group wizard you will find there a plenty of blockers when actions than require connecting to the secondary replica as adding a database, performing a failover and so on.

Disabling the SQL Browser service doesn't mean you cannot reach your SQL Server instance by using the named instance format SERVER\INSTANCE. Some scenarios work perfectly, including connecting from the local server through shared memory or using SQL Server aliases. Let's say my infrastructure includes 2 AG replicas, vmtest2012r04\SQL2014 and vmtest2012r05\SQL2014. SQL Browser is disabled and shared memory is enabled on each. There are no aliases either. If you try to connect from vmtest2012r04\SQL2014 by using the named instance format, it will work on the local replica (through shared memory) but it won't work if you try to connect to the remote replica vmtest2012r05\SQL2014. In the latter case, you will have to use the SERVER,PORT format as shown below:

C:\Users\dab>sqlcmd -S vmtest2012r204\SQL2014 -Q"SELECT 'OK' AS connection"
connection
---------
OK

(1 rows affected)

C:\Users\dab>sqlcmd -S vmtest2012r205\SQL2014 -Q"SELECT 'OK' AS connection"
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. .
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login timeout expired.
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online..

C:\Users\dab>sqlcmd -S vmtest2012r205,1453 -Q"SELECT 'OK' AS connection"
connection
---------
OK

 

But I guess this is not a big surprise for you. This kind of configuration works well with availability groups, but at the cost of some compromises. Indeed, creating an availability group remains pretty easy: you just have to keep using the SERVER,PORT format when the wizard asks for connection information.

blog 145 - 1 - AG wizard - replica

But the game is different when adding a database to the AG, or for any operation that requires connecting to the replicas. In this case the wizard forces you to connect to the second replica by using the SERVER\INSTANCE format, so you get stuck at this step.

blog 145 - 2 - AG wizard - add DB

The only way is to go through a T-SQL script (or a PowerShell command) and use the SERVER,PORT format instead. Probably something that may be fixed by Microsoft in the future.
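
For illustration, a minimal T-SQL sketch of that manual path, assuming an AG named AG2014 and a database MyDb that has already been restored WITH NORECOVERY on the secondary (names and ports are placeholders):

-- on the primary, connected with the SERVER,PORT format (e.g. sqlcmd -S vmtest2012r204,<port>)
ALTER AVAILABILITY GROUP [AG2014] ADD DATABASE [MyDb];
GO
-- on the secondary (e.g. sqlcmd -S vmtest2012r205,1453)
ALTER DATABASE [MyDb] SET HADR AVAILABILITY GROUP = [AG2014];
GO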

Let’s add now to the equation disabling the shared memory protocol on each replica. I met some customers who disable it to meet their internal best practices because their applications are not intended to connect locally on the same server than their database engine. At the first glance, this is not a bad idea but we may get in trouble with operations performed on availability group architectures. This is a least what we experienced every time we were in this specific context. For instance, if I try to create an availability group, I will face the following timeout error message:

blog 145 - 3 - AG wizard - shared memory disabled

This is a pretty weird issue, so to get more details we have to take a look at the cluster log. Here is the interesting sample of messages we may find:

...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].  (268435455)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [HYT00] [Microsoft][SQL Server Native Client 11.0]Login timeout expired (0)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (268435455)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] Could not connect to SQL Server (rc -1)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] SQLDisconnect returns following information
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08003] [Microsoft][ODBC Driver Manager] Connection not open (0)
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] Failed to connect to SQL Server
...2018/10/22-20:42:51.451 ERR   [RHS] Online for resource AG2014 failed.

 

It seems that RHS.exe, through the resource associated with my AG, is not able to connect to the SQL Server replica during the initialization phase. According to the above cluster log, the ODBC connection seems to be limited to connecting with the SERVER\INSTANCE format, and as far as I know there is no interface to change it with the AG cluster resource DLL (thanks to the Microsoft guys for confirming this point). Therefore, disabling both SQL Browser and shared memory means the AG cannot be brought online, because a communication channel cannot be established between the primary and the cluster service. My friend MVP Christophe Laporte also ran some funny tests trying to create custom DSN connections, without luck. So, the simplest way to fix it, if you want to keep the SQL Browser service disabled, is to enable shared memory on each replica. Another workaround may consist in using SQL aliases, but that leads to a static configuration that requires documenting your architecture well.

In a nutshell, disabling SQL Browser limits the AG operations that can be done through the GUI. Disabling shared memory as well may have a bigger impact on the underlying WSFC infrastructure that you have to be aware of. According to my tests, this behavior is the same from SQL Server 2012 to SQL Server 2017 (on Windows), regardless of the WSFC version.

Hope this helps!

This article SQL Server availability groups, SQL Browser and Shared Memory considerations appeared first on Blog dbi services.

Oracle OpenWorld 2018: Day 1

Tue, 2018-10-23 01:07

The first session I attended today was Oracle Active Data Guard: Best Practices and New Features Deep Dive.
This session was given by Nitin Karkhanis, Director of Software Development managing the Data Guard Broker development team, and Mahesh Girkar, Senior Director of Software Development in Oracle's Database Division. His team is responsible for developing High Availability features for Data Guard and Active Data Guard.
It was really a very interesting session, divided into two parts: the new features in 18c and the new features planned for Oracle 19c.

Some Active Data Guard New Features for Oracle 18c

>Multi-Instance Redo Apply now supports block change tracking
>Data Guard and Database Nologging mode
>The database buffer cache will be preserved on an Active Data Guard standby during a role change
>Creating private temporary tables is supported in Active Data Guard (see the quick example below)
>Better protection against failed logins
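
As a small, hedged illustration of the private temporary table syntax introduced in 18c (the table name is made up and the default ORA$PTT_ name prefix is assumed):

CREATE PRIVATE TEMPORARY TABLE ora$ptt_mytemp (
  id    NUMBER,
  label VARCHAR2(30)
) ON COMMIT PRESERVE DEFINITION;

-- visible only to the creating session; according to the session this also works on an Active Data Guard standby
INSERT INTO ora$ptt_mytemp VALUES (1, 'test');
SELECT * FROM ora$ptt_mytemp;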

Some New Data Guard Broker Commands for Oracle 18c
dgmgrl > show all;
dgmgrl > set debug ON | OFF
dgmgrl > set echo ON | OFF
dgmgrl > set time ON | OFF
dgmgrl > validate  database boston spfile;
dgmgrl > validate  network configuration for boston;
Some New features for Data Guard in 19c

>Multi-Instance Redo Apply will work with the In-Memory Column Store
>Global temporary tables can now be created and dropped on an Active Data Guard standby
>Tunable automatic outage resolution: the parameters that control the wait time used to determine a hung process will now be documented
>DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. In former versions these parameters were hidden.
>In the future, if a flashback is done on the primary database, no action will be needed on the standby side. We just have to mount the standby and Oracle will do the rest.

Some Data Guard Broker Feature in 19c

TRACE_LEVEL replaces DEBUG

dgmgrl > set TRACE_LEVEL USER|SUPPORT 

New Commands to Set Database Parameters

dgmgrl>EDIT DATABASE SET PARAMETER parameter_name=value
dgmgrl>EDIT DATABASE RESET PARAMETER parameter_name=value

New command to export and import broker configuration file

dgmgrl> export configuration to ‘meta.xml’
dgmgrl> import configuration from ‘meta.xml’ 

Properties now pass through to Initialization Parameters
>DataGuardSyncLatency (DATA_GUARD_SYNC_LATENCY)
>StandbyFileManagement (STANDBY_FILE_MANAGEMENT)

Another session I attended was Best Practices for Maintaining Oracle RAC/Single Instance Database Environments, presented by Bill Burton (Consulting Member of Technical Staff, Oracle),
Scott Jesse (Senior Director, Customer Support, DB Scalability, Security, Networking, SSC, Oracle) and
Bryan Vongray (Senior Principal Technical Support Engineer, Oracle).
In fact, according to their statistics, more than 50% of opened SRs concern well-known issues. So in this session they presented some tools that can help with troubleshooting and monitoring RAC and single instance environments:
TFA
ORAchk and EXAchk
Through many illustrations, the speakers explained how to use the different tools to diagnose our environment.

The last session I attended was Inside the Head of a Database Hacker by Mark Fallon, Chief Security Architect, Oracle

The speaker tried, in simple words, to explain the motivations of hackers and the way they act.
This helps to protect any environment. He came to the conclusion that security needs to be a collaboration between:
>Database Security
>Application Security
>Network Security
>End-point Security
>Process
>Employee Education
>Physical Security
>Supply Chain Security

So we will come back tomorrow for another briefing of our day

This article Oracle OpenWorld 2018: Day 1 appeared first on Blog dbi services.
