Yann Neuhaus

dbi services technical blog

VirtualBox 5.2 exports the VM to the Oracle Cloud

The new release of Oracle VM VirtualBox (aka VirtualBox) is there, with a new functionality to export a VM to the Oracle Cloud Compute (aka Oracle Cloud Infrastructure). That can be interesting: prepare a VM on my laptop and move it to the Cloud to make it accessible from everywhere. Here’s my first try. In my opinion, it’s a good idea but it probably needs further evolution.

VirtualBox

Here is what is new: in addition to .ova you can export to an Oracle Public Cloud image:
CaptureVboxCloud000

This takes some time, as it compresses and writes all the disk images:

CaptureVboxCloud002

The result is a .tar.gz for each disk attached to my VM. It is actually the image of the disk (.img) that is tar-ed and then gzipped. My VM (called VM101) had two disks (VM101-disk1.vdi and VM101-disk2.vdi). The export generated: VM101.tar.gz (containing VM101-disk002.img which looks like my first disk) and VM101-disk003.tar.gz (VM101-disk003.img which looks like my second disk)

Here is the content:


$ tar -ztvf VM101.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 4294967296 2017-10-19 21:23 VM101-disk002.img
 
$ tar -ztvf VM101-disk003.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 27917287424 2017-10-19 21:25 VM101-disk003.img

The .img is the image of the disk, with the partition and boot sector.
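
If you are curious about what is inside before uploading, a quick way to check it (assuming the file names from my export above) is to extract the image and have a look at it:

$ tar -xzf VM101.tar.gz
$ file VM101-disk002.img        # should report a raw disk image with a DOS/MBR boot sector
$ fdisk -l VM101-disk002.img    # lists the partitions contained in the image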

Compute Cloud

In the Oracle Public Cloud I can import this image: Compute Classic -> Images -> Upload Image

CaptureVboxCloud003

I upload only the image of the first disk, which contains the root filesystem:

CaptureVboxCloud004

CaptureVboxCloud005

And then I create the compute instance with the ‘Associate Image’ button:

CaptureVboxCloud006

Now, I’m ready to create an instance for it: Instance -> Customize -> Private Images

CaptureVboxCloud010

Then, I can define the shape (OCPU and memory), upload my SSH public key, and add storage (I could add my second disk here) and create the instance.

Here I’ve started it:

CaptureVboxCloud008

Unfortunately, my VM still has the network interface defined for my VirtualBox environment and so I have no way to connect to it. I hope that this feature will evolve to also export the virtual network interfaces.

I have not seen any way to open a terminal on the console. The only thing I can do is take snapshots of it:

CaptureVboxCloud009

Ok, so there’s a problem way before the network interfaces. My VM from Oracle VM VirtualBox (aka VirtualBox) now starts on Oracle VM (aka OVM) and, besides the similar marketing name, they are different hypervisors (OVM running Xen). Probably a driver is missing to access the block devices, and maybe this is Bug 21244825.

That’s probably all my tests on this until the next version. It is currently not easy to have a VM that can be started on different hypervisors and network environments.

So what?

Nothing very special here. Moving a VM from one hypervisor to another is not an easy thing, but it is a good idea. And I hope that the integration into the Oracle Cloud will be easier in the future, with virtual disks and network interfaces. For the Oracle Cloud, it would be nice to have access to the console, but at least a screenshot may help to troubleshoot.

 

The post VirtualBox 5.2 exports the VM to the Oracle Cloud first appeared on Blog dbi services.

PostgreSQL Index Suggestion With Powa

Fri, 2017-10-20 09:21

A while ago my colleague Daniel wrote a blog post about POWA. In a nice article he showed how this tool can be used to monitor our PostgreSQL instances.
In this article I am going to show how this powerful tool can help by suggesting indexes which can optimize our queries.
I am using PostgreSQL 9.6.

[root@pgservertools extension]# yum install postgresql96-server.x86_64
[root@pgservertools extension]# yum install postgresql96-contrib.x86_64

And Then I initialize a cluster

[root@pgservertools extension]# /usr/pgsql-9.6/bin/postgresql96-setup initdb
Initializing database ... OK

POWA requires the following extensions:

  • pg_qualstats: gathers statistics on predicates found in WHERE statements and JOIN clauses
  • pg_stat_kcache: gathers statistics about real reads and writes done by the filesystem layer
  • hypopg: adds hypothetical indexes in PostgreSQL. This extension can be used to see if PostgreSQL would use an index or not
  • btree_gist: provides GiST index operator classes that implement B-tree equivalent behavior for various data types
  • powa-web: provides access to POWA via a web browser

Note that the following packages were installed to resolve some dependencies during the installation of these extensions.

yum install python-backports-ssl_match_hostname.noarch
rpm -ivh python-tornado-2.2.1-8.el7.noarch.rpm

And then the extensions are installed using yum:

yum install powa_96.x86_64 pg_qualstats96.x86_64 pg_stat_kcache96.x86_64 hypopg_96.x86_64 powa_96-web.x86_64

After the installation the postgresql.conf is modified to load the extensions

[root@pgservertools data]# grep shared_preload_libraries postgresql.conf | grep -v ^#
shared_preload_libraries = 'pg_stat_statements,powa,pg_stat_kcache,pg_qualstats' # (change requires restart)
[root@pgservertools data]#

And then restart PostgreSQL:

[root@pgservertools data]# systemctl restart postgresql-9.6.service

For POWA configuration, the first step is to create a user for powa

postgres=# CREATE ROLE powa SUPERUSER LOGIN PASSWORD 'root';
CREATE ROLE

and the repository database we will use.

postgres=# create database powa;
CREATE DATABASE

The extensions must be created in the repository database and in all databases we want to monitor

postgres=# \c powa
powa=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
powa=# CREATE EXTENSION btree_gist;
CREATE EXTENSION
powa=# CREATE EXTENSION powa;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_qualstats;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_stat_kcache;
CREATE EXTENSION
powa=# CREATE EXTENSION hypopg;
CREATE EXTENSION

We can verify that extensions are loaded in the database using

powa=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
powa=#

Now let’s create a database named mydb for our tests and let’s create all extensions inside the database.

[postgres@pgservertools ~]$ psql
psql (9.6.5)
Type "help" for help.
postgres=# create database mydb;
CREATE DATABASE
postgres=#

Let’s again verify the extensions in the database mydb:

mydb=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
mydb=#

In the mydb database we create a table mytab and insert some rows into it:

mydb=# \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
val | text |
mydb=# select count(*) from mytab;
count
-----------
100000000
(1 row)
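
The creation of this table is not shown here; a possible way to build a similar test set (column names and row count taken from the output above) is:

mydb=# create table mytab (id integer, val text);
mydb=# -- populating 100 million rows takes a while
mydb=# insert into mytab select i, 'line '||i from generate_series(1,100000000) i;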

The last step is to set up the powa-web configuration file. Below is our file:

[root@pgservertools etc]# pwd
/etc
[root@pgservertools etc]# cat powa-web.conf
servers={
'main': {
'host': 'localhost',
'port': '5432',
'database': 'powa',
'query': {'client_encoding': 'utf8'}
}
}
cookie_secret="secret"
[root@pgservertools etc]#

And then powa-web can be started with the following command:

[root@pgservertools etc]# powa-web &
[1] 5600
[root@pgservertools etc]# [I 171006 13:54:42 powa-web:12] Starting powa-web on http://0.0.0.0:8888

We can now log in at http://localhost:8888/ with the powa user we created:
powa1

And then we can choose mydb database to monitor it
powa2

Now let’s run some queries. As my load is very low, I set pg_qualstats.sample_rate = 1 in the postgresql.conf file (thanks to Julien Rouhaud for the tip):

[postgres@pgservertools data]$ grep pg_qualstats.sample_rate postgresql.conf
pg_qualstats.sample_rate = 1


mydb=# select * from mytab where id in (75,25,2014,589);
id | val
------+-----------
25 | line 25
75 | line 75
589 | line 589
2014 | line 2014
(4 rows)

Time: 9472.525 ms
mydb=#

Using the Index suggestions tab, we click on ‘Optimize the database’. We can see that an index creation is recommended, along with the potential gain.
powa3
powa4
powa5
We will just note that POWA uses the hypopg extension to check whether the index would be used or not. Let’s see how this extension works. Hypothetical indexes are useful to know if specific indexes can increase the performance of a query. They cost nothing to maintain as they do not physically exist.
Let’s create a virtual index in the mydb database:

mydb=# select * from hypopg_create_index('create index on mytab (id)');
indexrelid | indexname
------------+-----------------------
55799 | btree_mytab_id
(1 row)
mydb=#

We can verify the existence of the virtual index by

mydb=# SELECT * FROM hypopg_list_indexes();
indexrelid | indexname | nspname | relname | amname
------------+-----------------------+---------+---------+--------
55799 | btree_mytab_id | public | mytab | btree
(1 row)

Using explain, we can see that PostgreSQL will use the index.

mydb=# explain select * from mytab where id in (75,25,2014,589);
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan using btree_mytab_id on mytab (cost=0.07..20.34 rows=4 width=17)
Index Cond: (id = ANY ('{75,25,2014,589}'::integer[]))
(2 rows)

Just note that EXPLAIN ANALYZE will not use the virtual index, because it really executes the query and hypothetical indexes do not physically exist.
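
A minimal way to check this, reusing the same query (since EXPLAIN ANALYZE really executes the statement, only real indexes can be used and the plan should fall back to a sequential scan on mytab):

mydb=# explain analyze select * from mytab where id in (75,25,2014,589);
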
Conclusion
In this article we have seen how POWA can help to optimize our PostgreSQL databases.

References: https://pgxn.org/dist/hypopg/; http://powa.readthedocs.io/en/latest/

 

The post PostgreSQL Index Suggestion With Powa first appeared on Blog dbi services.

Managing Oracle Big Data Cloud – CE with REST API

Thu, 2017-10-19 10:29

In this blog post, we will see how to manage the Oracle Public Cloud Big Data Service Compute Edition with the REST API. Scheduling the start/stop/restart of a metered PaaS service in the Oracle cloud can be interesting for managing your cloud credits consumption.

We first need to consult the official documentation to understand how the API is composed: https://docs.oracle.com/en/cloud/paas/big-data-compute-cloud/csbdp/QuickStart.html

Use the following URL composition for accessing the REST endpoint:
https://region-prefix.oraclecloud.com/resource-path

According to Oracle documentation, the following information should be taken into account.

Connection Information:

  • Identity Domain: axxxxxx
  • REST URL: https://psm.europe.oraclecloud.com/
  • Username and password

Terminology:

  • {instanceName} = Name of the BDCS-CE service (= Cluster Name)
  • {identityDomainId} = “X-ID-TENANT-NAME: axxxxxx”
  • {function} = start, stop, restart
  • {allServiceHosts} = the entire cluster VMs (all instances which composed the cluster)
  • “Accept: <value>” = Media Type (default value = application/json)

Before starting an automation script to manage your Big Data cluster, execute single GET/POST commands to understand how the API is working.

GET request: View all Service BDCS-CE instances

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances

curl -i -X GET \
        -u "username:password" \
        -H "X-ID-TENANT-NAME: axxxxxx" \
        -H "Accept: application/json" \
        "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances"

Result:

HTTP/1.1 200 OK
Server: Oracle-Application-Server-11g
Strict-Transport-Security: max-age=31536000;includeSubDomains
Content-Language: en
...

{"services":{"cluster-iot":{"...

According to the HTTP status code, the command was successful.

GET request: View a specific Service BDCS-CE instances

Add the instance name to get the status of a specific cluster. Note that a BDCS-CE instance is your Big Data cluster.

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances/{instanceName}

curl -i -X GET \
        -u "username:password" \
        -H "X-ID-TENANT-NAME: axxxxxx" \
        -H "Accept: application/json" \
        "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances/cluster-iot"

Then use the same requests structure to start/stop/restart your Big Data cluster.

POST request: Start / Stop / Restart Service Instances BDCS-CE: cluster-iot

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances/{instanceName}/hosts/{function}

As it’s specified in the documentation, you need to change the media type to application/vnd.com.oracle.oracloud.provisioning.Service+json and use a body parameter to specify which hosts you want to manage. In our case, we want to manage all cluster hosts.

curl -i -X POST -u "username:password" \
-H "X-ID-TENANT-NAME: axxxxxx" \
-H "Content-Type: application/vnd.com.oracle.oracloud.provisioning.Service+json" \
-d '{"allServiceHosts":"true"}' "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances/cluster-iot/hosts/stop"

You can now start to develop an automation script to manage your Oracle Big Data Compute Edition cluster.

Python prerequisites:

Install Python-PIP before:

dbi@host:~/$ sudo apt-get install python-pip

Install Requests module with PIP:

dbi@host:~/$ sudo pip install requests

Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
__author__ = "Mehdi Bada"
__company__= dbi services sa
__version__ = "1.0"
__maintainer__ = "Mehdi Bada"
__email__ = ""
__status__ = "Dev"

"""

import os, sys, getopt
import requests
import simplejson
import json

# Variables definition.

identityDomainId="axxxxxx"
instanceName="cluster-iot"
server = "https://psm.europe.oraclecloud.com"
commands = ['start', 'stop', 'restart']


def usage():
    print "\nScript Usage \n"
    print "Usage:", sys.argv[0], "-c [start|stop|restart] | -h \n"

if len(sys.argv) < 3:
    usage()
    sys.exit(2)

try:
    opts, args = getopt.getopt(sys.argv[1:], "c:h", ["command=", "help"])
except getopt.GetoptError:
    usage()
    sys.exit(2)

for opt, arg in opts:
    if opt in ("-h", "--help"):
        usage()
        sys.exit()
    elif opt in ("-c", "--command"):
        # take the command from the option argument and validate it
        icommand = arg
        if icommand not in commands:
            usage()
            sys.exit(2)


url = server + "/paas/api/v1.1/instancemgmt/%s/services/BDCSCE/instances/%s/hosts/%s" % (identityDomainId,instanceName,icommand)

payload = "{\"allServiceHosts\":\"true\"}"

headers = {
    'x-id-tenant-name': "%s" %(identityDomainId),
    'accept': "application/vnd.com.oracle.oracloud.provisioning.Service+json",
    'content-type': "application/json",
    'authorization': " ",
    }

response = requests.request("POST", url, data=payload, headers=headers)

# Print the status code of the response.
print("\n")
print(response.status_code)

# Json Parsing
content=response.content
j = simplejson.loads(content)
print (j['details']['message'])

Usage:

dbi@host:~/$ ./bdcsce_start_stop_test.py -h

Script Usage

Usage: ./bdcsce_start_stop_test.py -c [start|stop|restart] | -h
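
Note that the script above leaves the authorization header as an empty placeholder. Depending on your setup, you will probably want to pass the same credentials as in the curl examples; a minimal sketch using the HTTP basic authentication support of the requests module (username and password are placeholders to adapt):

# same credentials as used with "curl -u username:password"
username = "username"
password = "password"

response = requests.request("POST", url, data=payload, headers=headers,
                            auth=(username, password))
print(response.status_code)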

 

The Oracle REST API is not very well documented, that is why multiple tests should be performed to understand how it works.

 

The post Managing Oracle Big Data Cloud – CE with REST API first appeared on Blog dbi services.

Documentum: IndexAgent uninstalled continues to queue requests

Wed, 2017-10-18 03:15

We had a strange behavior at a customer regarding the indexing queue. We used to have two Index Agents (IA) configured and we uninstalled one.
I figured out that we still had indexing queue requests for the old index agent even though it was completely uninstalled.

I checked the following objects to see if the agent was still configured somewhere: dm_fulltext_index, dm_ftengine_config, dm_ftindex_agent_config. But the old IA was not declared anymore.

The main problem is that it continued to queue all changes in the indexing queue and nothing cleaned it up, so we ended up with around 2 million requests, filling up the database table.

I finally found out where the old IA was declared: in the registry events.
select * from dmi_registry where user_name = 'dm_fulltext_index_user_01';

r_object_id          user_name                     registered_id        event
2601b86480001d03     dm_fulltext_index_user_01     0301b86480000104     dm_save
2601b86480001d04     dm_fulltext_index_user_01     0301b86480000104     dm_destroy
2601b86480001d05     dm_fulltext_index_user_01     0301b86480000105     dm_save
2601b86480001d06     dm_fulltext_index_user_01     0301b86480000105     dm_readonlysave
2601b86480001d07     dm_fulltext_index_user_01     0301b86480000105     dm_checkin
...

In order to unregister the events, use the following:
unregister,c,<registered_id>,<event>,<queue_name>

So for me:

unregister,c,0301b86480000104,dm_save,dm_fulltext_index_user_01
unregister,c,0301b86480000104,dm_destroy,dm_fulltext_index_user_01
unregister,c,0301b86480000105,dm_save,dm_fulltext_index_user_01
unregister,c,0301b86480000105,dm_readonlysave,dm_fulltext_index_user_01
...

If you want to check if the old IA still queues requests, you can use:
select distinct name from dmi_queue_item where name like 'dm_fulltext%';

If you see the old queue name, that means you still have the registered events.
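
Once the events are unregistered, the requests that were already queued for the old agent still have to be cleaned up. Assuming nothing consumes that queue anymore (to be verified on your repository first, ideally on a test environment), a possible DQL cleanup is:

delete dmi_queue_item objects where name = 'dm_fulltext_index_user_01';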

 

The post Documentum: IndexAgent uninstalled continues to queue requests first appeared on Blog dbi services.

SQL Server Management Studio New Features

Thu, 2017-10-12 09:24

Regularly, Microsoft SQL Server comes up with new features, in particular in SQL Server Management Studio (SSMS) 17.3.

In this post, I present a new feature: Import Flat File.

How does it work?

Open SSMS and right-click on a selected database:

Photo1

Choose Import Flat File

Photo2

 

Click Next

Photo3

 

Type the path of the file to import and the name of the new target table:

Photo4

This is an overview of the file content.

Click Next

Photo5

As you can see, a data type conversion is proposed. Click Next.

Photo6

 

Click Finish and the import task starts.

Photo7

The import task is complete.

Photo8

 

To verify, go to your database and execute a SELECT on the brand new table.

 

CONCLUSION:

This is a small tool that makes it easier to insert flat files quickly. The only drawback is that you cannot insert the data into an existing table: the tool always asks you to create a new table.
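
If the data finally has to end up in an existing table, a simple workaround is to let the wizard create its new table and then copy the rows over (table and column names below are just placeholders):

INSERT INTO dbo.existing_table (col1, col2)
SELECT col1, col2
FROM dbo.imported_table;    -- table created by the Import Flat File wizard

DROP TABLE dbo.imported_table;    -- optional cleanup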

 

The post SQL Server Management Studio New Features first appeared on Blog dbi services.

Oracle Database Multilingual Engine (MLE)

Wed, 2017-10-11 01:35

My ODC appreciation blog post was about Javascript in the database, running in the beta of the Oracle Database Multilingual Engine (MLE). Here I’ll detail my first test, which is a performance comparison between a package written in Javascript, running in the MLE, and one written and running in PL/SQL.

I’ve downloaded the 12GB .ova from OTN, installed the latest SQLcl, and I’m ready to load my first Javascript procedure. I want something simple that I can run a lot of times because I want to test my main concern when running code in a different engine: the context switch between the SQL engine and the procedural one.

My kid’s maths exercises were about GCD (greatest common divisor) this weekend, so I grabbed Euclid’s algorithm in Javascript. This algorithm was the first program I ever wrote, a long time ago, on a ZX-81, in BASIC. Now in Javascript it can use recursion. So here is my gcd.js file:

module.exports.gcd = function (a, b) {
  function gcd(a, b) {
    if (b == 0) {
      return a;
    } else {
      return gcd(b, a % b);
    }
  }
  return gcd(a, b);
}

We need strong typing to be able to load it as a stored procedure, so here is the TypeScript definition in gcd.d.ts

export function gcd(a:number, b:number ) : number;

I load it with the dbjs utility, which I run in verbose mode:

[oracle@dbml MLE]$ dbjs deploy -vv gcd.js -u demo -p demo -c //localhost:1521/DBML
deploy: command called /media/sf_share/MLE/gcd.js oracle
Oracle backend: starting transpiler
gcd: processed function
Oracle backend: opening connection to database
gcd.js: retrieving functions
dropModule: called with gcd.js
loadModule: called with gcd.js
BEGIN
EXECUTE IMMEDIATE 'CREATE PACKAGE GCD AS
FUNCTION GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER AS LANGUAGE JS LIBRARY "gcd.js" NAME "gcd" PARAMETERS("p0" DOUBLE, "p1" DOUBLE);
END GCD;';
END;
: generated PLSQL
+ gcd.js
└─┬ gcd
└── SCALAR FUNCTION GCD.GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER

As it is mentioned in the verbose log, the Javascript code is transpiled. My guess is that the Javascript is parsed by the Oracle Truffle framework and compiled by Oracle GraalVM. More info in the One VM to Rule Them All paper.

This has loaded the package, the library and an ‘undefined’ object of type 144 (this MLE is in beta so not all dictionary views have been updated):


SQL> select * from dba_objects where owner='DEMO';
 
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
----- ----------- -------------- --------- -------------- ----------- ------- ------------- --------- ------ --------- --------- --------- --------- ------------ ------- ----------- -----------------
DEMO GCD 93427 PACKAGE 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93426 LIBRARY 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93425 UNDEFINED 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 129 NONE N
 
 
SQL> select * from sys.obj$ where obj# in (select object_id from dba_objects where owner='DEMO');
 
OBJ# DATAOBJ# OWNER# NAME NAMESPACE SUBNAME TYPE# CTIME MTIME STIME STATUS REMOTEOWNER LINKNAME FLAGS OID$ SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 SIGNATURE SPARE7 SPARE8 SPARE9
---- -------- ------ ---- --------- ------- ----- ----- ----- ----- ------ ----------- -------- ----- ---- ------ ------ ------ ------ ------ ------ --------- ------ ------ ------
93427 284 GCD 1 9 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 51713CBD7509C7BDA23B4805C3E662DF 0 0 0
93426 284 gcd.js 1 22 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 8ABC0DDB16E96DC9586A7738071548F0 0 0 0
93425 284 gcd.js 129 144 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 0 0 0

MLE Javascript

So, I’ve executed the function multiple times for each one of 10 million rows:

SQL> select distinct gcd(rownum,rownum+1),gcd(rownum,rownum+2),gcd(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:17.64

The execution of 30 million calls (3 per row) took 17 seconds.

PL/SQL function

In order to compare, I’ve created the same in PL/SQL:

SQL> create or replace function gcd_pl(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd_pl.gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd_pl.gcd(a,b);
12 end;
13 /

Here is the execution:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:21.05

PL/SQL UDF function

In 12c we can declare a function with the pragma UDF so that it is optimized for calling from SQL

SQL> create or replace function gcd_pl_udf(a number, b number) return number as
2 pragma UDF;
3 function gcd(a number, b number) return number is
4 begin
5 if b = 0 then
6 return a;
7 else
8 return gcd_pl_udf.gcd(b,mod(a,b));
9 end if;
10 end;
11 begin
12 return gcd_pl_udf.gcd(a,b);
13 end;
14 /

Here is the execution:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:51.85

Native compilation

We can also improve the PL/SQL runtime by compiling it natively, rather than having it interpreted as p-code:

SQL> alter session set plsql_code_type=native;
Session altered.
 
SQL> alter function gcd_pl_udf compile;
Function altered.
 
SQL> alter function gcd_pl compile;
Function altered.

and here is the result:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:10.31
 
SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:45.54

Inline PL/SQL

Finally, similar to an UDF function, we can declare the function in the query, inlined in a WITH clause:


SQL> with function gcd_pl_in(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd(a,b);
12 end;
13 select distinct gcd_pl_in(rownum,rownum+1),gcd_pl_in(rownum,rownum+2),gcd_pl_in(rownum,rownum+3) from xmltable('1 to 10000000')
14 /

And here is the result:

Elapsed: 00:00:48.92

Elapsed time summary

Here is a recap of the elapsed time:
CaptureMLE

Elapsed: 00:00:17.64 for MLE Javascript
Elapsed: 00:00:45.54 for PL/SQL UDF function (native)
Elapsed: 00:00:48.92 for Inline PL/SQL
Elapsed: 00:00:51.85 for PL/SQL UDF function (interpreted)
Elapsed: 00:01:10.31 for PL/SQL function (native)
Elapsed: 00:01:21.05 for PL/SQL function (interpreted)

The top winner is Javascript!

Perfstat Flame Graph

My tests were deliberately doing something we should avoid for performance and scalability: calling a function for each row, because this involves a lot of time spent in switching the context between the SQL and the procedural engine. This is, however, good for code maintainability. This overhead is not easy to measure from the database. We can look at the call stack to see what happens when the process is evaluating the operand (evaopn2) and switches to PL/SQL (evapls), and what happens besides running the PL/SQL itself (pfrrun). I have recorded perf-stat for the cases above to display the Flame Graph of the call stack. When looking for more information I remembered that Frits Hoogland already did that, so I let you read Frits part1 and part2.
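
For reference, this kind of Flame Graph can typically be produced with perf and Brendan Gregg’s FlameGraph scripts (the process id and duration below are placeholders):

perf record -g -p <pid_of_the_oracle_session> -- sleep 30
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg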

You can download my Flame Graphs and here is a summary of .svg name and call stack from operand evaluation to PL/SQL run:

PL/SQL UDF function (native) perf-gcd_pl_UDF_native.svg evaopn2>evapls>peidxrex>penrun
Inline PL/SQL perf-gcd_pl_inline.svg evaopn2>evapls>kkxmss_speedy_stub>peidxrex>pfrrun>pfrrun_no_tool
PL/SQL UDF function (interpreted) perf-gcd_pl_UDF_interpreted.svg evaopn2>evapls>peidxexe>pfrrun>pfrrun_no_tool
PL/SQL function (native) perf-gcd_pl_native.svg evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>penrun
PL/SQL function (interpreted) perf-gcd_pl_interpreted.svg evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>pfrrun>pfrrun_no_tool

But more interesting is the Flame Graph for the JavaScript execution:
CaptureMLEFlame

My interpretation of this is limited, but I don’t see a stack of context switching functions before calling the MLE engine, which is probably the reason why it is fast. Besides the ‘unknown’ part, which is probably the run of the JavaScript itself (the libwalnut.so library has no symbols), we can see that most of the time is spent in converting SQL data types into JavaScript types at call time, and the opposite on return:

  • com.oracle.walnut.core.types.OraNumberUtil.doubleToNumber
  • com.oracle.walnut.core.types.OraNumberUtil.numberToDouble

This is the price to pay when running a different language, with different data types.

So what?

This MultiLingual Engine looks promising, both for functionality (choose the language to run in the database) and performance (same address space as the SQL engine, and context switching is minimal). Of course, this is only in beta. There may be more things to implement, with more overhead. For example, we can imagine that if it goes to production there will be some instrumentation to measure time and record it in the Time Model. It may also be optimized further. You can test it (download it from the MLE home page) and give feedback about it (on the MLE forum).

This post was about measuring performance when switching from SQL to PL/SQL. In the next post, I’ll look at callbacks when running SQL from MLE.

 

The post Oracle Database Multilingual Engine (MLE) first appeared on Blog dbi services.

ODC Appreciation Day : Javascript in the database

Mon, 2017-10-09 23:00

Tim Hall has launched the idea of posting small blogs today, from all the Oracle community, about an Oracle feature. I chose one feature that is only released in beta for the moment: the Multilingual Engine (MLE), which is able to run Javascript stored procedures in the database.

Why?

When I first heard about this idea, last year before OOW16, I didn’t understand it at all. But the good thing at Oracle Open World is that we can discuss with Oracle product managers, and with other Oracle DBAs or developers, rather than relying on rumors or wrong ideas. My perception of Javascript was narrowed to the language used client-side, in the browser, to manage the presentation layer. It is interpreted by the browser, has no type checking, and errors are not easy to understand. Clearly, the opposite of something that I want to run in my database, on my data. PL/SQL is obviously the best choice: compiled and run in the database, strong typing to avoid runtime errors, directly integrated with SQL for better performance, etc.

So that idea of JS in the database made me smile, but I was wrong. What I didn’t get is that Javascript is just a language, and running Javascript does not mean that it has to be interpreted like when it is running on a browser.

Multilingual Engine (MLE)

Actually, what Oracle is developing in its lab goes far beyond just running Javascript in the database. They are building an execution engine, like the PL/SQL or SQL execution engines, but this one is able to run programs written in different languages. They start with Javascript and TypeScript (and thus strong typing here), but this can be extended in the future (Python, and why not PL/SQL one day running there). The programs will be loaded into the database as stored procedures/functions/packages and compiled into an intermediate representation, like bytecode. This code is optimized to access data efficiently, like the PL/SQL engine.

Actually, I’ll show in a future post that this new engine can run faster than PL/SQL for some processing and that it looks like the context switching with the SQL engine is highly efficient.

Javascript

So, why would you write your stored procedures in Javascript? The first reason is that there are a lot of existing libraries available and you may not want to re-write one. For example, I remember, when working on an airline company application, that I had to write in PL/SQL the function to calculate the orthodromic distance (aka great circle distance). This is a very simple example. But if you can get the formula in Javascript, then why not compile from this rather than translate it into another language? Currently, you can find pretty much everything in Javascript or Python.
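
As an illustration (my quick sketch, not the code I had back then), the great circle distance fits in a few lines of Javascript with the haversine formula:

function greatCircleKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius in km
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
        + Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}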

The second reason is that your application may have to use the same function at different layers. For example, you can check that a credit card number is correctly formed in the presentation layer, in order to show quickly to the user whether it is correct or not. That may be Javascript in the browser. But the database should also verify it, in case the rows are inserted by a different application, or in case the number has been corrupted in between. That may be PL/SQL in the database. Then you have to maintain two libraries in two different languages, doing the same thing. Being able to run Javascript in the database lets us re-use exactly the same library in the client and in the database.

Finally, one reason why some enterprise architects do not want to write procedures in the database is that the language for that, PL/SQL, can only run on Oracle. If they can write their business logic in a language that can run everywhere, then there is no vendor lock-in anymore. They have the possibility to run on another RDBMS if needed, and still get the optimal performance of processing data in the database.

Public Beta

Currently, this is a lab project from Oracle in Zurich. They have released a public beta downloadable as a VM. Just go to the download page at http://www.oracle.com/technetwork/database/multilingual-engine/overview/index.html

CaptureMLE

And stay tuned to this blog to see a performance comparison with PL/SQL user-defined functions.

 

The post ODC Appreciation Day : Javascript in the database first appeared on Blog dbi services.

Autonomous Database

Sun, 2017-10-08 12:09

Larry Ellison has announced Oracle 18c to be the Autonomous Database, the no-human-labor, self-driven database. Here is a quick recap of what is behind the marketing words. My opinion only.

Autonomous

Since Oracle decided to be a public cloud PaaS provider, they announced ‘unmanaged’ and ‘managed’ services. The managed service for DBaaS is where you are not the Database Administrator. You have full administration rights to deploy and develop your application, but the system administration is done by Oracle: provisioning, patching, upgrade, system tuning, availability. The first managed service was announced last year at OOW16: the Oracle Exadata Express Cloud Service, for developers. And the second managed service is the Oracle Data Warehouse Cloud Service, for data warehousing. It is planned for December 2017 and will be based on a new version: Oracle 18c.
And let’s be clear, the ‘autonomous’ label came at the last minute, as the marketing message for Open World, to show that the managed service is fully automated, because Cloud is about automation.

So, is that only marketing? I don’t think so. There are real features behind it. And some of them exist for a long time, and had just to be enhanced further in 18c.

Availability

The features have been there for a long time for full automation. RAC protects from instance or server crashes and Data Guard protects from all other failures. Both are fully automated with no human intervention: service relocation in RAC, Fast-Start Failover in Data Guard. And both can fail over transparently with Application Continuity. Oracle also recently introduced Sharding, to link data availability to different datacenters. And this technique will also be used for RAC instance affinity in Oracle 18c.

Patch and Upgrade

You can apply patches in a rolling manner with RAC. The problem was OJVM, but this will also be rolling in 18c. You can do rolling upgrades with Data Guard. And 18c will have automated and faster upgrades at PDB plug-in. From a system point of view, all is fully automated. However, we will still need human intervention for testing, for planning it at the right time, and of course for troubleshooting when something goes wrong. The autonomous database is not incompatible with that. With logical replication (GoldenGate, transient logical standby) or versioning (Edition Based Redefinition, Workspace Manager) Oracle has the tools to automatically provide the environment to test the upgrade before it is opened in production.

We can also imagine that other features may help to avoid regression. For example, SQL Plan Management can prevent execution plan regressions on common use-cases, and let the (human) user accept evolved plans later. This can also be done automatically (but humans still have to define the capture). Of course, we all know the limits of the advisors and automatic implementation. But there are also some applications where it can be fine. This ‘autonomous’ Cloud Service is a possibility, not a universal solution.

Tuning

Again, we all know that the database cannot be optimized without the knowledge of the data and the design of the application. But Oracle has also a lot of features to automate some common practices. Automatic Data Optimization, Segment Advisor, SQL Access Advisor, Auto DOP, automatic choice of buffered or direct reads,… We have seen a lot of drawbacks with SQL Plan Directives, but that was mainly in OLTP. This new managed service is for DWH where dynamic sampling is not a bad idea.

The idea is to ‘load data and run’ and Oracle takes care of indexes, partitioning, etc. You create the tables and declare primary keys and foreign keys (RELY DISABLE NOVALIDATE – see a previous blog post about that, and the sketch below). Then I suppose that Oracle can guess which are the dimension tables and the fact tables. And then apply some common practices on that: partition on the date dimension (if there is only one – maybe it has to detect some load/query patterns), create bitmap indexes on all fact foreign keys. Online statistics gathering will be extended in 18c to incrementally maintain statistics on bulk inserts, and this may include histograms.
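
As a reminder, such declarative constraints look like this (a sketch with made-up table names): they document the model for the optimizer without being enforced or validated:

alter table dim_date   add constraint pk_dim_date   primary key (date_id) rely disable novalidate;
alter table fact_sales add constraint fk_sales_date foreign key (date_id)
  references dim_date (date_id) rely disable novalidate;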

I’m very skeptical on that point, because I’ve seen a lot of data warehouse databases where, even on big hardware, there is always a need for optimization. But my point of view may be biased. I’m a consultant, and then I see only the databases where people think they need human analysis and troubleshooting. There may be some non-critical data warehouse databases where nobody is doing any optimization, and then implementing some default optimizations may be sufficient to make them a bit more efficient. This autonomous elastic cloud service may be a good start for some projects, when it is difficult to plan the hardware and human resources that will be needed. But I’m quite sure that after a while, designing an efficient and scalable infrastructure and data model will still require our DBA skills and human intelligence.

Load

This service is provisioned as a PDB where what we can do is limited by the multitenant lockdown profiles. We can connect easily (with a credentials .zip) from SQL Developer, and we can load data from an object store using the DBMS_CLOUD package to define the credentials (Swift) and load text files. Once again, it is an evolution of existing features like external tables and preprocessors.

Scalability

This service is Elastic: it can scale the CPU resource up and down without stopping the service. Again this can use existing features: OVM for the host, and Resource Manager for the CDB and PDB.

So what?

Some will say that it is only marketing with nothing behind, and the same announcement that was made by each previous version. Others will say that it is really autonomous, self-driving, self-securing, self-repairing. This ‘self-driven’ idea is an analogy with Tesla (Elon Musk is a big friend of Larry Ellison), but for the moment, there’s still a human inside a Tesla. Autonomous is a vision for the future, not the present. The present is a new managed service, more online operations, and easier upgrades.

What I really like is the integration of existing features to serve a new marketing trend. Do you remember when the flashback features came out? Flashback query existed internally since the invention of rollback segments (and maybe even before, with the Before Image). It was exposed in 9i with a dbms package, and in 10g with the SQL ‘AS OF’ clause. All was already there to bring a feature that no other RDBMS is capable of. The existing features support the marketing message at the right time, and this marketing message encourages developing new features again, like new online operations. And those do not only benefit the managed cloud services, as they go into the main branch of Oracle Database.

The other thing I like is the idea to have a configuration dedicated to specific needs. In the OOW17 hands-on lab, there was even a "_cloud_service_type" parameter set to DWCS. Here this service is for DWH, and there will be one dedicated to OLTP mid-2018. This is something I would like to see in future versions. For example, there was a lot of discussion about 12.1 enabling adaptive statistics by default, and 12.2 disabling them. This kind of default setting could be set depending on the database type: more conservative for OLTP upgrades, more adaptive for new data warehouse projects.

 

The post Autonomous Database first appeared on Blog dbi services.

Changing the LOCALE in CentOS/RedHat 7

Fri, 2017-10-06 07:04

What I really don’t like is this: Someone wants me to work on a Linux system for whatever purpose and then I get messages like this:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission non accordée

or:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permiso denegado

It is not that I don’t like French or Spanish, but usually (well, not in that simple case) I am not able to translate that without help from others. So how can you change that to English (which should be the default anyway nowadays)? Let’s go:

A quick fix for getting the above messages in English would be:

postgres@pgbox:/home/postgres/ [pg960final] export LANG="en_EN.UTF8"
postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission denied

The good thing with this approach is that the setting is temporary and valid only for my current session. The default for that particular user and the system default will not be touched. But what if you want to make this persistent for this user? That’s easy as well:

postgres@pgbox:/home/postgres/ [pg960final] echo "LANG=\"en_EN.UTF8\"" >> ~/.bash_profile 
postgres@pgbox:/home/postgres/ [pg960final] echo "export LANG" >> ~/.bash_profile 

Once you have that, every new session will have it set. The system default is defined in /etc/locale.conf:

postgres@pgbox:/home/postgres/ [pg960final] cat /etc/locale.conf 
LANG="en_US.UTF-8"

So when you want to make it the persistent default for the whole system then change it there. Not a big deal, but good to know.
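
On CentOS/RedHat 7 the same system-wide change can also be done with systemd's localectl instead of editing the file directly:

postgres@pgbox:/home/postgres/ [pg960final] localectl status
postgres@pgbox:/home/postgres/ [pg960final] sudo localectl set-locale LANG=en_US.UTF-8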

 

The post Changing the LOCALE in CentOS/RedHat 7 first appeared on Blog dbi services.

And finally it is there…PostgreSQL 10

Thu, 2017-10-05 08:45

Selection_001

What are you waiting for?

 

The post And finally it is there…PostgreSQL 10 first appeared on Blog dbi services.

udev rules for ASM devices on RHEL 7

Thu, 2017-10-05 06:28

Preparing the storage devices for use with ASM is one of the first tasks you should do. When you do not want to use ASMLib, then udev is what you should use for this. As the order/naming of the devices in the device tree is not guaranteed (e.g. dm-1 is not guaranteed to be the same device after a reboot of the system), you need a way to fix this, and that is what udev can be used for.

The first step is to get the UUIDs of all the devices you want to use with ASM (dm-1 to dm-3, in this case). This assumes that multipathing is already set up:

[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-1 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018559bf68d7
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-2 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a81955000001b359bf6920
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-3 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018359bf68ce

Having this, the udev rules should look like this (of course the names do not need to be the same as below):

[root@xxx ~]$ cat /etc/udev/rules.d/30-oracle.rules 
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a81955000001b359bf6920", NAME="asm-crs",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018359bf68ce", NAME="asm-fra",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018559bf68d7", NAME="asm-data", OWNER="oracle", GROUP="asmadmin", MODE="0600"

Once the system has rebooted, the permissions on the devices should be fine and the naming persistent:

[root@xxx ~]$ ls -al /dev/mapper/
total 0
drwxr-xr-x  2 root root     280 Oct  4 08:50 .
drwxr-xr-x 20 root root    4180 Oct  4 08:50 ..
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-crs -> ../dm-2
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-data -> ../dm-1
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-fra -> ../dm-3
crw-------  1 root root 10, 236 Oct  4 08:50 control
lrwxrwxrwx  1 root root       7 Oct  4 12:51 disk00 -> ../dm-4
lrwxrwxrwx  1 root root       7 Oct  4 08:50 vg_root-lv_openafs -> ../dm-8
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_root -> ../dm-0
lrwxrwxrwx  1 root root       8 Oct  4 12:51 vg_root-lv_swap -> ../dm-10
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_tmp -> ../dm-7
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var -> ../dm-6
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log -> ../dm-5
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log_audit -> ../dm-9
[root@xxx ~]$ ls -la /dev/dm*
brw-rw---- 1 root   disk     253,  0 Oct  4 12:51 /dev/dm-0
brw------- 1 oracle asmadmin 253,  1 Oct  4 13:11 /dev/dm-1
brw-rw---- 1 root   disk     253, 10 Oct  4 12:51 /dev/dm-10
brw------- 1 oracle asmadmin 253,  2 Oct  4 13:11 /dev/dm-2
brw------- 1 oracle asmadmin 253,  3 Oct  4 13:11 /dev/dm-3
brw-rw---- 1 root   disk     253,  4 Oct  4 12:51 /dev/dm-4
brw-rw---- 1 root   disk     253,  5 Oct  4 12:51 /dev/dm-5
brw-rw---- 1 root   disk     253,  6 Oct  4 12:51 /dev/dm-6
brw-rw---- 1 root   disk     253,  7 Oct  4 12:51 /dev/dm-7
brw-rw---- 1 root   disk     253,  8 Oct  4 08:50 /dev/dm-8
brw-rw---- 1 root   disk     253,  9 Oct  4 12:51 /dev/dm-9
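
If you prefer not to reboot, the rules can usually be re-applied on a running system as well (to be tested carefully when the devices are already in use):

[root@xxx ~]$ udevadm control --reload-rules
[root@xxx ~]$ udevadm trigger --type=devices --action=change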

Hope this helps.

 

The post udev rules for ASM devices on RHEL 7 first appeared on Blog dbi services.

Going from SLES12 SP2 to SLES12 SP3, online

Wed, 2017-10-04 05:59

SLES 12 SP3 was released some time ago and, as we have customers running on it, I thought it might be a good idea to test the upgrade from SP2 to SP3. Actually it turned out this is quite easy and can be done online. The supported methods for doing this are either using YaST or Zypper directly. As I wanted to stay on the command line, I'll use Zypper for the scope of this post. Let's go…

As said above I’ll be upgrading from SLES 12 SP2:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP2"
VERSION_ID="12.2"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP2"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp2"

When you have another setup make sure that the upgrade path is supported, which you can check here.

My current kernel version is:

postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.74-92.38-default #1 SMP Tue Sep 12 19:43:46 UTC 2017 (545c055) x86_64 x86_64 x86_64 GNU/Linux

The first thing you should do is to apply the latest patches:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper patch
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Retrieving repository 'SLES12-SP2-Updates' metadata ...............................................................................................[done]
Building repository 'SLES12-SP2-Updates' cache ....................................................................................................[done]
Retrieving repository 'SLE-SDK12-SP2-Updates' metadata ............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Updates' cache .................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 4 NEW patches are going to be installed:
  SUSE-SLE-SERVER-12-SP2-2017-1597 SUSE-SLE-SERVER-12-SP2-2017-1602 SUSE-SLE-SERVER-12-SP2-2017-1606 SUSE-SLE-SERVER-12-SP2-2017-1622

The following 8 packages are going to be upgraded:
  autofs gtk3-data gtk3-lang gtk3-tools libgtk-3-0 typelib-1_0-Gtk-3_0 xinetd yast2-xml

8 packages to upgrade.
Overall download size: 6.4 MiB. Already cached: 0 B. After the operation, additional 12.0 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package autofs-5.0.9-28.3.5.x86_64                                                                       (1/8), 407.0 KiB (  2.0 MiB unpacked)
Retrieving delta: ./x86_64/autofs-5.0.9-27.2_28.3.5.x86_64.drpm, 160.5 KiB
Retrieving: autofs-5.0.9-27.2_28.3.5.x86_64.drpm ..................................................................................................[done]
Applying delta: ./autofs-5.0.9-27.2_28.3.5.x86_64.drpm ............................................................................................[done]
Retrieving package gtk3-data-3.20.10-17.3.20.noarch                                                                 (2/8), 162.7 KiB ( 10.8 KiB unpacked)
Retrieving: gtk3-data-3.20.10-17.3.20.noarch.rpm ..................................................................................................[done]
Retrieving package xinetd-2.3.15-8.8.1.x86_64                                                                       (3/8), 126.6 KiB (286.4 KiB unpacked)
Retrieving delta: ./x86_64/xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm, 17.8 KiB
Retrieving: xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm .................................................................................................[done]
Applying delta: ./xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm ...........................................................................................[done]
Retrieving package yast2-xml-3.1.2-2.3.1.x86_64                                                                     (4/8),  48.2 KiB (127.7 KiB unpacked)
Retrieving delta: ./x86_64/yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm, 10.3 KiB
Retrieving: yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ..........................................................................................[done]
Applying delta: ./yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ....................................................................................[done]
Retrieving package gtk3-tools-3.20.10-17.3.20.x86_64                                                                (5/8), 237.8 KiB (294.2 KiB unpacked)
Retrieving: gtk3-tools-3.20.10-17.3.20.x86_64.rpm .................................................................................................[done]
Retrieving package libgtk-3-0-3.20.10-17.3.20.x86_64                                                                (6/8),   2.5 MiB (  8.4 MiB unpacked)
Retrieving delta: ./x86_64/libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm, 175.6 KiB
Retrieving: libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm ...........................................................................................[done]
Applying delta: ./libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm .....................................................................................[done]
Retrieving package typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64                                                       (7/8), 335.2 KiB (879.3 KiB unpacked)
Retrieving delta: ./x86_64/typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm, 159.9 KiB
Retrieving: typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ..................................................................................[done]
Applying delta: ./typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ............................................................................[done]
Retrieving package gtk3-lang-3.20.10-17.3.20.noarch                                                                 (8/8),   2.6 MiB ( 18.8 MiB unpacked)
Retrieving delta: ./noarch/gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm, 178.6 KiB
Retrieving: gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ............................................................................................[done]
Applying delta: ./gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ......................................................................................[done]
Checking for file conflicts: ......................................................................................................................[done]
(1/8) Installing: autofs-5.0.9-28.3.5.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/autofs...


(2/8) Installing: gtk3-data-3.20.10-17.3.20.noarch ................................................................................................[done]
(3/8) Installing: xinetd-2.3.15-8.8.1.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/xinetd...


(4/8) Installing: yast2-xml-3.1.2-2.3.1.x86_64 ....................................................................................................[done]
(5/8) Installing: gtk3-tools-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(6/8) Installing: libgtk-3-0-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(7/8) Installing: typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64 ......................................................................................[done]
(8/8) Installing: gtk3-lang-3.20.10-17.3.20.noarch ................................................................................................[done]

Once all is patched make sure you have the zypper-migration-plugin installed on your system (which is already there in my case):

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper install zypper-migration-plugin
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
'zypper-migration-plugin' is already installed.
No update candidate for 'zypper-migration-plugin-0.10-12.4.noarch'. The highest available version is already installed.
Resolving package dependencies...

Nothing to do.
postgres@sles12sp2:/home/postgres/ [pg963] 

Ready to migrate to SP3:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper migration

Executing 'zypper  refresh'

Retrieving repository 'SLE-Module-Containers12-Pool' metadata .....................................................................................[done]
Building repository 'SLE-Module-Containers12-Pool' cache ..........................................................................................[done]
Repository 'SLE-Module-Containers12-Updates' is up to date.                                                                                              
Repository 'SLES12-SP2-12.2-0' is up to date.                                                                                                            
Retrieving repository 'SLES12-SP2-Pool' metadata ..................................................................................................[done]
Building repository 'SLES12-SP2-Pool' cache .......................................................................................................[done]
Repository 'SLES12-SP2-Updates' is up to date.                                                                                                           
Retrieving repository 'SLE-SDK12-SP2-Pool' metadata ...............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Pool' cache ....................................................................................................[done]
Repository 'SLE-SDK12-SP2-Updates' is up to date.                                                                                                        
All repositories have been refreshed.

Executing 'zypper  --no-refresh patch-check --updatestack-only'

Loading repository data...
Reading installed packages...

0 patches needed (0 security patches)

Available migrations:

    1 | SUSE Linux Enterprise Server 12 SP3 x86_64
        SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64
        Containers Module 12 x86_64 (already installed)
       

[num/q]: 1

Executing 'snapper create --type pre --cleanup-algorithm=number --print-number --userdata important=yes --description 'before online migration''

Upgrading product SUSE Linux Enterprise Server 12 SP3 x86_64.
Found obsolete repository SLES12-SP2-12.2-0
Disable obsolete repository SLES12-SP2-12.2-0 [y/n] (y): y
... disabling.
Upgrading product SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64.
Upgrading product Containers Module 12 x86_64.

Executing 'zypper --releasever 12.3 ref -f'

Warning: Enforced setting: $releasever=12.3
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Pool' metadata ..............................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Pool' cache ...................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Updates' metadata ...........................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Updates' cache ................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Pool' metadata ...........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Pool' cache ................................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Updates' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Updates' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Pool' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Pool' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Updates' metadata .....................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Updates' cache ..........................................................................................................[done]
All repositories have been refreshed.

Executing 'zypper --releasever 12.3  --no-refresh  dist-upgrade --no-allow-vendor-change '

Warning: Enforced setting: $releasever=12.3
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Loading repository data...
Reading installed packages...
Computing distribution upgrade...

The following 21 NEW packages are going to be installed:
  blog crash-kmp-default-7.1.8_k4.4.73_5-3.9 kernel-default-4.4.82-6.9.1 libefivar1 libfastjson4 libfreebl3 libibverbs libibverbs1 libmlx5-1 librados2 libsnapper4
  libsoftokn3 libtidyp-1_04-0 lttng-ust mozilla-nss mozilla-nss-certs patterns-sles-yast2 perl-HTML-Tidy python-talloc rdma-core sles-release-POOL

The following NEW pattern is going to be installed:
  yast2

The following 2 packages are going to be REMOVED:
  libsnapper3 pytalloc

The following 168 packages are going to be upgraded:
  aaa_base aaa_base-extras augeas augeas-lenses autoyast2 autoyast2-installation binutils cpupower crash cups-filters-ghostscript dbus-1 dbus-1-x11 dracut
  efibootmgr ethtool filesystem gdb grub2 grub2-i386-pc grub2-snapper-plugin grub2-systemd-sleep-plugin hwinfo ipmitool iscsiuio kdump kexec-tools kpartx
  libaugeas0 libblkid1 libcpupower0 libdbus-1-3 libdcerpc0 libdcerpc-binding0 libdrm2 libdrm_amdgpu1 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libfdisk1
  libgbm1 libgnutls28 libldb1 libLLVM libmount1 libndr0 libndr-krb5pac0 libndr-nbt0 libndr-standard0 libnetapi0 libp11-kit0 libparted0 libsamba-credentials0
  libsamba-errors0 libsamba-hostconfig0 libsamba-passdb0 libsamba-util0 libsamdb0 libsmartcols1 libsmbclient0 libsmbconf0 libsmbldap0 libsolv-tools libstorage7
  libstorage-ruby libtalloc2 libtasn1 libtasn1-6 libtdb1 libtevent0 libtevent-util0 libuuid1 libwbclient0 libx86emu1 libyui7 libyui-ncurses7 libyui-ncurses-pkg7
  linux-glibc-devel logrotate makedumpfile mcelog mdadm Mesa Mesa-libEGL1 Mesa-libGL1 Mesa-libglapi0 multipath-tools open-iscsi openslp openslp-server openssh
  openssh-helpers p11-kit p11-kit-tools parted patterns-sles-base patterns-sles-laptop patterns-sles-Minimal perl-Bootloader perl-solv postfix python-solv
  release-notes-sles rollback-helper rsyslog ruby2.1-rubygem-cfa ruby2.1-rubygem-cfa_grub2 samba-client samba-libs samba-winbind sle-sdk-release
  sle-sdk-release-POOL sles-release snapper snapper-zypp-plugin sudo SUSEConnect SuSEfirewall2 systemd-presets-branding-SLE sysvinit-tools util-linux
  util-linux-lang util-linux-systemd yast2 yast2-add-on yast2-bootloader yast2-ca-management yast2-core yast2-country yast2-country-data yast2-dhcp-server
  yast2-dns-server yast2-firewall yast2-ftp-server yast2-hardware-detection yast2-http-server yast2-installation yast2-iscsi-client yast2-kdump yast2-ldap
  yast2-mail yast2-migration yast2-network yast2-nis-client yast2-ntp-client yast2-packager yast2-pam yast2-perl-bindings yast2-pkg-bindings yast2-printer
  yast2-registration yast2-ruby-bindings yast2-samba-client yast2-schema yast2-services-manager yast2-slp yast2-smt yast2-snapper yast2-storage yast2-support
  yast2-theme-SLE yast2-trans-en_US yast2-update yast2-users yast2-vm yast2-ycp-ui-bindings zypper zypper-lifecycle-plugin zypper-log

The following 3 patterns are going to be upgraded:
  base laptop Minimal

The following 2 products are going to be upgraded:
  "SUSE Linux Enterprise Server 12 SP2" "SUSE Linux Enterprise Software Development Kit 12"

The following 19 packages are going to be downgraded:
  branding-SLE grub2-branding-SLE iptables kernel-firmware libapparmor1 libICE6 libiptc0 libjasper1 libldap-2_4-2 libxtables10 libzypp nfs-client
  nfs-kernel-server openldap2-client openldap2-devel plymouth-branding-SLE wallpaper-branding-SLE xtables-plugins yast2-auth-client

The following package is going to change architecture:
  yast2-network  x86_64 -> noarch


The following 5 packages are not supported by their vendor:
  libtidyp-1_04-0 openldap2-devel perl-HTML-Tidy sle-sdk-release sle-sdk-release-POOL

168 packages to upgrade, 19 to downgrade, 21 new, 2 to remove, 1 to change arch.
Overall download size: 171.1 MiB. Already cached: 0 B. After the operation, additional 184.6 MiB will be used.
Continue? [y/n/...? shows all options] (y): y

As I am fine with the summary I’ll proceed:

(Use arrows or pgUp/pgDown keys to scroll the text by lines or pages.)

In order to install 'SUSE Linux Enterprise Software Development Kit 12 SP3' (product), you must agree to terms of the following license agreement:

SUSE(R) Linux Enterprise Software Development Kit 12 Service Pack 3
 
SUSE End User License Agreement 

Of course you need to accept the license agreement, and then the migration starts. At the end you should see something like this:

    dracut: *** Creating image file '/boot/initrd-4.4.82-6.9-default' ***
    dracut: *** Creating initramfs image file '/boot/initrd-4.4.82-6.9-default' done ***

There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Executing 'snapper create --type post --pre-number 29 --cleanup-algorithm=number --print-number --userdata important=yes --description 'after online migration''

That’s all:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"

Of course the kernel is still the old one and the new one will become active after the next reboot:

postgres@sles12sp2:/home/postgres/ [pg963] sudo systemctl reboot
...
...
postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.82-6.9-default #1 SMP Tue Sep 19 06:38:15 UTC 2017 (d4a2d80) x86_64 x86_64 x86_64 GNU/Linux

Quite easy.

 

Cet article Going from SLES12 SP2 to SLES12 SP3, online est apparu en premier sur Blog dbi services.

ODA X7

Tue, 2017-10-03 14:58
ODA X6-2

So, the ODA X6 End of Life seems to be planned for October and, be careful, as far as I know, this also concerns the storage expansion.

The main feedback items on the X6-2 series were:

  • not enough cores when consolidating multiple databases in M, L and HA
  • not enough storage on HA when compared with X5-2
  • Standard Edition on HA would be great for cheaper RAC
  • The position of the ‘Large’ version of ODA Lite was not clear
ODA X7-2S

The Small version of the ODA Lite has the same number of cores as the X6-2: 10 cores in one socket, an Intel Xeon Silver 4114 processor (2.2GHz). Some more memory: 6x32GB instead of 4x32GB, so 192GB expandable to 384GB as before.

And the big new thing is that it can run virtualized (the X6-2 was Bare Metal only). Even if it is still called OVM, the hypervisor is KVM. OVM is accepted by Oracle for licensing per vCPU, but for the moment the database cannot run in a VM, so you need database licences for all the cores enabled on the ODA.

The database storage is two 6.4TB Flash NVMe drives, which means 12.8TB raw capacity, 4.8TB double-mirrored, 3.2TB triple-mirrored.

ODA X7-2 M

You can look at it in the 3D view

The larger version of ODA Lite has more cores: 36 cores instead of 20 before: two sockets with Intel Xeon Gold 6140 processor (2.3GHz). Some more memory: 12x32GB=384GB instead of 256GB. Expandable to 768GB as before.

As the Small one, it can be virtualized with KVM.

The database storage is two 6.4TB Flash NVMe drives, and you can add 6 additional ones to bring the storage to 51.2TB raw capacity.

Because of this new capacity of the ‘Medium’ one, and the possibility to add many more disks, there is no ‘Large’ version of the ODA X7-2.

ODA X7-2 HA

You can look at it in the 3D view

The HA version of the ODA has more cores: 72 cores instead of 40 before: two 18-core sockets with Intel Xeon Gold 6140 processors (2.3GHz) in each of the two nodes. Some more memory: 768GB instead of 256GB, expandable to 1.5TB as before.

The big new thing is that you are allowed to run Standard Edition, and thus RAC without any option. In that case, with SE2, only one socket per node is enabled.

The second important thing is that the storage, which was limited (12TB expandable to 24TB, or 48TB with a second shelf), now comes in two flavours:

  • High Performance: 3.2TB SSDs for 16TB (raw, which is 8TB in normal redundancy or 5.3TB in high redundancy), expandable to 64TB, or 128TB with an additional shelf
  • High Capacity: starts the same with 4 SSDs, but you add mechanical 10TB disks for an additional 150TB to 300TB

For the REDO, there are 4 additional 800GB disks (4 additional in the expansion shelf).

In the initial release of X7 the virtualization on HA still used OVM/Xen.

Common new features

You will be able to install 12cR2 (12.2.0.1).

You have the choice for the network between:

  • two 10 GbE ports (RJ45)
  • or two 10/25 GbE ports (SFP28)

OAKCLI will be replaced everywhere by ODACLI

CaptureODA-X7-2

 

Cet article ODA X7 est apparu en premier sur Blog dbi services.

SQL Server 2017 RTM is available: time to try!

Tue, 2017-10-03 09:47

Now SQL Server 2017 RTM  is available. :-D
The first step is to go to the Evaluation Center here

Install_00

After completing the registration with your contact information, you download the file “SQL Server 2017-SSEI-Eval.exe”.
Install_02
Run SQL Server 2017-SSEI-Eval.exe and now it begins!

A first screen gives me 3 choices to install SQL Server

  • Basic: easy and fast to download and install SQL Server 2017
  • Custom: Download the setup package and launch the SQL Installation Center
  • Download Media: choose between an ISO or a CAB

Install_03

Basic Installation

In this mode, you have 2 steps to follow:

    • Accept the license

Install_basic_01

 

    • Choose the destination folder

Install_basic_02

And now the installation begins…
Install_basic_03
At the end, you have a SQL Server installed with a default instance as you can see in this summary window.
Install_basic_04
If you click on the “Connect now” button, the interface opens a SQLCMD session:
Install_basic_05_connectnow
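For example, in that SQLCMD window you could run a quick sanity check like this (a minimal sketch of mine; the exact output depends on your build):

-- Basic check of the freshly installed default instance
SELECT @@SERVERNAME AS server_name, @@VERSION AS version;
GO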

The second button, “Customize”, runs the familiar SQL Server setup to install a SQL Server instance:
Install_basic_05_customize

The “Install SSMS” button returns you to the web page to download SQL Server Management Studio (SSMS):

Install_basic_05_ssms

Here is the direct link to SSMS.

At the end, you can see also the installation folder from your default instance…
Install_basic_06
Be very careful with this installation type: at the end, you have a default instance installed!

The last point to notice is that in C:\SQLServer2017Media you will find the full SQL Server 2017 setup.
Install_basic_07

Custom installation

This installation asks for the target location to download the installation files (the default is C:\SQLServer2017Media).

installation_custom

 

Remark: if, like me, you installed Basic before and then click on the Custom installation, it will detect that you already have the installation files (the download phase is skipped!).

After the download, it will run the “Classical” setup (SQL Server Installation Center) to install an instance

Install_custom02

Download Media

This type of installation gives you the language choice, the package choice (.iso or .cab) and the folder for the download file.
Install_04
Click Download and let’s go!
Install_05
After a few minutes, the file (an iso in my case) is there.
Install_06

This last installation type is the best for me because I can mount the iso file on VMs to install and test SQL Server 2017!

Now, it’s time to test this new version for my team and write a lot of blogs for you! 8-)

 

Cet article SQL Server 2017 RTM is available: time to try! est apparu en premier sur Blog dbi services.

Oracle 18c RU and RUR for pioneers and followers

Tue, 2017-10-03 08:44

In a previous post, I explained the announcement of the new release model, with Release Updates and Release Update Revisions replacing Proactive Bundle Patches and PSUs. Here is a description that will help you plan your patching activity.

Which DBA are you?

There are currently a lot of discussions about the different types of DBAs. When it comes to patching, the DBA type is important.

Some DBAs are very conservative: do not use new features, do not upgrade to ‘.1′ releases, because every change can break something. At most, maybe just apply the SPU for the security fixes. Some are more confident and apply the PSUs, with more fixes. The problem is that what is not fixed may be subject to a bug one day and need an urgent one-off patch.

And some others are very pro-active: they patch and upgrade the development environment as soon as there is a new Proactive Bundle Patch, so they always have the latest fixes.

Actually, this does not depend only on the personality of the DBA but also on the ability to test. If the production database is the only one with representative volume and activity, then any patch set can be dangerous. If, in addition to that, any intervention or decision takes 3 months, the problems encountered are even worse. But if you are in DevOps, where developers work on a production copy and integration tests run every night, covering most of the critical use cases, then you can be confident that once those databases have run a few days with the latest Proactive Bundle Patch, you can do the same in production. And if you hit the low-probability case of a regression, you can react immediately with workarounds because you are agile.

The SPUer, the PSUer and the PBPer

In the previous release model, still used for 11g and 12c up to 12.2, this context defines whether you are a SPUer (only security patches, with a very low risk of regression), a PSUer (the critical fixes), or a Proactive BPer (more fixes). Even the latter does not include optimizer fixes, and may then still require one-offs.

The .0, the .1 and the .2

With the new release model, you will have Releases (R) every year, like 18c, increasing the first number, such as 18.1.0 which should arrive in January. Then, during the support window, you will have Release Updates (RU) released quarterly to proactively bring the software to the latest fixes, increasing the second number, such as 18.2.0. But, because those may bring some regressions, you will have additional Release Update Revisions (RUR) which will fix the issues encountered in the Release Updates of the last 6 months, increasing the third number, such as 18.2.1 and 18.2.2.
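Once on 18c, a quick way to know which RU or RUR a database runs should be to look at the version banner. This is only a hedged sketch, as it assumes the BANNER_FULL column announced for the 18c dictionary, which I cannot verify before the release:

-- Assumption: 18c shows the full 3-number release (e.g. 18.3.0) in BANNER_FULL
select banner_full from v$version;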

Then, there will be 3 types of approaches

The pioneer will go to the latest software

The pioneer is confident in his ability to cover most of the critical use cases in his automated regression tests and is agile enough to quickly fix any issue in the non-covered cases. So, he will upgrade to release 18.1.0 in January 2018. He will apply the Release Updates as soon as they are there: 18.2.0 in April, 18.3.0 in July, 18.4.0 in October. In January 2019 he can apply the 18c release update 18.5.0 or upgrade to the 19c release 19.1.0.

The wise will be on the latest Release Updates (RU)

Because each release brings a bunch of new features, it may come with regressions of existing ones. So, the wise DBA will prefer to wait until the pioneers have encountered them and Oracle has fixed them in Release Updates. He can upgrade to 18.2.0 in April 2018, then apply the RUs as they are released: 18.3.0 in July, 18.4.0 in October, 18.5.0 in January 2019,… He can continue for a few years according to the support lifecycle (MOS 742060.1).

Of course, he doesn’t need to apply the RUs immediately and can choose to lag behind by a few of them, in order to be sure that more pioneers or eager wise ones have encountered more issues. But then, he will also lag on the security fixes, which is not good. And this is where Release Update Revisions come in.

The follower will be on the latest revision (RUR)

When you want to be up-to-date only on security fixes, with the minimal risk of regression, you will stay on the RUR branch, where the potential issues encountered in the RUs are fixed. This is the same philosophy as applying only the SPU in the previous release model, but without lagging for years on the other fixes. In the new release model, RURs will be provided only for the two previous RUs. This extends the lifetime of an RU by 6 months. The follower can apply the latest security fixes, but they will also come with other fixes. However, the risk is lower here because the other fixes included are only those that have been tested for 6 months by other users. This 6-month lag on pro-active fixes keeps the risk minimal without running old software. He will upgrade to 18.2.1 in July for a 3-month lag, or wait until October to upgrade to 18.3.2 for a 6-month lag.

Of course, you don’t have to upgrade to 18c immediately, nor to patch every 3 months. But when you want to patch, you have the choice to be on the latest software to get new features immediately, on the latest RU to get new fixes immediately, or on the latest RUR to let the others endure the early problems for a few months. Be aware that RUs contain a bit more than what the Proactive BP included. In an RU you may have some optimizer fixes, but those are disabled by default if there is a risk of plan change. You may also find some minor enhancements. You have the flexibility to switch between RU and RUR. And in the low-probability case that you need a one-off patch, it will be delivered quickly for the latest RU and RUR.

So what?

This new model helps to run software that is not too old, which is more and more mandatory, at least for security reasons. In the end, it is all about testing. Do you prefer to let others test the quarterly updates on their databases, in the hope that what they run is similar to what you run on your database? Then be a ‘.1′ or ‘.2′ RUR follower. Or improve and automate your own testing, and you can reduce the lag by being an up-to-date ‘.0′ RU patcher.

 

Cet article Oracle 18c RU and RUR for pioneers and followers est apparu en premier sur Blog dbi services.

SQL Server 2017 on RHEL first installation

Mon, 2017-10-02 04:24

At the beginning of last week, Microsoft announced the general availability of SQL Server 2017 for today, the second of October. At the same time, on a more marketing-oriented note, Microsoft announced a SQL Server on Red Hat Enterprise Linux offer; more information here.
It looked like the right time for me to do my first installation of SQL Server 2017 on Linux.

My first concern was to download an ISO of Red Hat Enterprise Linux. I found it here and mapped it in my new Hyper-V Virtual Machine. I created my Virtual Machine with small settings: just one vCPU, 4GB RAM and a 50GB disk.
To be honest, for a Windows guy, having to work in a Linux environment is a little bit stressful…
The first step was to install Linux.
I did it with the minimal possible configuration and let the installer automatically configure the needed mount points.
Once the installation was finished, the tricky part for me was to configure my two network cards: one for internal purposes, to interconnect with my other VMs, and a second one to connect to the Internet… what a challenge… after some questions to my Oracle colleagues and some tests, I finally managed to set up both cards ;-)

I started the installation with the YUM Red Hat package manager following the Microsoft documentation.
First error: my Red Hat Enterprise Linux system is not registered… In order to register and automatically subscribe, I ran this command with the userid and password I used to create an account before downloading the RHEL iso:

# subscription-manager register --username <username> --password <password> --auto-attach

After this registration, the first step was to download the Microsoft SQL Server Red Hat repository file into the /etc/yum.repos.d folder and to run a YUM update to install the latest available updates… more than 240 for me…
Once done, I was able to execute the following script to download and install the SQL Server packages:

SQL2017_1

SQL2017_2

During the installation of the packages, I was asked to run the SQL Server configuration setup to finalize the setup of my SQL Server. I did it and needed to accept the license agreement, choose an edition and give a password to the SA login:

SQL2017_3

SQL2017_4

It’s already done!
My SQL Server instance is installed and processes are running:

SQL2017_5

SQL2017_6

At this point it was already possible to connect to my instance via Management Studio installed in another Virtual Machine. I used the IP address of my Virtual Machine (“ip a” to get it on Linux) and my System Administrator user and password; the port is the default one, 1433:

SQL2017_10

SQL2017_11

To be able to query my instance directly from my Linux machine, I installed the SQL Server tools by downloading, as before, the Microsoft Red Hat repository configuration file (1 in the picture below) and then installing the tools from the previously downloaded packages (2 in my picture):

SQL2017_7

SQL2017_8

I was able after this installation to directly query my instance from my Linux with the following command:

SQL2017_9

This first step with Microsoft SQL Server 2017 on Red Hat Enterprise Linux was a good experience and also a good way to remember some Linux commands which I hadn’t used for a while…

Before concluding, a small tip if you want to change the port of your SQL Server instance.
As you know, by default SQL Server runs on port 1433. To change it on SQL Server 2017, I used mssql-conf, a configuration script installed with this new version of SQL Server. This script is used to set some parameters like default directories, collation, trace flags…
Here I set the port to 48100:

SQL2017_12

I needed to restart my SQL Server service, and afterwards I could run a sqlcmd command against my new port:

SQL2017_13
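To double-check from inside the instance that the session really comes in through the new port, a small query like this one can help (a sketch of mine; it should return 48100 here, or NULL for a shared memory connection):

-- TCP port used by the current connection
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;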

That’s all Folks!
It was quite fun to play around with the command line as I did during my studies many years ago ;-)

 

Cet article SQL Server 2017 on RHEL first installation est apparu en premier sur Blog dbi services.

SQL Server 2016: New Dynamic Management Views (DMVs)

Fri, 2017-09-29 08:32

In SQL Server 2016, you will discover a lot of new Dynamic Management Views (DMVs).
In this article, I will just give you a little overview of these views, which are very useful for us as DBAs.

SQL Server 2012 has 145 DMVs and SQL Server 2014 has 166 DMVs.
Now, SQL Server 2016 has 185 DMVs.

How to see it?

It is very easy to have a look using the sys.all_objects view:

SELECT * FROM sys.all_objects WHERE type = 'V' AND name LIKE 'dm_%' ORDER BY name ASC

DMV_SQL2016

From SQL Server 2012 to SQL Server 2014, we can notice that a lot of new DMVs comes with the In-Memory technology with the syntax “dm_xtp_xxxxxxxx” or “dm_db_xtp_xxxxxxxx”

In SQL Server 2016, a lot of new “dm_exec_xxxxxxxx” views are present.

All definitions for these views come from the Microsoft documentation or web site.

To begin, you will see 10 DMVs for the PolyBase technology, such as:

  • dm_exec_compute_node_status
  • dm_exec_dms_workers

A useful MSDN page summarizes all the DMVs for this new feature here.

Other dm_exec_xxx views are also quite useful, such as:

  • dm_exec_query_optimizer_memory_gateways
    • Returns the current status of resource semaphores used to throttle concurrent query optimization.
    • Microsoft Reference here
  • dm_exec_session_wait_stats
    • Returns information about all the waits encountered by threads that executed for each session
    • Microsoft Reference here
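As a small illustration of the last one, here is a minimal query of my own (not taken from the Microsoft documentation) showing the waits accumulated by the current session:

-- Waits of the current session since it connected, worst first
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID
ORDER BY wait_time_ms DESC;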

3 new DMVs for the Columnstore technology:

  • dm_column_store_object_pool
  • dm_db_column_store_row_group_operational_stats
    • Returns current row-level I/O, locking, and access method activity for compressed rowgroups in a columnstore index.
    • Microsoft Reference here
  • dm_db_column_store_row_group_physical_stats
    • Provides current rowgroup-level information about all of the columnstore indexes in the current database
    • Microsoft Reference here
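For example, here is a small sketch of a query on the last view (assuming the current database already contains columnstore indexes):

-- One row per rowgroup of every columnstore index in the current database
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id, row_group_id, state_desc, total_rows, deleted_rows
FROM sys.dm_db_column_store_row_group_physical_stats
ORDER BY table_name, row_group_id;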

2 new DMVs for Stretch Databases, in the database context and with “rda” (remote data archive) in their name:

  • dm_db_rda_migration_status
    • For the current database, list of state information of the remote data archive schema update task.
    • Microsoft Reference here

This list can change if a Service Pack is applied.
It is just here to give you a little reference about these useful views! 8-)

 

Cet article SQL Server 2016: New Dynamic Management Views (DMVs) est apparu en premier sur Blog dbi services.

“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

Fri, 2017-09-29 08:01

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it, but if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.
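For reference, setting such an underscore parameter typically looks like the sketch below. Only do this when Oracle Support or the SAP notes tell you to, and check whether a restart is required in your version:

-- underscore parameters must be double-quoted
alter system set "_suppress_identifiers_on_dupkey" = true scope=spfile;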

Let’s investigate this.

Insert – Exception – Update

So the idea is to first try an insert, rely on the unique constraint (primary key) to get an exception if the row exists, and in that case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this. And the failed insert cannot leave a lock on the row that was not inserted.

The second problem is that the SQL engine is optimized for transactions which commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc., and all of that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

Then the third problem, and probably the worst one, is that an exception is an error. And error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR report, on manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts have read 16 blocks on average for each attempt. That’s too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how Oracle retrieves the information for the error message, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you accumulate all the worst practices and run those inserts row by row, then you will see those colors:
CaptureBreakReset

The Orange is ‘Log File Sync’ because you generate more redo than necessary.
The Green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’ when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Where 73375 is the OBJECT_ID of the index where the unique constraint exception has been violated.

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is the only one using the CPU, and it reads fewer blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another idea: start with the update and, when no row is found, insert it. This is easy with SQL%ROWCOUNT.

begin
  -- try the update first; if no row was updated, the row does not exist yet
  update demo set dummy = 'x' where dummy = 'x';
  if sql%rowcount = 0 then insert into demo values ('x'); end if;
end;

This is more efficient, but still subject to a concurrent session inserting the row between your update and your insert. However, at least you manage the different scenarios with a condition on ROWCOUNT rather than with an exception, which is more scalable.

So what?

Always use the database in the expected way. Exceptions and errors are not meant for the normal path of a use case; exceptions should be unusual. The solution is to use the MERGE statement, which has been implemented exactly for this reason: do an upsert without the error management overhead, and with the statement isolation level which prevents errors in a multi-user environment.
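As an illustration, here is a minimal MERGE sketch on a hypothetical table T(K, V) with a unique key on K (not the single-column DEMO table above, where there would be nothing left to update when the key matches):

-- the unique key K decides between update and insert, in one statement
merge into t
using (select 1 as k, 'new value' as v from dual) src
on (t.k = src.k)
when matched then
  update set t.v = src.v
when not matched then
  insert (k, v) values (src.k, src.v);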

 

Cet article “_suppress_identifiers_on_dupkey” – the SAP workaround for bad design est apparu en premier sur Blog dbi services.

Oracle Big Data Cloud Service – Compute Edition

Fri, 2017-09-29 03:00

In this blog post, we will see how to create a Big Data cluster through the Oracle Cloud Services. If you want more details about the Oracle Big Data cloud services offering, you can refer to my previous blog Introduction to Oracle Big Data.

First, you need to create your trial account through the following link: https://cloud.oracle.com/tryit. Note that, when you create your trial account, all information (phone number, address, credit card…) must be from the same country. Otherwise, you will get an error message.

Then you will get an email from Oracle with your connection information. The 4 main pieces of connection information are:

During the first connection, you need to change your password and answer 3 secret questions.

You are now logged into the Oracle Cloud Services Dashboard. Select the “Big Data – Compute Edition” service to create your cluster.

BDCS-CE Dashboard

Click on “Service” and “Create Service”.

BDCS-CE Create Cluster

First, complete the service information (cluster name, description…) and click on “Next”.

BDCS-CE Cluster creation

Then, you enter the details of your Big Data cluster (configuration, credentials, storage…).

Cluster configuration:

Use the “full” deployment. It will provision a cluster with Spark, MapReduce, Zeppelin, Hive, Spark Thrift, Big Data File System.

Credentials:

Generate an SSH public key and insert it (see screenshot below). Update or keep the current administrative user / password, which is very important for the next operations.

Storage:

Oracle Public Cloud works with object storage containers, which means that a storage container can be used by all cloud services. For the Big Data service, you need to use an existing storage container or create one. The storage container name must follow a specific syntax:

https://<identity_domaine>.storage.oraclecloud.com/v1/Storage-<identity_domaine>/<container_name>

Example: https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage

You can find the complete configuration below.

BDCS-CE Configuration Overview

Confirm your cluster configuration and click on “Next”.

During the cluster deployment, you can take the time to read the documentation: https://docs.oracle.com/en/cloud/paas/big-data-cloud/index.html

Once your service has been deployed, you can open the Big Data Cluster Console to monitor your cluster and access it.

BDCS-CE Cluster Overview

 

BDCS-CE Cluster Console

OBDCS-CE Monitoring

You have now deployed a Big Data cluster composed of 3 nodes, based on the Hortonworks distribution, with the following tools:

  • HDFS = Hadoop Distributed FileSystem
  • YARN = Resources management for the cluster
  • Hive = Data Warehouse for managing large data sets using SQL
  • Spark= Data processing framework
  • Pig = High-level platform for creating programs that runs on Hadoop
  • ZooKeeper = Distributed coordination service for the cluster
  • Zeppelin = Data scientist workbench, web based.
  • Alluxio = Memory speed virtual distributed storage
  • Tez = Framework for YARN-based data processing applications in Hadoop

Your Oracle Big Data cluster, through Oracle Big Data Cloud Service – Compute Edition, is now ready to use.

Enjoy ;-)

 

Cet article Oracle Big Data Cloud Service – Compute Edition est apparu en premier sur Blog dbi services.

Introduction to Oracle Big Data Services

Fri, 2017-09-29 01:00

A few years ago, Oracle decided to move forward in the Big Data area, like its main competitors. The goal of this blog post is to explain how the Oracle Big Data offering is composed.

As the Oracle Big Data offering is continuously improving, I’m always open to your feedback :-)

The Oracle Big Data offering is split into 2 parts:

  • On-Premise
  • Public Cloud

Note: It’s important to know that the 2 main Big Data distributions on the market are Cloudera and Hortonworks. We will see later how Oracle positions itself with regard to these 2 distributions.

On-premise: Oracle Big Data Appliance:

The main product of the Oracle Big Data offering is the Oracle Big Data Appliance. The OBDA is an engineered system based on the Cloudera distribution. The Big Data Appliance offers you an easy-to-deploy solution, with Cloudera Manager for managing a Big Data cluster, including a complete ready-to-use Hadoop ecosystem.

Oracle Big Data Appliance starts with a “Starter” rack of 6 nodes for a storage capacity of 96TB. Below is the detailed configuration per node.

Oracle X6-2 server:

  • 2 × 22-Core Intel ® Xeon ® E5 Processors
  • 64GB Memory
  • 96TB disk space

Oracle Big Data Appliance is a combination of open source software and proprietary software from Oracle (e.g. Oracle Big Data SQL). Below is a high-level overview of the Big Data Appliance software.

Screen Shot 2017-09-27 at 08.25.45

Oracle Big Data Cloud Machine:

On the customer side, Oracle offers the Oracle Big Data Cloud Machine (BDCM). It is fully managed by Oracle as a PaaS (Platform as a Service) offering, based on the customer’s infrastructure and designed to provide the Big Data Cloud Service. The BDCM is a Big Data Appliance managed and operated by Oracle in the customer’s data center.

The Big Data Cloud Machine starts with a “Starter Pack” of 3 nodes. Below is the minimal configuration:

  • 3 nodes
  • 32 OCPU’s per node
  • 256GB RAM per node
  • 48TB disk space per node

Oracle Big Data Cloud Machine princing: https://cloud.oracle.com/en_US/big-data/cloudmachine/pricing

Oracle Public Cloud:

Oracle provides several deployment options and services for Big Data:

  • Oracle Big Data Cloud Services
  • Oracle Big Data Cloud Services – Compute Edition
  • Event Hub Cloud Services (Kafka as a Service)
  • Oracle Big Data SQL Cloud Service

Oracle public cloud services, including Big Data, are available with two payment methods: metered and non-metered.

  • Metered: You are charged on the actual usage of the service resources:
    • OCPU/hour
    • Environment/hour
    • Host/hour
    • For the storage : GB or TB/month
  • Non-metered: Monthly or annual subscription for a service, independent of the actual resource usage. Charging is performed monthly.

For more information you can refer to the following link:

https://blogs.oracle.com/pshuff/metered-vs-un-metered-vs-dedicated-services

Oracle Big Data Cloud Services:

OBDCS is a dedicated Big Data Appliance in the public cloud: an engineered system managed and pre-configured by Oracle. OBDCS is a large system from the start, with terabytes of storage.

The offering starts with a “Starter pack” of 3 nodes, including:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • SSH connection to cluster nodes
  • Cloudera’s Distribution including Apache Hadoop, Enterprise Data Hub Edition
  • Oracle Big Data Connectors
  • Oracle Copy to Hadoop
  • Oracle Big Data Spatial and Graph

The entry cost is very high, which is why this service is recommended for large and more mature business cases.

Pricing information: https://cloud.oracle.com/en_US/big-data/big-data/pricing

Oracle Big Data Cloud Services – Compute Edition:

OBDCS-CE provides you with a dedicated Hadoop cluster based on the Hortonworks distribution. The entry cost is lower than for Oracle Big Data Cloud Service, which is why this service is more suitable for smaller business use cases and proofs of concept.

OBDCS-CE offering details:

  • Platform as a Service
  • 2 payments methods: metered and non-metered
  • Apache Hadoop cluster based on Hortonworks distribution
  • Free choice of the number of nodes for the deployment – 3 nodes is the minimum for a High Availability cluster, recommended for production. You can actually have one-node clusters, but this is obviously not recommended.
  • Apache Zeppelin for Hive and Spark analytic
  • 3 access methods:
    • BDCS-CE console (GUI)
    • REST API
    • SSH

Pricing information: https://cloud.oracle.com/en_US/big-data-cloud/pricing

Summary:

  • On-Premise (customer side)
    • Engineered systems: Big Data Appliance (BDA), Big Data Cloud Machine (a BDA managed by Oracle)
    • PaaS: Oracle Cloud Machine (OCM) + BDCS – Compute Edition
  • Oracle Public Cloud
    • Engineered systems: Big Data Cloud Service (BDCS) – a BDA in the Oracle public cloud – Cloudera distribution
    • PaaS: Big Data Cloud Service – Compute Edition – Hortonworks distribution

More details about Oracle PaaS offering:

http://www.oracle.com/us/corporate/contracts/paas-iaas-public-cloud-2140609.pdf

I hope this blog will help you to better understand the Oracle Big Data offering and products.

 

Cet article Introduction to Oracle Big Data Services est apparu en premier sur Blog dbi services.

Pages