Yann Neuhaus

dbi services technical blog

MySQL 8.0.0 DMR – First impression

Thu, 2016-09-22 07:23

Last week (09/12/2016), the new MySQL 8.0.0 DMR (Development Milestone Release) was announced.
It is now available for download at dev.mysql.com.
I downloaded and installed it.
Following are my first impressions of some of the new features.
First of all, if you remember the size of earlier versions, the distribution is getting more and more gluttonous, see below
mysql-5.0.96-linux-x86_64-glibc23.tar.gz    121.2M
mysql-5.1.72-linux-x86_64-glibc23.tar.gz    127.8M
mysql-5.5.52-linux2.6-x86_64.tar.gz            177.2M
mysql-5.6.33-linux-glibc2.5-x86_64.tar.gz    299.7M
mysql-5.7.15-linux-glibc2.5-x86_64.tar.gz   611.8M
mysql-8.0.0-dmr-linux-glibc2.12-x86_64.tar.gz is around 1GB, and 3.6GB once uncompressed (Oops…)

At first sight, it seems that nothing has changed: same schemas and almost the same tables
– 2 new tables related to roles in the mysql schema: default_roles, role_edges
– 6 new tables in the information_schema and 6 also in the performance_schema

When you scan your data directory (datadir), you will find new “.SDI” files (Serialized Dictionary Information) for every schema except for the mysql schema
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] ll *.SDI
-rw-r-----. 1 mysql mysql 215 Sep 13 21:34 employees_4.SDI
-rw-r-----. 1 mysql mysql 225 Sep 13 21:37 performance_sche_7.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 sys_6.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 TBS_3.SDI

If you have a look in one of these files, you will see that they all provide data dictionary information in a serialized form (JSON formatted text)
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] cat employees_4.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Schema",
"dd_object": {
"name": "employees",
"default_collation_id": 8,
"created": 0,
"last_altered": 0
}
}

Old “.frm” files containing metadata have been removed, and the same goes for “.par” files (partition definition), “.trg” files (trigger parameters) and “.isl” files (InnoDB symbolic links)
When you create a new InnoDB table, the metadata is now stored in the InnoDB tablespace file (.ibd)
For all the other storage engines (Memory, CSV, Blackhole, etc.), we now get a new “.SDI” file as well, but a more complex one
mysql@MYSQL:/u01/mysqldata/mysqld9/Newfeatures/ [mysqld9] cat t3_617.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Table",
"dd_object": {
"name": "t3",
"mysql_version_id": 80000,
"created": 20160914075838,
"last_altered": 20160914075838,
"options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=0;stats_ auto_recalc=0;stats_sample_pages=0;",
"columns": [
{
"name": "c1",
"type": 29,
"is_nullable": true,
"is_zerofill": false,
"is_unsigned": false,
"is_auto_increment": false,
"is_virtual": false,
"hidden": false,
"ordinal_position": 1,
"char_length": 20,
"numeric_precision": 0,
"numeric_scale": 0,
"datetime_precision": 0,
"has_no_default": false,
"default_value_null": true,
"default_value": "",
"default_option": "",
"update_option": "",
"comment": "",
"generation_expression": "",
"generation_expression_utf8": "",
"options": "interval_count=0;",
"se_private_data": "",
"column_key": 1,
"column_type_utf8": "char(20)",
"elements": [],
"collation_id": 8
}
],
"schema_ref": "Newfeatures",
"hidden": false,
"se_private_id": 18446744073709551615,
"engine": "MEMORY",
"comment": "",
"se_private_data": "",
"row_format": 1,
"partition_type": 0,
"partition_expression": "",
"default_partitioning": 0,
"subpartition_type": 0,
"subpartition_expression": "",
"default_subpartitioning": 0,
"indexes": [],
"foreign_keys": [],
"partitions": [],
"collation_id": 8
}

It is now possible to create roles, something we had been waiting for for years. That means you can define a set of privileges as a role and assign it to one or more users
mysqld9-(root@localhost) [Newfeatures]>create role tstrole;
mysqld9-(root@localhost) [Newfeatures]>grant all on employees.* to tstrole;
mysqld9-(root@localhost) [Newfeatures]>select host, user from mysql.user;
+---------------+-------------+
| host | user |
+---------------+-------------+
| % | tstrole |

As you can see, roles are stored in the mysql.user table, which also means that you could grant a user to another user
mysqld9-(root@localhost) [Newfeatures]>show grants for tstrole;
+----------------------------------------------------------+
| Grants for tstrole@% |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO `tstrole`@`%` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `tstrole`@`%` |
+----------------------------------------------------------+
mysqld9-(root@localhost) [Newfeatures]>grant tstrole to 'sme'@'localhost';
mysqld9-(root@localhost) [Newfeatures]>show grants for 'sme'@'localhost' using tstrole;
+----------------------------------------------------------------------------------+
| Grants for sme@localhost |
+----------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `sme`@`localhost` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `sme`@`localhost` |
| GRANT `tstrole`@`%` TO `sme`@`localhost` |
+----------------------------------------------------------------------------------+

Now we connect as user “sme”
mysql -usme -p
mysql> use mysql
ERROR 1044 (42000): Access denied for user 'sme'@'localhost' to database 'mysql'

It seems to work: user “sme” only has access to the employees schema
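One point to keep in mind (my own addition, not part of the test above): a granted role is not necessarily active in the session, so if the privileges do not seem to apply you may have to activate it. A minimal sketch, assuming the role activation statements behave as documented for MySQL 8.0:

-- run as root: make tstrole active by default whenever sme logs in
SET DEFAULT ROLE tstrole TO 'sme'@'localhost';
-- or, inside the sme session, activate it manually and verify
SET ROLE tstrole;
SELECT CURRENT_ROLE();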

The data dictionary has been improved: almost all the system tables have been moved from MyISAM to the transactional InnoDB storage engine to increase reliability.
Data dictionary tables are invisible: they do not appear in the output of SHOW TABLES and cannot be accessed directly,
but in most cases there are corresponding INFORMATION_SCHEMA tables that can be queried instead.
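As a minimal illustration of that indirection (assuming the employees sample schema used above), table metadata can still be read through INFORMATION_SCHEMA even though the underlying dictionary tables are hidden:

-- the dictionary tables themselves are not visible, but this works as before
SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'employees';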

The new feature “Invisible Indexes” is really great.
You can now toggle the visibility of an index between VISIBLE and INVISIBLE.
We know that unused indexes hurt the performance of modifications (inserts, updates), and they also add work for the optimizer because they are taken into account during plan selection.
So let's first check for unused indexes
mysqld9-(root@localhost) [information_schema]>SELECT * FROM sys.schema_unused_indexes;
+------------------+-------------------+---------------+
| object_schema | object_name | index_name |
+------------------+-------------------+---------------+
| employees | departments | dept_name |
| employees | dept_manager | dept_no |
+------------------+-------------------+---------------+

then
mysqld9-(root@localhost) [employees]>alter table departments alter index dept_name INVISIBLE;

Check in the data dictionary
mysqld9-(root@localhost) [employees]>SELECT * FROM information_schema.statistics WHERE is_visible='NO'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: employees
TABLE_NAME: departments
NON_UNIQUE: 0
INDEX_SCHEMA: employees
INDEX_NAME: dept_name
SEQ_IN_INDEX: 1
COLUMN_NAME: dept_name
COLLATION: A
CARDINALITY: NULL
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
IS_VISIBLE: NO
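To switch the index back, or to test a query while an index stays invisible, something like the following should work (a sketch; the optimizer_switch flag is the one documented alongside this feature, I have not exhaustively tested it on the DMR):

alter table departments alter index dept_name VISIBLE;
-- or keep it invisible but let the current session still consider it
SET SESSION optimizer_switch = 'use_invisible_indexes=on';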

You can now make global variables persistent
select @@max_connections;
+--------------------------+
| @@max_connections |
+--------------------------+
| 151 |
+--------------------------+
SET GLOBAL max_connections=200;

If you restart your instance, such a setting is lost. So now, with
SET PERSIST max_connections=200;
the setting will survive an instance restart
select @@max_connections;
+--------------------------+
| @@max_connections |
+--------------------------+
| 200 |
+--------------------------+
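Behind the scenes the persisted value is written to a mysqld-auto.cnf file in the datadir and read again at startup. A small sketch to revert my change (assuming no dedicated reset syntax in this DMR, I simply persist the default again):

SET PERSIST max_connections=151;
select @@max_connections;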

Last but not least, be careful when using MySQL 8.0.0 in a multi-instance environment,
in my case 9 instances with different versions.
I found a bug in mysqld_multi when you want to restart your 8.0.0 instance.
First I corrected the still unfixed Bug #77946 (https://bugs.mysql.com/bug.php?id=77946)
in order to be able to stop my instance properly with mysqld_multi.
I stopped it and then tried several times to restart it.
No way:
2016-09-20T23:42:41.466540Z 1 [ERROR] InnoDB: Operating system error number 2 in a file operation.
2016-09-20T23:42:41.466562Z 1 [ERROR] InnoDB: The error means the system cannot find the path specified.
2016-09-20T23:42:41.466568Z 1 [ERROR] InnoDB: If you are installing InnoDB, remember that you must create directories yourself, InnoDB does not create them.
2016-09-20T23:42:41.466574Z 1 [ERROR] InnoDB: File ./ibdata1: 'open' returned OS error 71. Cannot continue operation
2016-09-20T23:42:41.466582Z 1 [ERROR] InnoDB: Cannot continue operation.

As a temporary workaround, you can either start it manually or use the mysqld_multi from a previous version
I reported this bug  to MySQL (https://bugs.mysql.com/83081)

 


Oracle Open World 2016 – Day 3

Wed, 2016-09-21 02:39

On day 3 I visited various sessions about In-Memory and its many new features in 12cR2. Yesterday I already listed some new features of In-Memory. In this post I want to explain those new features in more detail:

 

In-Memory Join Groups:

You specify the columns which are used to join tables:

CREATE INMEMORY JOIN GROUP V_name_jg (VEHICLES(NAME), SALES(NAME));

Those columns share the compression dictionary. My understanding of how this works is as follows: the joins occur on dictionary values rather than on data. E.g. a dictionary with the distinct values of car brands (column NAME in VEHICLES and SALES) may look as follows:

BMW 1
VW 2
Mercedes 3
Ford 4
Fiat 5
Dodge 6

The dictionary values are stored in the compression units instead of the real data:

1
1
2
3
3
3
4
5
5
6

Doing that for the 2 columns allows the joins to happen on the dictionary values rather than on the real values. Oracle talks about a speedup by a factor of 2-3 when using Join Groups.
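To make it concrete, here is the kind of query shape that should benefit; the AMOUNT column is purely my own addition for illustration, only the NAME join columns come from the join group above:

SELECT v.name, SUM(s.amount)
FROM   vehicles v
JOIN   sales    s ON s.name = v.name
GROUP  BY v.name;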

 

In-Memory expressions:

SQL expression results (NET in the example below) can now be stored as additional in-memory columns:

CREATE TABLE SALES (
PRICE NUMBER, TAX NUMBER, ...
NET AS (PRICE + PRICE * TAX )
)
INMEMORY;

All in-memory optimizations (e.g. vector processing or storage indexes) apply to the expression columns as well. When a SQL statement contains the expression, it can be taken from the column store without recomputing it:

SELECT SUM(NET) FROM SALES WHERE NET > 50;

Different types of expressions are possible:

  • Arithmetic expression
  • Logical expression (e.g. DECODE)
  • Type conversion (e.g. UPPER, TO_CHAR)
  • PL/SQL expressions

Two modes to define the expressions to populate (Manual and Auto) can be used:

Manual: Declare virtual columns for the desired expression (see example above). The parameter INMEMORY_VIRTUAL_COLUMNS defines if virtual columns are considered for in-memory:
INMEMORY_VIRTUAL_COLUMNS
= ENABLE: all user-defined virtual columns on the table or partition enabled for In-Memory will be populated
= MANUAL: user-defined virtual columns must be explicitly marked for INMEMORY (default)
= DISABLE: no user-defined virtual column will ever be populated

Auto: Oracle detects frequent expressions automatically. I.e. the optimizer stores “hot” expressions (based on frequency and cost) in the expression statistics store (ESS). With the procedures IME_CAPTURE and IME_POPULATE in the package DBMS_INMEMORY the expressions are captured and declared as hidden in-memory virtual columns on the appropriate table. Expressions in the ESS can be checked by querying ALL|DBA|USER_EXPRESSION_STATISTICS.
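A short sketch of both modes, using only the parameter and view names mentioned above (I have not verified the exact column list of the view on 12.2, so the query stays generic):

-- Manual mode: let user-defined virtual columns such as NET be populated
ALTER SYSTEM SET inmemory_virtual_columns = ENABLE;  -- may need SCOPE=SPFILE and a restart depending on the release
-- Auto mode: check which "hot" expressions ended up in the ESS
SELECT * FROM dba_expression_statistics WHERE table_name = 'SALES';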

 

In-Memory JSON

JSON columns and expressions on JSON columns can now be populated in-memory.

 

In-Memory on Active Data Guard

According to Oracle, In-Memory on Active Data Guard was the most wanted In-Memory feature from customers for 12cR2, i.e. the In-Memory column store can now be used on the Active Data Guard instance as well. The standby database can have different data (columns) stored in-memory than the primary. The decision on what columns to populate in-memory on the different instances is based on a service, i.e. on the primary the following DDL may be used:

ALTER TABLE SALES INMEMORY DISTRIBUTE FOR SERVICE A;
ALTER TABLE SHIPMENTS INMEMORY DISTRIBUTE FOR SERVICE B;

Table SALES will be populated In-Memory on that instance, which has service A enabled and SHIPMENTS will be populated on the instance, which has service B enabled.

 

In-Memory Columnar Flash

On Exadata the In-Memory format can be used in the Flash Cache, i.e. an In-Flash column store can be created. The advantage is that there is usually much more Flash than RAM, so more data can be cached for In-Memory scans on Flash. To do that, the CELLMEMORY segment attribute is available:

ALTER TABLE sales CELLMEMORY;
CREATE TABLE trades (...) CELLMEMORY MEMCOMPRESS FOR QUERY;

Restrictions in comparison to normal In-Memory:

  • The MEMCOMPRESS sub-clause only supports “FOR QUERY LOW” and “FOR CAPACITY LOW”.
  • There is no PRIORITY sub-clause.

 

Automatic Data Optimization (ADO) extended to In-Memory

The heatmap can now be used to define how long data should remain In-Memory or when In-Memory compression should be changed. I.e. in the maintenance window or on manual request (using the procedure DBMS_ILM.EXECUTE_ILM) Oracle checks if a policy has been met to e.g. flush data from the In-Memory column store.

ALTER TABLE sales ILM ADD POLICY ... NO INMEMORY AFTER 10 DAYS OF NO ACCESS;

That allows sliding windows of column store content based on access-time or time of creation. A customized policy function can also be created in PLSQL.
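Other policy directions should be possible as well, for example lowering the compression level instead of evicting the data. A sketch built from the ADO clauses above (my own example, not verified against the 12.2 syntax in every detail):

ALTER TABLE sales ILM ADD POLICY
  MODIFY INMEMORY MEMCOMPRESS FOR CAPACITY HIGH AFTER 30 DAYS OF NO MODIFICATION;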

 

In-Memory Fast-Start

The In-Memory column store content can now be checkpointed to Securefile Lobs. When the DB restarts the population is faster (between 2-5x) as the data can immediately be loaded in memory without CPU-intensive compression activity. To activate In-Memory Fast-Start the procedure DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE with a tablespace-name as parameter has to be executed. The tablespace should be 2 times the size of the in-memory column store.
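A minimal sketch of enabling it, assuming OMF so the datafile clause can stay short; FS_TBS is just a name I made up:

CREATE TABLESPACE fs_tbs DATAFILE SIZE 10G;
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS');
-- and to switch it off again
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_DISABLE;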

 


Ravello

Tue, 2016-09-20 18:46

I’m at Oracle Open World and I’m focused on the database rather than all those Cloud topics, but one product caught my attention at the Oracle ACED Briefing. Oracle bought Ravello Systems at the beginning of the year. What is Ravello? You have virtualized your applications on VMware and want to move them to the Cloud… without any changes.

Move your application

An application is not a single VM. It’s an ecosystem of VMs working together, linked by one or multiple network interfaces. Moving the virtual machines is easy, thanks to virtualization. But do you change the IP addresses everywhere? This is where you need another layer of virtualization.

Let’s see how it works. With Ravello you can import your VMs easily. It can connect to your vSphere or you can just import an OVA. Once they are imported into Ravello, you can start to build your application with a simple drag and drop of the VMs concerned.

The magic follows. Ravello parses the VMs’ metadata and data to discover the ecosystem. Here is the result after having imported a database, two WebLogic servers, and a Windows client:

[screenshot: the imported application canvas in Ravello]

Not only are the VMs there: there is also a network with switches, routers, DNS, DHCP, a fenced network private to my application, all virtualized.

And there’s more: all IP addresses in that private network are the same as on my on-premises VMware farm. That’s the point. Changing network interfaces, IP addresses, DNS, etc. is a nightmare, and you don’t have to do it.

This is only the design of the application. Once that application is defined, you can publish it to the Cloud: Oracle Public Cloud, Google one, or AWS.

And this is easy. You can choose whether resources are optimized for cost or for performance, and you even get an idea of the price:

[screenshots: the publish dialog with the cost/performance choice and the price estimate]

Time to run our application. As it runs in an isolated environment, you can even run multiple instances of it. But how do you connect to it?

First, you can access the console of each VM from Ravello through VNC.
For external access you create services that set up NAT rules and allocate public IP addresses.

Ravello is nested virtualization: two levels of hypervisors.

 


Oracle 12cR2 Optimizer Adaptive Statistics

Tue, 2016-09-20 10:13

When 12.1 came out, the major issue we encountered after migration was related to the new adaptive features of the optimizer: automatic reoptimization, SQL Plan Directives and the resulting dynamic sampling. Of course, Oracle product managers listen to feedback, make sure to provide workarounds or fixes, and make things better for the next release. Let's see what has been announced on this topic for 12.2.

Fixing vs. workarounds

Unfortunately, in most cases, when a problem is encountered people give it priority only until the issue appears to be “solved”, and then close the problem. However, for stability and reliability, this is not enough. There are two phases in problem resolution:

  1. Make broken things working again as soon as possible
  2. Ensure that the solution addresses the root cause and is in the same scope as the problem

If you stop after the first point, you don't have a solution. You have a workaround, and two things will happen sooner or later:

  • The problem will come back again
  • New problems will appear as side effects
12.1

So, when you upgrade to 12c from 11.2.0.4 for example, the easiest way to fix a regression is to set optimizer_features_enable='11.2.0.4'. But when you do that, you have only done the first step. Nothing is fixed. Actually, when doing that you didn't even finish your upgrade job.
I've already blogged about how to fix an adaptive statistics issue and keep the fix in the same scope as the problem, and there are many combinations that depend on your context.

One parameter does not fit all

It’s easy to disable all the new adaptive features and claim that the 12c optimizer is full of bugs. However, there are two things I'm sure of:

  • The developers of the optimizer know their stuff at least 100x better than I do
  • They want to bring nice features rather than trying to break things

And they do something very nice: each individual feature can be enabled or disabled by a parameter. So there are a lot of parameters. Some of them are undocumented just because, at release time, they don't think they should have a different value than the default, except in special situations guided by the support. But one set of default values cannot fit all environments. Are you doing OLTP or BI? OLTP likes stability, BI likes adaptive optimization. And probably your database has both OLTP and reporting workloads, maybe at the same time. This is the first reason why one set of parameters cannot fit all. There's another one you should think about before blaming the optimizer. Maybe they bring features that help to make good applications even better. Maybe the set of default values is not chosen to fit the worst application design…
Let’s come back to the OLTP vs. BI example. Adaptive features are enabled by default for BI. You may spend more time on parsing in order to get the optimal execution plan. But then you complain that your OLTP spends more time on parsing… But you are not supposed to parse on OLTP! The overhead of adaptive features should not be a blocking problem if you parse your queries once and then execute them.

I tend to blog on encountered issues rather than on things that do not raise any problem, because my job is to solve problems rather than to sit looking at what works well. I've encountered a lot of issues with those adaptive features. But I have also seen a lot of applications that had no problem at all when upgraded to 12c. When you disable the adaptive features, are you working around an optimizer problem, or an application design problem?

12.2

In 12.1 only optimizer_adaptive_features is documented, but it disables too many features. You may want to disable SQL Plan Directives and their consequences, but you probably want to keep adaptive plans as they are awesome and less prone to bad side effects. So in 12.2 this parameter has been split into two parameters: OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS.

In addition to that, only OPTIMIZER_ADAPTIVE_PLANS is set to true by default. OPTIMIZER_ADAPTIVE_STATISTICS is false so by default you will not have the following 12c features: SQL Plan Directives, Statistics feedback, Performance feedback, Adaptive dynamic sampling for parallel query.
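Spelled out, the 12.2 defaults correspond to the following settings; both parameters are documented and can be changed dynamically (session level should work too for testing):

ALTER SYSTEM SET optimizer_adaptive_plans = TRUE;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE;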

Here are the features enabled by OPTIMIZER_ADAPTIVE_PLANS:

optimizer_adaptive_plans                TRUE     FALSE
_optimizer_nlj_hj_adaptive_join         TRUE     FALSE
_px_adaptive_dist_method                CHOOSE   OFF
_optimizer_strans_adaptive_pruning      TRUE     FALSE

If you want more information about them, I’ve written articles about adaptive join, adaptive PX distribution and adaptive star transformation bitmap pruning

Here are the features enabled by OPTIMIZER_ADAPTIVE_STATISTICS:

optimizer_adaptive_statistics           FALSE    TRUE
_optimizer_dsdir_usage_control          0        126
_optimizer_use_feedback_for_join        FALSE    TRUE
_optimizer_ads_for_pq                   FALSE    TRUE

As you can see there is no “_optimizer_gather_feedback” here, so the cardinality feedback coming from 11g is still there when you disable adaptive statistics. You may like it or not, and you may want to disable cardinality feedback as well if you don't want plans to change.

What if you already have some SPDs? As “_optimizer_dsdir_usage_control” is 0 they will not be used. And they will be dropped automatically after 53 weeks of no usage.

 


DOAG 2016, Schulungstag: Oracle Grid Infrastructure

Tue, 2016-09-20 04:11

As in recent years, we will again be hosting a training day at the DOAG conference this year. This time everything revolves around Oracle Clusterware (“Infrastruktur & Middleware” –> “Oracle Grid Infrastructure”): we start with the requirements for the network, storage systems, operating system and kernel parameters, because a cluster solution only works reliably if the foundation is right. As always with us, there will be plenty of live demos, because showing is always better than just telling. We then move on to the architecture, installation, configuration and operation of the solution. Of course we also look in detail at troubleshooting and analysis. By the end of the day it should be clear to everyone what exactly Oracle Clusterware is, what it can and should be used for, and what exactly to pay attention to. Oracle itself already uses the Clusterware in almost all Engineered Systems anyway, so it is better to know right away what matters.
If you still feel that you need more hands-on practice, feel free to have a look at our workshop on the same topic.

We are already looking forward to the many visitors at DOAG 2016.

 


Oracle Open World 2016 – Day 2

Tue, 2016-09-20 03:10

Day 2 is the first official day of Oracle Open World (OOW) 2016. As a parallel event, the Oak Table Network organized its Oak Table World (OTW) close to the Open World. I attended several sessions from OOW and OTW:

Jeremy Schneider: Ansible

Ansible is a very lightweight piece of software written in Python, which allows automation in deploying applications and managing systems. The main goal of Ansible's developer, Michael DeHaan, was to make the process of deploying software and managing systems much simpler than it is with Puppet or Chef. Jeremy Schneider showed a demo of how easy it is to manage many systems with such an easy and still very effective tool, which uses ssh to connect to the servers and execute commands. It was impressive to see, and I feel I have to check with my customer running on Red Hat whether Ansible could be an alternative to Puppet.

Frits Hoogland: Drilldown into logwriter mechanics

What happens when a user executes a “COMMIT;”? Yes, we know that the log writer is triggered to write the data to disk. But what happens in detail? By using the gdb debugger on Linux and setting breakpoints in various modules, Frits could show in a very technical session what exactly happens when the foreground process runs in polling mode (i.e. instead of having the log writer notify the foreground process that the data is persistent on disk, the foreground process polls SCN information to detect that the data is on disk). See here.

Wei Hu, John Kanagaraj: High Availability and Sharding Deep Dive with Next-Generation Oracle Database

What is Sharding? According to Oracle there is a need for world-scale applications to have a farm of independent databases, i.e. instead of 1 giant DB, the DB is partitioned into many smaller DBs (shards) running on different hosts. In contrast to RAC the data is not shared, i.e. every machine has its own disks (shared-nothing hardware). So each shard stores a subset of the data. For performance reasons, read-mostly data (e.g. a lookup table) is replicated between the shards. The applications have to be designed in a way that workload can be routed to a specific database in the farm. There is a choice of sharding method: system managed (consistent hash), composite (range-hash, list-hash) and user defined (range, list). Based on a sharding key (in the example below the CustId) the data is distributed among the shards:


CREATE SHARDED TABLE
...
PARTITION BY CONSISTENT HASH (CustId)
...;

In case a new shard is added, the data needs to be redistributed. That move of data is initiated automatically or manually (by the DBA). When it’s automated the system uses RMAN Incremental Backups and Transportable Tablespaces to redistribute the data.

Enhancements with Active Data Guard:

Improved Data Protection:

  • High speed DB compare (whole DB or individual data files) between Primary and Standby. Validates that primary and standby is physically the same, by doing a compare at the physical level.
  • RMAN validation and repair of blocks on standby that were invalidated by non-logged operations (e.g. direct path loads) on the primary.
  • Improved automatic block repair.

Active Data Guard enhancements:

  • During switchover/failover the read-only sessions remain connected and become read/write after the Active Data Guard becomes primary. That avoids a reconnect storm after the switchover/failover.
  • In-Memory DB now possible on Active Data Guard. The columns defined for In-Memory can be different on Primary and Standby.
  • Support of Diagnostic Pack (AWR), Tuning Pack and SQL Plan Analyzer on the Standby System.
  • Password file on standby is automatically updated.
  • Standby-first encryption: you can encrypt tablespaces on the standby first, switch over, and then encrypt on the old primary, so that encryption can be enabled without much downtime.

 

Nigel Bayliss: Upgrading to Oracle 12c without Pain.

Nigel talked about new 12.2 optimizer features. In 12cR1 the optimizer adaptive features (adaptive plans and adaptive statistics with SQL Plan Directives) were controlled by 1 parameter: OPTIMIZER_ADAPTIVE_FEATURES. In 12cR2 adaptive plans and adaptive statistics have their own parameters: OPTIMIZER_ADAPTIVE_PLANS (default TRUE) and OPTIMIZER_ADAPTIVE_STATISTICS (default FALSE). I.e. in 12cR2 adaptive plans are enabled by default. SQL Plan Directives will still be created, but are not used by default. So the new default settings are closer to the 11g behavior, as SQL Plan Directives are not used.
For migrations to 12cR2 Nigel provided specific cook books to follow to avoid plan changes initially (use SQL Plan Baselines to fix e.g. 11g plans) and then evolve to 12c.
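The general idea behind such a cook book can be sketched like this (my own illustration of the approach, not Nigel's exact steps):

-- right after the upgrade, keep the 11g optimizer behaviour
ALTER SYSTEM SET optimizer_features_enable='11.2.0.4';
-- capture the current plans as SQL Plan Baselines while the workload runs
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines=TRUE;
-- ... run the representative workload ...
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines=FALSE;
-- then move to the 12.2 optimizer; the accepted baselines protect the plans
ALTER SYSTEM SET optimizer_features_enable='12.2.0.1';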

Juan Loaiza: Oracle Database In-Memory: What’s New and What’s Coming

New features are:

  • Faster In-Memory joins: Join Groups can be defined. They specify columns which are frequently used to join tables. The columns share the compression dictionary. The join occurs on dictionary values rather than data: CREATE INMEMORY JOIN GROUP V_name_jg (VEHICLES(NAME), SALES(NAME)); That should bring up to 2.3 times faster join processing.
  • Precomputed expressions (e.g. Price + Price * Tax) can be cached In-Memory as user defined virtual columns. I.e. using such an expression in the query can gain from In-Memory data access.
  • In-Memory on Active Data Guard (see above).
  • In-Memory columnar format can be used on Exadata Flash Cache.
  • Data lifecycle management of In-Memory data, i.e. policies can be defined to evict cold data from the In-Memory column store.
  • On engineered systems the IM column data can be persisted on storage. After a restart the data can be loaded faster in memory.
 


Oracle 12cR2 Long Identifiers

Tue, 2016-09-20 01:14

This morning, during Gerald Venzl's presentation of “What's New for Developers in the Next Generation of Oracle Database” at Oracle Open World, one feature was acclaimed by a full room: 12.2 marks the end of identifiers limited to 30 characters.

12.1

We knew it would happen because in 12.1 all data dictionary views already show 128-byte character strings:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> desc dba_objects
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)

but that’s only the dictionary metadata. Impossible to reach that limit:

SQL> create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual;
create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual
*
ERROR at line 1:
ORA-00972: identifier is too long

It is only annoying because the default column format does not fit on the screen:

SQL> select owner,object_name from dba_objects where object_type='TABLE';
 
OWNER
------------------------------------------------------------------------------------------------------------------------
OBJECT_NAME
------------------------------------------------------------------------------------------------------------------------

12.2

In 12.2 you can create longer identifiers:

SQL> create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual;
Table created.
SQL> alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number;
Table altered.

But be careful, the limit is in bytes – not in characters. If we have multibyte characters, the limit can be reached earlier:

SQL> alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add X€XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number;
alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add X€XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number
  *
ERROR at line 1:
ORA-00972: identifier is too long

So what?

The goal is not to abuse it and use the longest names possible. But allowing more than 30 characters can make it easier to match table names with Java class names, for example.

Oracle 7

I wanted to show that this 30-character limit is very old, so I ran the 7.3.3 Oracle version that lies on my laptop and displayed the same describe of DBA_OBJECTS:
[screenshot: describe of DBA_OBJECTS on Oracle 7.3.3]
Identifiers were 30 characters there. But look at the object names: those 128 bytes have been there for more than 20 years!

 


Oracle Open World 2016 – Day 1

Tue, 2016-09-20 00:29

Oracle Open World 2016 started on Sunday. The first day is always reserved for presentations provided by members of the various Oracle User Groups. However, the last part of the day is the first keynote from Larry Ellison. But let's start with some of the sessions I visited on Sunday:

Ludovico Caldara: Migrating to Oracle Databases 12c: 300 databases in 300 days.

Ludovico provided a presentation about the experience he gained during the migration project of 300 databases to 12c: issues they faced and things to consider when migrating to 12c.
The main recommendations to avoid performance issues after the migration to 12c were related to SQL Plan Directives. I do not want to go into too much detail on the topic, but one issue Ludovico's customer faced in terms of SQL Plan Directives was related to high CPU utilization at specific, fixed times when a system job called the function DBMS_FEATURE_AWR. For details on this subject see here.
Other things to consider are the Oracle recommended patches from MOS Note 2034610.1 (Things to Consider to Avoid Poor Performance or Wrong Results on 12.1.0.2).

Franck Pachot: Single-Tenant Oracle Database 12c: Multitenant Features for All Editions

In his excellent session Franck presented some good reasons to go to single-tenant databases instead of running the deprecated non-container databases (i.e. the only way DBs could be run up to 11gR2, with 1 DB per instance). Among other things, the possibility to transport a database to another machine by un-plug and plug-in and the 12.2 security features were mentioned (use of the new lockdown profile, which can e.g. disable database options for a pluggable database).
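For reference, the un-plug/plug-in transport mentioned above boils down to something like this (a sketch with hypothetical names and paths):

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
-- on the target CDB, once the datafiles are visible there
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;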

Mauro Pagano: SQLd360: SQL Tuning Diagnostics Made Easy

Being very excited about the free software SQLd360 he developed (see here), Mauro enthusiastically presented the history of tuning single SQL statements through the various phases of available tools and database features:

  • 1998, when not much data was available about the history of a SQL-statement
  • 2007, when SQLTXPLAIN became very popular
  • today with the availability of SQL_MONITOR and Active Session History

The bottom line is that SQLd360 should be the tool to use today to tune a single SQL statement, because of various advantages compared to SQLT(XPLAIN) (I took the following list from here):

  • not necessary to install something on the database
  • SQLd360 is 100% free software, while SQLT requires a My Oracle Support (MOS) account, and some consultants and third parties may struggle to obtain a valid login for MOS or would have to use their client’s credentials.
  • SQLd360 makes use of newer technologies including Google Charts, presenting the metadata in a very “Wow!” way, while SQLT focuses on HTML tables.
  • SQLd360 is 13X smaller in terms of lines of code, and it pretty much does the same job as SQLT
  • SQLd360 is much faster to execute than SQLT.

Sunday Keynote by Larry Ellison:

Remark: I do not talk about the Cloud here (almost all of the Keynote was news about Oracle’s Cloud), as I want to concentrate on what’s coming on the DB-side.

First info on Oracle DB 12c Release 2:

  • Multitenant
    Agility with on-line clones and on-line relocate
    4095 PDBs possible (255 in 12.1)
  • Sharding: A shared nothing architecture where databases on different nodes form a logical database.
    It provides elastic scalability with native sharding for global-scale applications.
  • In-Memory
    Column-Store on Active Dataguard
    Increased performance (up to 60x faster than 12.1)
 


Oracle Database 12.2 – PDBaaS

Mon, 2016-09-19 10:27

It’s official: Larry Ellison made the announcement at the first keynote and the database product team at Oracle has released the version and the documentation publicly. Oracle Database Exadata Express Cloud Service is the ‘Cloud First’ environment for 12.2.

Documentation

Documentation is there: Cloud > Platform > Data Management > Exadata Express
The 12.2 new features available in that ‘Cloud First’ are documented here

Cloud First

We knew it, 12.2 comes ‘Cloud First’, which means that you cannot download it but you can use it on a Cloud Service. This is in my opinion a very good idea. We will not upgrade all our databases to 12.2, so it's better to test it first, and cloud services are good for that.
However, the way it is released is quite limited:

  • There is no free trial. You have to pay for the minimum service to test it (175$/month)
  • Features are very limited because the service is a PDB, not a full database
PDBaaS

This Oracle Database Exadata Express Cloud Service is a fully managed service, which means that you are not the database administrator. Oracle manages the system, creates and administers the database. You are a user.
Actually, when you create a service, a Pluggable Database is provisioned for you and you access only this PDB. In addition to that, for security reasons, all features that may interact with the other PDBs or the system are locked down. For example, you cannot use Data Pump because it writes files on the server. All limitations are documented here.
If you wonder how those limitations are implemented, it's a new 12.2 multitenant feature called lockdown profiles, plus resource manager, which can isolate PDB memory. I presented that yesterday at Oracle Open World and there is more information about it in a new book to come.
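To give an idea of what such a lockdown looks like, here is a minimal sketch (the profile name is made up and only the statements I am sure about are shown; the real Exadata Express profile is of course much more restrictive):

CREATE LOCKDOWN PROFILE pdbaas_profile;
ALTER LOCKDOWN PROFILE pdbaas_profile DISABLE STATEMENT = ('ALTER SYSTEM');
ALTER SYSTEM SET pdb_lockdown = 'PDBAAS_PROFILE';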

Options

Features are limited but you have most of the options available: In-Memory, Data Mining, Advanced Compression and Hybrid Columnar Compression, Data Redaction, etc. And it's an Exadata behind, so you have SmartScan.

You can think of it as the ‘Schema as a Service’, but with a PDB instead of a schema.

You access it only through SQL*Net (encrypted) and can move data to and from it using SQL Developer.

Shapes

When you see ‘Exadata’, ‘In-Memory’, and all those options, you probably think about a service for very big databases and high CPU resources. But it is not. This service is for evaluation of 12.2, testing, development, and training on very small databases (a few hundred GB). And only one OCPU (which is an Intel core with two threads). It's hard to imagine more than one user on this. Maximum memory being 5GB, it's also hard to imagine In-Memory here.

So the goal is clearly to test features, not to run workloads. You can go live with it only if your production is not critical at all (the database is backed up daily).

Express

The ‘Express’ part is the simplicity. Prices are easy to calculate:

  • 175$/month for 20GB of storage and one OCPU. This is the ‘X20’ service.
  • Next level is the ‘X50’ service at 750$/month, so x2.5 times the storage for x4.2 times the price. Still one OCPU.
  • Highest level is ‘X50IM’ at 950$/month, which is the same but with larger memory.
Non-Metered

It is a non-metered service: whether you use it or not you pay per month. But don’t think you can do whatever you want within that month as the transfer of data is limited. You can transfer the volume of the database only a few times per month.

So what?

The utilization is simple: you don’t need a DBA. This is the main point: automation and fast provisioning.
Developers will love that. Giving them full options is a good marketing idea. Once the application is designed to use In-Memory, Compression, etc., these options will be required for production as well.

Today, developers need more agility and are often slowed down by operations. And that's a major reason why they go to other products that they can install and use themselves easily: Postgres, Cassandra, MongoDB, etc. Oracle Database is too fat for that: look at the time you need to create a database, catalog, catproc, etc. A first answer was the Oracle XE edition, which is easy to install anywhere. Now with this Express Cloud Service, Oracle gives the possibility to provision a small database in minutes which requires no further administration.
Actually, this is the whole idea behind the multitenant architecture: consolidate all those system objects created by catalog/catproc into a common location (CDB$ROOT) and have light PDBs with only user data.

Final remark: currently 12.2 is available only on that service, but there is no doubt that a full 12.2 will come within the next months.

 


SQL Server 2016: Dynamic Data Masking and database role

Mon, 2016-09-19 10:19

Last week, dbi services organized an event named “SQL Server 2016: what's new?” in Lausanne, Basel and Zurich. I would like to take the opportunity to say again a big thank you to everyone who joined us.
During my session some questions concerning the new Dynamic Data Masking functionality were asked. In fact, data is masked for some roles and not for others.
Let's try to clarify that.

I will use the same script I used during the event to initialize the database, create the table and fill the table with some rows:

-- Create database
USE MASTER
GO
CREATE DATABASE DDM_TEST
GO

-- Create the Confidential table
USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential](
[ID] [int] NULL,
[Name] [nvarchar](70)NULL,
[CreditCard] [varchar](16)NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60)NULL
)ON [PRIMARY]

-- Insert some rows
INSERT INTO [dbo].[Confidential] VALUES ('1','Stephane','3546748598467584',113459,'sts@dbi-services.com')
INSERT INTO [dbo].[Confidential] VALUES ('2','David','3546746598450989',143576,'dab@dbi-services.com')
INSERT INTO [dbo].[Confidential] VALUES ('3','Nathan','3890098321457893',118900,'nac@dbi-services.com')
INSERT INTO [dbo].[Confidential] VALUES ('4','Olivier','3564890234785612',98000,'olt@dbi-services.com')
INSERT INTO [dbo].[Confidential] VALUES ('5','Alain','9897436900989342',85900,'ala@dbi-services.com')
INSERT INTO [dbo].[Confidential] VALUES ('6','Fabrice','9083234689021340',102345,'fad@dbi-services.com')

I will create some masks for confidential columns with the following script:

-- create data masking for confidential columns with masking functions
USE DDM_TEST
GO

ALTER Table Confidential
ALTER COLUMN Salary ADD MASKED WITH (FUNCTION = 'random(1, 150000)')
ALTER Table Confidential
ALTER COLUMN CreditCard ADD MASKED WITH (FUNCTION='partial(1,"XXXX",2)')
ALTER Table Confidential
ALTER COLUMN Email ADD MASKED WITH (FUNCTION='email()')

I create now a user and add him to db_datareader role:

-- Create a user named TestDemo and add this user to the db_datareader role
USE DDM_TEST
go
CREATE USER TestDemo WITHOUT LOGIN
USE [DDM_TEST]
GO
ALTER ROLE [db_datareader] ADD MEMBER [TestDemo]
GO

Let’s try to visualize the Confidential table with my user:

[screenshot: SELECT executed as TestDemo, showing masked CreditCard, Salary and Email values]

Members of the db_datareader role are able to query the Confidential table, but they cannot see the masked columns in clear text.
Let’s try to add my user to the db_datawriter role and rerun my query:

[screenshot: the data is still masked for the db_datawriter member]

Same result, and it's normal as db_datawriter does not change read permissions; but the user is now able to insert data into the Confidential table, like this for example:

[screenshot: INSERT executed as TestDemo succeeds]

But it cannot view in clear text the data it has inserted.
One way to visualize data without masks is to be a member of the db_owner role:

[screenshot: the db_owner member sees the data unmasked]

The conclusion is clear: in this role-based setup, a user has to be a member of the db_owner role in the database containing the table in order to view the masked columns in clear text.
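As a side note (not shown during the event), SQL Server 2016 also exposes a dedicated UNMASK permission at the database level, so granting it is another way to let a specific user read the data in clear text; something like this should do it:

USE DDM_TEST
GO
GRANT UNMASK TO TestDemo;
-- and to mask the data again for this user
REVOKE UNMASK FROM TestDemo;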
Happy masking ;-)

 


Oracle 12cR2 SQL new feature: LISTAGG overflow

Mon, 2016-09-19 02:00

LISTAGG was a great feature introduced in 11g: it puts rows onto one line with a simple aggregate function. 12cR2 adds an overflow clause to it.

What happens when you have so many rows that the LISTAGG result is too long?

SQL> select listagg(rownum,',')within group(order by rownum) from xmltable('1 to 10000');
select listagg(rownum,',')within group(order by rownum) from xmltable('1 to 10000')
*
ERROR at line 1:
ORA-01489: result of string concatenation is too long

An error at runtime, and we don’t like runtime errors.

If you want to manage the overflow yourself, it's not easy: you have to run a first query that sums the lengths and then calculate how much can fit

SQL> select v.*,4000-size_current from (
2 select n,
3 sum(length(n||',')) over(order by n rows between unbounded preceding and current row)-1 size_current,
4 sum(length(n||',')) over(order by n rows between unbounded preceding and 1 following)-1 size_next
5 from (select rownum n from xmltable('1 to 10000'))
6 ) v
7 where size_current between 4000-50 and 4000;
 
N SIZE_CURRENT SIZE_NEXT 4000-SIZE_CURRENT
---------- ------------ ---------- -----------------
1012 3952 3957 48
1013 3957 3962 43
1014 3962 3967 38
1015 3967 3972 33
1016 3972 3977 28
1017 3977 3982 23
1018 3982 3987 18
1019 3987 3992 13
1020 3992 3997 8
1021 3997 4002 3
 
9 rows selected.

Here you can see that values above 1020 will not fit in a VARCHAR2(4000).

In 12.2 you can manage the overflow in two ways

You can choose to raise a runtime error:

SQL> select listagg(rownum, ',' on overflow error)within group(order by rownum) from xmltable('1 to 10000');
select listagg(rownum, ',' on overflow error)within group(order by rownum) from xmltable('1 to 10000')
*
ERROR at line 1:
ORA-01489: result of string concatenation is too long

But you can also choose to truncate the result:

SQL> select listagg(rownum, ',' on overflow truncate '' without count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE''WITHOUTCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,1021,

You may want to add some characters to show that it has been truncated:


SQL> select listagg(rownum, ',' on overflow truncate '...' without count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'...'WITHOUTCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,...

And you may even show the number of values that are not displayed:


SQL> select listagg(rownum, ',' on overflow truncate '...' with count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,...(8985)

The nice thing is that the truncation is adapted to the information displayed:


SQL> select listagg(rownum, ',' on overflow truncate 'blah blah blah...' with count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'BLAHBLAHBLAH...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,blah blah blah...(8988)

The “,blah blah blah…()” takes 20 characters and the count may take up to 24 characters, so the truncated value cannot be larger than 4000-20-24=3956 characters. From the first query we ran we can see that we have to truncate after the value “1012”. There’s no dynamic evaluation of the count size.

If all the values fit, then nothing is truncated. In the first query we saw that the values up to 1021 take 3997 characters:


SQL> select listagg(rownum, ',' on overflow truncate 'blah blah blah...' with count)within group(order by rownum) from xmltable('1 to 1021');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'BLAHBLAHBLAH...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,1021

In summary

After the delimiter string you can add:

  • ON OVERFLOW ERROR which is the default. Same behavior as in previous releases.
  • ON OVERFLOW TRUNCATE with a string that is added in case of truncation (default: ‘…’) and optionally WITH COUNT (or WITHOUT COUNT which is the default)

The full syntax is in the documentation
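
To put the two variants side by side, here is a minimal sketch reusing the same ROWNUM/XMLTABLE generator as above (the first statement shows the default behavior, which raises ORA-01489 once the 4000 character limit is exceeded):

-- default: fails with ORA-01489 "result of string concatenation is too long"
select listagg(rownum, ',' on overflow error) within group (order by rownum) from xmltable('1 to 10000');
-- silent truncation with a custom filler and the count of omitted values
select listagg(rownum, ',' on overflow truncate 'blah blah blah...' with count) within group (order by rownum) from xmltable('1 to 10000');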

 

The article Oracle 12cR2 SQL new feature: LISTAGG overflow appeared first on Blog dbi services.

Les 24 heures du PASS (24HOP) – édition francophone

Mon, 2016-09-19 01:46


I have not blogged about this yet, but it is time to do so. This year the first French-speaking edition of the 24 Hours of PASS takes place on September 20 and 21, 2016. The idea is rather simple: offer a series of 24 free webinars, from 10 am to 10 pm (French time), over 2 days. It is an opportunity to receive and exchange the latest information about database administration and development, the new trends in Business Intelligence and the Cloud.

A French-speaking event also means speakers from the French-speaking community with content in French. For my part, it will be a pleasure to talk with you about 2 topics: tempdb and best practices, as well as columnstore indexes and their role in the new BI architecture trends, together with Thoi Dung, TSP at Microsoft Switzerland.

It is also the occasion to thank the sponsors and Isabelle, without whom this kind of event would certainly not take place.

No reason not to register!

Looking forward to talking with you during this event!

 

The article Les 24 heures du PASS (24HOP) – édition francophone appeared first on Blog dbi services.

Le programme Oracle ACE

Sat, 2016-09-17 19:55

Having just spent the last two days at Oracle headquarters for the ACE Director briefing, I am taking the opportunity to explain what the Oracle ACE program is, because there are few resources about it in French.

It is not always obvious, but Oracle listens to its users. Oracle encourages the specialists of the user community to share and communicate about its products. The goal is not simply commercial advertising, but to exchange and make the technology known. The ACE program is managed by the OTN (Oracle Technology Network).

The ACE Associate, ACE and ACE Director titles are a recognition of the work done by specialists through different information channels: blogs, articles, presentations, organization of user groups, writing technical books, etc.

But it is also a way for Oracle to listen to end users outside of the sales network. ACEs have privileged contact with the Product Managers to test the products, sometimes in beta, to open and follow up on the bugs they encounter, and to give feedback on how the features are used.

The ACE Directors meet at headquarters just before Oracle Open World for a briefing on what will be announced at Open World and on the roadmap of future releases. This allows the product managers to better present the new features according to what users expect, and the ACEs to be able to answer the questions of users discovering these new features.

To give an idea of the number of ACE Directors present at headquarters, here is the team:
[Photo: the ACE Director team]

Today Oracle covers a much broader domain than the database. There is hardware, virtualization, operating systems, several databases, BI, ERP… Oracle’s goal today is to compete with Amazon on IaaS, to offer the greatest agility in PaaS and a very wide range in SaaS.

But fortunately, the Oracle Corporation buildings still show where it all started: the database, stored on hard disks, still represented by rotating cylinders:

[Photo: Oracle headquarters buildings]

Today, disks no longer look like that and are no longer the critical element of IT, since we no longer depend on mechanical latency. But the symbol remains.

If you are at Oracle Open World this year, do not hesitate to stop by the OTN Lounge at Moscone South. You can find out about the community around Oracle technology and about the ACE program.

If you are not comfortable with English, no problem: the Délégation Française offers several meeting points.

And do not hesitate to come to my session on Multitenant; I am also speaking about Transportable Tablespaces in the EMEA User Group Community (EOUC) session.

 

The article Le programme Oracle ACE appeared first on Blog dbi services.

SQL Server 2016 : Basic availability groups may fit to your HA needs

Fri, 2016-09-16 13:22

In this blog let’s talk about SQL Server and availability groups in Standard Edition. I had some customer questions about the existing limitations and this is a good opportunity to review them in this blog post. First of all, let’s say that Microsoft has already documented the restrictions here and, from my side, I had already written a blog post on the subject at the time of the CTP version of SQL Server 2016. In the meantime, some changes occurred and are now included in the RTM version.

As a reminder, the Standard Edition offers limited availability group capabilities and tends to be a replacement for the previous feature called mirroring. This is good news for customers who may finally upgrade their old environments.

The first main limitation concerns the number of replicas, capped at only 2, with no possibility to use the secondary replica for reporting or backup offloading. But we want to use Standard Edition and it is supposed to be sold at a lower cost, right?

Let’s just have a quick check by using a T-SQL script. Generally speaking, I’m not a big fan of the GUI! So if I try to create an availability group with 3 replicas, I will obviously face the following error message:

Msg 35233, Level 16, State 0, Line 2
Cannot create an availability group containing 3 availability replica(s). 
The maximum number of availability replicas in an availability group  with (basic) is 2. 
Reenter your CREATE AVAILABILITY GROUP command specifying fewer availability replicas.

Let’s try to use a secondary replica as read-only replica (ALLOW_CONNECTIONS = ALL)

-- primary replica
CREATE AVAILABILITY GROUP [testGrpStd]   
FOR  
REPLICA ON
N'WIN20121SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5023',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = MANUAL)
,
N'WIN20124SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = MANUAL);   
GO

Well, here is another error message I got:

Msg 41199, Level 16, State 8, Line 2
The specified command is invalid because the Always On Availability Groups allow_connections feature
is not supported by this edition of SQL Server. For information about features supported by the editions
of SQL Server, see SQL Server Books Online.

However, I noticed a big change compared to my first test with a CTP version concerning the possibility to add a listener to my availability group. From my point of view, this is a very good improvement compared to the mirroring feature, because no changes (like additional options at the connection string level) will be required at the application layer. Let me prove it with the following script:

USE [master]
GO
ALTER AVAILABILITY GROUP [testGrpStd]
ADD LISTENER N'Test-GrpSTD' (
WITH IP
((N'192.168.5.122', N'255.255.255.0')
)
, PORT=1433);
GO

We can check the listener creation by using DMVs to get the listener configuration as shown below:

[Screenshot: listener configuration retrieved from the DMVs]
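
A query along the following lines returns the same kind of information (a sketch only; I have not listed every column of sys.availability_group_listeners and names may vary slightly between versions):

-- listener name, port and IP configuration per availability group
SELECT ag.name AS group_name,
       agl.dns_name,
       agl.port,
       agl.ip_configuration_string_from_cluster
FROM sys.availability_group_listeners AS agl
JOIN sys.availability_groups AS ag
    ON ag.group_id = agl.group_id;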

According to my discussions with some customers at our last SQL Server 2016 event, availability groups in Standard Edition may also suggest that more complex infrastructures like multi-subnet availability groups or new special features like direct seeding will not be supported. A first look at the Microsoft documentation doesn’t provide any clues. You can argue that the Microsoft documentation is black-list oriented, so if something is not explicitly excluded in the documentation it should work, and you are probably right. But you know, I’m curious and I was interested to check it myself in my lab environment!

For both suppositions, I can confirm that I didn’t face any issue when creating an availability group with a different setup scenario on top of a multi-subnet WSFC or when using direct seeding. Here is the script used to create my availability group with a listener and two IP addresses on different subnets. I also enabled the direct seeding feature by configuring the SEEDING_MODE parameter to AUTOMATIC as shown below.

-- Availability groups with SEEDING_MODE = AUTOMATIC
CREATE AVAILABILITY GROUP [testGrpStd]   
FOR  
REPLICA ON
N'WIN20121SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5023',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,
N'WIN20124SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

-- Listener with two IP addresses on a different subnet 
ALTER AVAILABILITY GROUP [testGrpStd]
ADD LISTENER N'Test-GrpSTD' (
WITH IP
((N'192.168.5.122', N'255.255.255.0'),
 (N'192.168.50.122', N'255.255.255.0')
)
, PORT=1433);
GO

Now let’s have a look at the availability group configuration and the direct seeding parameter value:

[Screenshot: availability group configuration with SEEDING_MODE = AUTOMATIC]

Here is the listener configuration retrieved from the sys.availability_group_listeners DMV:

[Screenshot: listener configuration from sys.availability_group_listeners]

Let’s take this opportunity to change the replication type to ASYNCHRONOUS. In fact, I had the mirroring feature in mind, where asynchronous replication was only available with Enterprise Edition. This is another interesting point: we may now benefit from both asynchronous and synchronous replication modes regardless of the edition. By the way, this is clearly documented in the BOL.
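
For reference, switching a replica to asynchronous commit can be scripted roughly like this (a sketch based on the replica names above; automatic failover is not supported with asynchronous commit, so the failover mode is set to MANUAL first):

-- repeat the same two statements for the other replica (WIN20121SQL16\SQL16STD)
ALTER AVAILABILITY GROUP [testGrpStd]
MODIFY REPLICA ON N'WIN20124SQL16\SQL16STD' WITH (FAILOVER_MODE = MANUAL);
ALTER AVAILABILITY GROUP [testGrpStd]
MODIFY REPLICA ON N'WIN20124SQL16\SQL16STD' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);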

[Screenshot: replicas using asynchronous commit]

Finally let’s check if direct seeding works as expected. If you don’t know what direct seeding is, please refer to my previous blog. Just as a reminder, we must first configure direct seeding on the secondary by granting the correct permission to the availability group itself.

ALTER AVAILABILITY GROUP [testGrpStd] GRANT CREATE ANY DATABASE 
GO

Here we go. Let’s finally add the database to my existing availability group and see if direct seeding comes into play:

ALTER AVAILABILITY GROUP testGrpStd ADD DATABASE test_db;  
GO

To get data from direct seeding session states we may use DMVs like sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats as well.
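
For example (a simple sketch; SELECT * is used on purpose so that no column names have to be guessed):

-- state and outcome of the automatic seeding requests
SELECT * FROM sys.dm_hadr_automatic_seeding;
-- transfer statistics of the physical seeding operations
SELECT * FROM sys.dm_hadr_physical_seeding_stats;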

[Screenshot: direct seeding session states from the DMVs]

Sounds good, right?

Bottom line

The basic availability groups feature ships with limited capabilities. Yes, but in the meantime it will still address a large number of customer scenarios where we have to find a balance between high availability options and costs. Referring to some discussions I had with customers, they don’t need the special features provided by availability groups like readable secondaries or backup offloading. They just want to benefit from high availability for a few critical applications. I’m looking forward to seeing basic availability groups at customer sites!

 

 

 

 

The article SQL Server 2016 : Basic availability groups may fit to your HA needs appeared first on Blog dbi services.

The dbi Open Infrastructure Technology Division

Fri, 2016-09-16 07:37

As you might already know, dbi (on a technical level) is organized into several technology divisions. Traditionally there is one for Oracle, one for Microsoft and one for Middleware products. Almost a year ago a new technology division was born to reflect the new requirements we see on the market. This does not mean that we did not provide expertise in these areas before, but the goal was to organize ourselves more clearly so we are able to rapidly adopt new technologies and provide the best expertise for our current and future customers. In this post I’ll take a look at our Open Infrastructure Technology Division: what it is about, what we believe in and what you can expect now and in the future.

To start with: What do we call a technology division? As the name already suggests, it is all about technology. You could also say: the technology divisions are the heart of dbi. This is where people work on the different technologies, organize technical events, do research, define and plan new workshops, adopt new products and have lots of fun. Each of these technology divisions is led by a so-called technology leader who is responsible for his technology division. Currently dbi has four of these divisions, which reflect the areas we work in:

[Diagram: dbi technology division overview]

At the top there is Hervé, who is our CTO, and then we have the four technology divisions:

  • We have Franck who is responsible for all aspects around the Oracle database
  • We have Stéphane who is responsible for the Microsoft stuff
  • We have Gérard who is responsible for everything around the middleware products we support
  • And finally we have the Open Infrastructure Technology Division which I am happy to represent

A very important point to notice here is that each of these divisions is cross-connected to every other. We are not working in isolated silos but work together wherever we can. Nowadays you can never be sure what happens next, and things have recently happened that you never thought would be possible in the past, e.g.

… just to name a few. Do you know if you will need a SQL Server expert on Linux tomorrow? Are you running Oracle on Windows? Are you planning to run Docker on Windows? These are just a few examples where you might need expertise which goes beyond the traditional “I only do Windows” or “I only do Oracle” way of working. This is what the technology divisions are there for: build expertise inside the divisions but, even more importantly, build a kind of “backbone” where everyone can get help from others (inside and outside of dbi). Share what you learned, be open, be communicative, be mindful and keep an open eye on what is going on around you. No matter which technology division you look at: all of them share their knowledge by writing blogs, attending, organizing and speaking at events (e.g. DOAG 2016, Oracle Open World 2016, IT-Tage 2016, SWISS PGDay, PASS SQLSaturdays and many more), writing articles, creating hands-on workshops, and they even come together twice a year to exchange knowledge across the technology divisions.

So far for the overview (which is far longer than it was intended to be, sorry). Let’s focus on the Open Infrastructure Division from now on. Probably the most important point in any division is what it stands for, so what do we stand for? As you can see from the name, the word “Open” plays a central role. We do not only believe in open source software but, even more important, believe that being and thinking open is key to success. There are plenty of wonderful open source projects out there, each of them having an active community and each of them supporting business critical applications. Our idea is not to favor one over another but rather to select the right technology for the right requirement. This might range from one specific product to a whole set of products. This could even result in extending (or interfacing) a proprietary solution with open source solutions when it makes sense (remember the cross-connected divisions described above).

On the other hand we cannot support the whole open source world because this is just not possible, so we have to select a set of technologies we want to focus on (more on that later). This set is and will be changing over time, we are absolutely aware of that. New technologies will come in, others will go out and some will stay. That’s the way of life. Important for us is the “Open Infrastructure”, because when things change you need to adapt. When you keep your infrastructure open it is much easier to adapt than when you have to adapt your closed systems. When you focus on real open standards, open formats and open software you are free to decide in which direction you want to go, and even if it was the wrong decision you are not on a …

[Image: traffic sign]

… you are free to adjust the direction :) This is more or less what we believe in and work on every day: Making infrastructures more open, easier to maintain, easier to adapt, faster to profit from and more fun to work with. Yes, work can be fun when you have the right tools and the right people to work with.

Going deeper into our division, what do we actually provide? We had various internal discussions about that and it was not easy to come to a final conclusion. Indeed there is no final set or conclusion, but we do provide what we call a “core” set. These are the core technologies we currently work on, provide best practices for, provide trainings for (some), provide consulting for (all), provide SLAs for (some), implement (all) and do reviews or feasibility studies for (all). So, finally here they are (bottom up):

On the Linux level:

Red Hat, CentOS, SUSE, Debian, Ubuntu

On the rdbms level:

MariaDB, PostgreSQL, MySQL

On the NoSQL level:

MongoDB, Cassandra

On the provisioning and deployment level:

Ansible

On the monitoring level:

Nagios

On the DevOps and OpenStack level:

Atomic Host, Docker, Mirantis

These are quite a few, but not too many, technologies. As already stated somewhere above, we do not provide everything for every product, but we are constantly extending our knowledge and best practices so that products which are not fully supported by us today might be in the future. Other products we are working on are not even listed above (e.g. the E(lasticsearch) L(ogstash) K(ibana) stack). Believe me: it is really hard to select the right tools for the right job. This is what we are here for.

 

The article The dbi Open Infrastructure Technology Division appeared first on Blog dbi services.

Result cache side effects on number of calls

Tue, 2016-09-13 04:38

During the execution of a SQL statement, you cannot guess how many times an operation, a predicate, or a function will be executed. This depends on the execution plan, on some caching at execution, and some other execution time decisions. Here is an example where result cache may bring some overhead by calling a function multiple times.

Here is my function:
SQL> create or replace function F return number is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.

The function displays ‘Hello World’ so that I can check how many times it is executed (I’ve set serveroutput on).

Obviously, on a one row table, it is called only once:
SQL> select f from dual;
 
F
----------
255
 
Hello World

Query result cache miss

I’ll run now the same query but with the result cache hint. The first execution will have to execute the query because the cache is empty at that point:

SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
Hello World

Here is what I wanted to show: ‘Hello World’ is displayed two times instead of one. If your function is an expensive one, then the first execution, or every cache miss, will have a performance overhead.

Query result cache hit

Now that the result is in the cache:

SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
33 Dependency Published DEMO.F
34 Result Published select /*+ result_cache */ f from dual

and since the table has not changed (it’s DUAL here ;-) ), further executions do not call the function anymore, which is the expected result.

SQL> select /*+ result_cache */ f from dual ;
 
F
----------
255

Bug or not?

Bug 21484570 has been opened for that and closed as ‘Not a bug’. There is no guarantee that the function is evaluated once, twice, more or never.
Ok, why not. That’s an implementation decision. Just think about it: if you want to work around an expensive function called for each row, then the query result cache may not be the right solution (except if all tables are static and you always have cache hits).

Note that if the function is declared as deterministic, it is executed only once.
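
For example, the same function declared with the DETERMINISTIC keyword (a sketch; same body as above):

create or replace function F return number DETERMINISTIC is
begin
  dbms_lock.sleep(5);
  dbms_output.put_line('Hello World');
  return 255;
end;
/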

You can work around the issue by using the result cache at the function level (in place of, or in addition to, the query result cache if you need it).

SQL> create or replace function F return number RESULT_CACHE is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
 
SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
64 Dependency Published DEMO.F
66 Result Published "DEMO"."F"::8."F"#e17d780a3c3eae3d #1
65 Result Published select /*+ result_cache */ f from dual

So, not a big problem. Just something to know. And anyway, the right design is NOT to call a function for each row, because it’s not scalable. Pipelined functions should be used for that.
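
If you have never used them, here is a minimal pipelined table function sketch (the type and function names are illustrative only):

create type t_number_tab as table of number;
/
create or replace function f_numbers(p_count in number) return t_number_tab pipelined is
begin
  for i in 1 .. p_count loop
    pipe row (i);  -- each row is streamed to the caller as soon as it is produced
  end loop;
  return;
end;
/
-- the function is consumed like a table, row by row
select * from table(f_numbers(5));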

 

The article Result cache side effects on number of calls appeared first on Blog dbi services.

Sharding with PostgreSQL

Mon, 2016-09-12 14:02

In this article we are going to talk about sharding in PostgreSQL. What is sharding? Sharding is like partitioning. The difference is that with traditional partitioning the partitions are stored in the same database, while with sharding the shards (partitions) are stored on different servers.
Below is an example of the sharding configuration we will use for our demonstration:

[Diagram: sharding configuration – pgshard0 as master, pgshard1 and pgshard2 as workers]
PostgreSQL does not provide a built-in tool for sharding. We will use citus, which extends PostgreSQL’s capabilities to do sharding and replication.
We will use 3 servers
pgshard0: 192.168.1.50
pgshard1: 192.168.1.51
pgshard2: 192.168.1.52
First let’s install PostgreSQL + the citus extension on all servers (pgshard0, pgshard1, pgshard2). We assume of course that the network is configured so that all servers can communicate.
[root@pgshard0 ~]# yum install -y https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-oraclelinux95-9.5-2.noarch.rpm
[root@pgshard0 ~]# yum install -y citus_95.x86_64
Now let’s initialize a new database system on all servers (pgshard0, pgshard1, pgshard2):
[root@pgshard0 ~]# /usr/pgsql-9.5/bin/postgresql95-setup initdb
Initializing database ... OK

To load the citus extension we have to edit the /var/lib/pgsql/9.5/data/postgresql.conf file and add the following line:
shared_preload_libraries = 'citus'
[root@pgshard0 data]# grep shared_preload_libraries /var/lib/pgsql/9.5/data/postgresql.conf
#shared_preload_libraries = '' # (change requires restart)
shared_preload_libraries = 'citus'
[root@pgshard0 data]#

To allow PostgreSQL connections between servers we have to configure the two configuration files /var/lib/pgsql/9.5/data/postgresql.conf and /var/lib/pgsql/9.5/data/pg_hba.conf:
[root@pgshard0 data]# grep listen postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
[root@pgshard0 data]#

[root@pgshard0 data]# grep trust pg_hba.conf
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
host all all 127.0.0.1/32 trust
host all all 192.168.1.0/24 trust
host all all ::1/128 trust
[root@pgshard0 data]#

Let’s now start the database on all servers (pgshard0, pgshard1, pgshard2):

[root@pgshard0 data]# service postgresql-9.5 start
Redirecting to /bin/systemctl start postgresql-9.5.service
[root@pgshard0 data]#

We need to inform the master (pgshard0) about its workers (pgshard1 and pgshard2). For this we have to create a configuration file pg_worker_list.conf like this:

[root@pgshard0 data]# pwd
/var/lib/pgsql/9.5/data
[root@pgshard0 data]# cat pg_worker_list.conf
pgshard1 5432
pgshard2 5432
[root@pgshard0 data]#

Let’s now reload the master pgshard0 to take the modification into account:

[root@pgshard0 ~]# service postgresql-9.5 reload
Redirecting to /bin/systemctl reload postgresql-9.5.service
[root@pgshard0 ~]#

Very important: we must create the citus extension on all servers (the master and the workers):

postgres=# create extension citus;
CREATE EXTENSION
postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
---------+---------+------------+------------------------------
citus | 5.2-1 | pg_catalog | Citus distributed database
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(2 rows)

The last step before sharding is to verify that the master is ready. The following SELECT command, run in the psql shell on the master, should output the worker nodes mentioned in the pg_worker_list.conf file:

postgres=# SELECT * FROM master_get_active_worker_nodes();
node_name | node_port
-----------+-----------
pgshard2 | 5432
pgshard1 | 5432
(2 rows)
postgres=#

Everything is going fine so far, so we can now create the table to be sharded on the master. Let’s say a table sales:

postgres=# CREATE TABLE sales
postgres-# (deptno int not null,
postgres(# deptname varchar(20),
postgres(# total_amount int,
postgres(# CONSTRAINT pk_sales PRIMARY KEY (deptno)) ;
CREATE TABLE

We now have to inform Citus that the data of the sales table will be distributed among pgshard1 and pgshard2. In our case we choose a hash distribution:

postgres=# SELECT master_create_distributed_table('sales', 'deptno', 'hash');
master_create_distributed_table
---------------------------------
(1 row)

In our example we are going to create one shard on each worker. We will specify:
the table name: sales
total shard count: 2
replication factor: 1 (no replication)
postgres=# SELECT master_create_worker_shards('sales', 2, 1);
master_create_worker_shards
-----------------------------
(1 row)
postgres=#

And that’s all. Sharding is done. Connecting to pgshard1 and pgshard2, we can verify that the shards were created with the same structure as the base table.

[postgres@pgshard1 bin]$ psql -c "\dt"
List of relations
Schema | Name | Type | Owner
--------+--------------+-------+----------
public | sales_102026 | table | postgres
(1 row)

[postgres@pgshard2 ~]$ psql -c "\dt"
List of relations
Schema | Name | Type | Owner
--------+--------------+-------+----------
public | sales_102027 | table | postgres
(1 row)

Now let’s insert some rows into the table from the master:
postgres=# insert into sales (deptno,deptname,total_amount) values (1,'french_dept',10000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (2,'german_dept',15000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (3,'china_dept',21000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (4,'gambia_dept',8750);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (5,'japan_dept',12010);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (6,'china_dept',35000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (7,'nigeria_dept',10000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (8,'senegal_dept',33000);
INSERT 0 1
postgres=#

We can see that data are distributed between pgshard1 and pgshard2

[postgres@pgshard1 bin]$ ./psql
psql (9.5.4)
Type "help" for help.
postgres=# select * from sales_102026;
deptno | deptname | total_amount
--------+--------------+--------------
1 | french_dept | 10000
3 | india_dept | 21000
4 | gambia_dept | 8750
5 | japan_dept | 12010
7 | nigeria_dept | 10000
8 | senegal_dept | 33000
(6 rows)

[postgres@pgshard2 bin]$ ./psql
psql (9.5.4)
Type "help" for help.
postgres=# select * from sales_102027;
deptno | deptname | total_amount
--------+-------------+--------------
2 | german_dept | 15000
6 | china_dept | 35000
(2 rows)

If we do an explain from the master pgshard0, we can see that the query is routed to a different shard depending on the data:
postgres=# explain verbose select * from sales where deptno=5;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Distributed Query
Executor: Router
Task Count: 1
Tasks Shown: All
-> Task
Node: host=pgshard1 port=5432 dbname=postgres
-> Index Scan using pk_sales_102026 on public.sales_102026 sales (cost=0.15..8.17 rows=1 width=66)
Output: deptno, deptname, total_amount
Index Cond: (sales.deptno = 5)
(9 rows)

postgres=# explain verbose select * from sales where deptno=6;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Distributed Query
Executor: Router
Task Count: 1
Tasks Shown: All
-> Task
Node: host=pgshard2 port=5432 dbname=postgres
-> Index Scan using pk_sales_102027 on public.sales_102027 sales (cost=0.15..8.17 rows=1 width=66)
Output: deptno, deptname, total_amount
Index Cond: (sales.deptno = 6)
(9 rows)

Conclusion:
In this article we showed that PostgreSQL allows you to do many interesting things. The use of extensions can considerably extend PostgreSQL’s capabilities.

 

The article Sharding with PostgreSQL appeared first on Blog dbi services.

What the hell are these template0 and template1 databases in PostgreSQL?

Sun, 2016-09-11 00:45

When people start to work with PostgreSQL, especially when they are used to Oracle, some things might be very confusing. A few of the questions we usually get asked are:

  • Where is the listener and how can I configure it?
  • When you talk about a PostgreSQL cluster where are the other nodes?
  • Why do we have these template databases and what are they used for?
  • …and some others…

In this post we’ll look at the last point: Why do we have two template databases (template0 and template1) and additionally a database called “postgres”? That makes three databases by default. In Oracle we only have one, well two when you use pluggable databases (the root and pdb$seed). Why does PostgreSQL need three by default? Isn’t that only overhead? Let’s see …

To begin with: Assuming these databases are really not required we can drop them, can’t we?

(postgres@[local]:5432) [postgres] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres

Can we drop the “postgres” database?

(postgres@[local]:5432) [postgres] > drop database postgres;
ERROR:  cannot drop the currently open database
Time: 1.052 ms

Ok, this is the first point to remember: You cannot drop a database which users are currently connected to (in this case it is my own connection). So let’s try to connect to template1 and then drop the “postgres” database:

postgres@pgbox:/home/postgres/ [PG960] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > drop database postgres;
DROP DATABASE
Time: 489.287 ms
(postgres@[local]:5432) [template1] > 

Uh, our default “postgres” database is gone. Does it matter? Not really from a PostgreSQL perspective but probably all your clients (pgadmin, monitoring scripts, …) will have a problem now:

postgres@pgbox:/home/postgres/ [postgres] psql
psql: FATAL:  database "postgres" does not exist

The “postgres” database is meant as a default database for clients to connect to. When you administer a PostgreSQL instance which runs under the postgres operating system user, the default database that is used for a connection is the same as the username => postgres. Now that this database does not exist anymore you can no longer connect if you do not provide a database name in your connection request. But we can still connect to “template1”:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

Second thing to remember: The “postgres” database is meant as a default database for connections. It is not required, you can drop it but probably a lot of tools you use will need to be adjusted because they assume that the “postgres” database is there by default.

Luckily we can easily recover from that:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create database postgres;
CREATE DATABASE
(postgres@[local]:5432) [template1] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

What happened? We connected to template1 again and re-created the “postgres” database. Of course everything we added to the “postgres” database before we dropped it is no longer available. This brings us to the next question: When we create a new database, what or who defines the initial contents?

Third thing to remember: When you create a new database by using the syntax “create database [DB_NAME]” you get an exact copy of template1.

Really? What happens when I modify template1? Lets add one table and one extension:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create table my_test_tab ( a int );
CREATE TABLE
(postgres@[local]:5432) [template1] > create extension hstore;
CREATE EXTENSION
Time: 123.722 ms
(postgres@[local]:5432) [template1] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

If the above statement is true every new database created with the above syntax should contain the table and the extension, right?

(postgres@[local]:5432) [postgres] > \c db_test
You are now connected to database "db_test" as user "postgres".
(postgres@[local]:5432) [db_test] > \d
            List of relations
 Schema |    Name     | Type  |  Owner   
--------+-------------+-------+----------
 public | my_test_tab | table | postgres
(1 row)

(postgres@[local]:5432) [db_test] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Whatever you put into template1 will be available in a new database if you use the following syntax: “create database [DB_NAME];” This can simplify your deployments a lot if you rely on pre-installed objects for e.g. monitoring or development.

Ok, I got it, but what is template0 for then? To understand this we first take a look at pg_database, especially two columns: datallowconn and datistemplate:

(postgres@[local]:5432) [db_test] > select datname,datallowconn,datistemplate from pg_database order by 3;
  datname  | datallowconn | datistemplate 
-----------+--------------+---------------
 postgres  | t            | f
 db_test   | t            | f
 template1 | t            | t
 template0 | f            | t
(4 rows)

When you take a look at “datallowconn” the only database that has this set to false is “template0”. Do you remember the beginning of this post when we tried to delete the “postgres” database? You can only delete a database when there are no connections. But: You can only create a database from another database if there are no connections to the source, too. Really? Why then can I create a new database when I am connected to template1 and template1 is the source for the new database?

(postgres@[local]:5432) [template1] > create database db_test_2;
CREATE DATABASE

Confusing? This does not work anymore if there is another session connected to template1.

Let’s try to create another new database, but this time we use db_test as the source. Yes, this is possible if you slightly adjust the syntax. But before we create the database we create another connection to db_test:

(postgres@[local]:5432) [template1] > \q
postgres@pgbox:/home/postgres/ [PG960] psql db_test
psql (9.6rc1 dbi services build)
Type "help" for help.

In another session we try to create new database with db_test as the source:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [postgres] > create database db_test_3 template db_test;
ERROR:  source database "db_test" is being accessed by other users
DETAIL:  There is 1 other session using the database.

Fourth point to remember: For creating new databases you can use whatever database you like as the source when you specify the template explicitly.
Fifth point to remember: When you want to drop or create a database there must be no connections to the database you either want to drop or want to use as the source for the new database.

Coming back to the “datistemplate” and “datallowconn” settings: template0 is the only database that has “datallowconn” set to false, why? Because template0 is meant as the default unmodifiable database. You should never make any changes there. In a brand new PostgreSQL instance template0 and template1 are exactly the same. But why do I need both of them then? Assume you messed up template1 somehow (installed too many objects, for example). Using template0 you can still recover from that easily:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = false where datname = 'template1';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database template1;
DROP DATABASE
(postgres@[local]:5432) [postgres] > create database template1 template template0;
CREATE DATABASE
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'template1';
UPDATE 1

What happened here? I modified template1 to no longer be a template because you cannot drop a database flagged as a template. Then I dropped and re-created the template1 database by using template0 as the template. Now my template1 is an exact copy of template0 again and all the things I messed up are gone. Another use case for this is: Imagine you modified template1 to include the stuff you rely on, but at some point in the future you need a new database which shall be without your modifications (e.g. for restoring a dump). For this you can always use template0 as a template because template0 is always clean. And this is why connections are not allowed to template0.

Of course you can create your own template database(s) by setting the “datistemplate” to true:

(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'db_test';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database db_test;
ERROR:  cannot drop a template database
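
Once flagged as a template, db_test can be used as the source for new databases just like template0 and template1 (a quick sketch; the database name is illustrative):

-- every database created this way will contain my_test_tab and the hstore extension
create database db_from_my_template template db_test;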

What about the overhead:

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template0' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template1' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'postgres' ));
 pg_size_pretty 
----------------
 7343 kB
(1 row)

This should not really be an issue on today’s hardware. I hope this sheds some light on these default databases.

 

The article What the hell are these template0 and template1 databases in PostgreSQL? appeared first on Blog dbi services.

Securing your connections to PostgreSQL by using SSL

Fri, 2016-09-09 13:30

Security is a big topic today and in the news almost every day. As the database usually holds sensitive data, this data must be well protected. In most cases this is done by encrypting critical data inside the database and decrypting it only when requested. But this is not all: when a client reads the data it is decrypted inside the database and then sent back over the network unencrypted. What do you win with such a setup? The only risk it protects you from is the theft of either your disks or the whole server. Even more important is that all the connections to your database are encrypted, so the traffic from and to your database cannot be read by someone else. In this post we’ll look at how you can do this with PostgreSQL.

Obviously, for securing the connections to the database by using SSL we’ll need a server certificate. As I am on Linux this can be generated very easily by using openssl to create a self-signed certificate. Be aware that your PostgreSQL binaries need to be compiled with “--with-openssl” for the following to work. You can check this by using pg_config:

postgres@pgbox:/u01/app/postgres/local/dmk/ [PG960] pg_config | grep CONFIGURE
CONFIGURE = '--prefix=/u01/app/postgres/product/96/db_0' '--exec-prefix=/u01/app/postgres/product/96/db_0' '--bindir=/u01/app/postgres/product/96/db_0/bin' '--libdir=/u01/app/postgres/product/96/db_0/lib' '--sysconfdir=/u01/app/postgres/product/96/db_0/etc' '--includedir=/u01/app/postgres/product/96/db_0/include' '--datarootdir=/u01/app/postgres/product/96/db_0/share' '--datadir=/u01/app/postgres/product/96/db_0/share' '--with-pgport=5432' '--with-perl' '--with-python' '--with-tcl' '--with-openssl' '--with-pam' '--with-ldap' '--with-libxml' '--with-libxslt' '--with-segsize=2' '--with-blocksize=8' '--with-wal-segsize=16' '--with-extra-version= dbi services build'

To create a self-signed certificate with openssl simply execute the following command:

postgres@pgbox:/home/postgres/ [PG960] openssl req -new -text -out server.req

This creates a new certificate request based on the information you provide. The only important point here (for the scope of this post) is that the “Common Name” must match the name of the server your PostgreSQL instance is running on, e.g.:

Generating a 2048 bit RSA private key
............................................................................................................................................+++
..................................................................................................+++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CH
State or Province Name (full name) []:BS
Locality Name (eg, city) [Default City]:Basel
Organization Name (eg, company) [Default Company Ltd]:dbi services
Organizational Unit Name (eg, section) []:dba
Common Name (eg, your name or your server's hostname) []:pgbox
Email Address []:xx@xx@com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

This created two files in the directory where you executed the command (the first one is the certificate request and the second one is the private key):

-rw-r--r--. 1 postgres postgres  3519 Sep  9 13:24 server.req
-rw-r--r--. 1 postgres postgres  1821 Sep  9 13:24 privkey.pem
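
If you want to avoid the interactive prompts altogether you could also pass the subject on the command line; this is just a sketch, the subject values are of course examples and must match your environment:

postgres@pgbox:/home/postgres/ [PG960] openssl req -new -text -nodes -keyout server.key -out server.req -subj "/C=CH/ST=BS/L=Basel/O=dbi services/OU=dba/CN=pgbox"

Note that with “-nodes” the private key is written unencrypted right away, so the next step of removing the pass phrase would not be needed in that case.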

If you want your PostgreSQL instance to start automatically you should remove the pass phrase from the generated private key:

postgres@pgbox:/home/postgres/ [PG960] openssl rsa -in privkey.pem -out server.key
Enter pass phrase for privkey.pem:
writing RSA key
postgres@pgbox:/home/postgres/ [PG960] rm privkey.pem

The pass phrase you are asked for is the one you provided when you generated the certificate request above. The new key is now in “server.key”. Now you can create your certificate:

postgres@pgbox:/home/postgres/ [PG960] openssl req -x509 -in server.req -text -key server.key -out server.crt

If everything went well your brand new certificate should be available:

postgres@pgbox:/home/postgres/ [PG960] ls -l server.crt 
-rw-r--r--. 1 postgres postgres 4473 Sep  9 13:32 server.crt
postgres@pgbox:/home/postgres/ [PG960] cat server.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 12528845138836301488 (0xaddf6645ea37a6b0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Validity
            Not Before: Sep  9 11:32:42 2016 GMT
            Not After : Oct  9 11:32:42 2016 GMT
        Subject: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cb:4f:d1:b7:81:c4:83:22:2f:fb:9f:4b:fa:6a:
                    16:77:fd:62:37:91:f1:09:cc:c4:e1:04:e1:de:f2:
                    3f:77:35:ec:e5:8f:5a:03:1d:7b:53:8e:5a:72:76:
                    42:2a:cb:95:9a:35:4a:98:1d:78:3c:21:85:3d:7c:
                    59:f6:e8:7b:20:d0:73:db:42:ff:38:ca:0c:13:f6:
                    cc:3e:bc:b0:8f:41:29:f1:c7:33:45:79:c7:04:33:
                    51:47:0b:23:f8:d6:58:68:2d:95:83:c9:ad:40:7c:
                    95:9a:0c:ff:92:bd:d6:4f:b2:96:6c:41:45:0d:eb:
                    19:57:b3:9a:fc:1c:82:01:9c:2d:e5:2e:1b:0f:47:
                    ab:84:fa:65:ed:80:e7:19:da:ab:89:09:ed:6a:2c:
                    3a:aa:fe:dc:ba:53:e5:52:3f:1c:db:47:4c:4a:d6:
                    e5:0f:76:12:df:f4:6c:fd:5a:fb:a5:70:b4:7b:06:
                    c3:0c:b1:4d:cf:04:8e:5c:b0:05:cb:f2:ac:78:a6:
                    12:44:55:07:f9:88:55:59:23:11:0f:dd:53:14:6a:
                    e8:c4:bb:6a:94:af:1e:54:e8:7d:4f:10:8a:e5:7e:
                    31:3b:cf:28:28:80:37:62:eb:5e:49:26:9d:10:17:
                    33:bc:a7:3f:2a:06:a4:f0:37:a5:b3:07:6d:ce:6a:
                    b7:17
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier: 
                EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55
            X509v3 Authority Key Identifier: 
                keyid:EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55

            X509v3 Basic Constraints: 
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         18:2b:96:b6:01:d8:3e:7f:bb:35:0c:4b:53:c2:9c:02:22:41:
         25:82:d3:b6:a9:88:6e:0e:5d:5b:d3:ac:00:43:0a:04:f4:12:
         6e:22:fd:3f:77:63:0e:42:28:e3:09:6b:16:67:5f:b7:08:08:
         74:a3:55:1f:49:09:69:96:e8:f6:2e:9c:8a:d6:a0:e2:f7:d8:
         30:62:06:f0:5e:1a:85:fe:ff:2d:39:64:f7:f1:e9:ce:21:02:
         f3:86:5f:3b:f6:12:1d:61:cd:a8:bf:36:e2:98:d4:99:b6:95:
         5e:05:87:8d:ab:2f:30:38:b2:fe:68:ac:50:8d:98:fd:aa:4d:
         79:e2:f5:71:92:d6:e5:1d:59:42:02:49:7a:2e:e0:f3:ba:41:
         4d:f4:15:33:44:36:14:43:3b:7a:41:1b:61:6c:ff:78:fb:13:
         4a:a4:e0:96:6c:45:80:0e:30:e3:63:9d:dc:f1:77:16:22:9c:
         7a:c9:92:96:53:3b:62:87:ca:cb:e8:4a:a4:4f:69:a6:a0:5a:
         a9:eb:be:58:7f:c1:da:d4:d7:41:d4:54:06:fb:5b:8b:ea:46:
         68:f5:e6:1e:2b:6a:0b:65:f9:66:5a:a2:14:ec:eb:05:2f:99:
         46:bc:bb:d8:11:f6:3f:2e:6e:15:48:ac:70:1f:18:2d:e2:78:
         4b:a3:cb:ef
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIJAK3fZkXqN6awMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
BAYTAkNIMQswCQYDVQQIDAJCUzEOMAwGA1UEBwwFQmFzZWwxFTATBgNVBAoMDGRi
aSBzZXJ2aWNlczEMMAoGA1UECwwDZGJhMQ4wDAYDVQQDDAVwZ2JveDEYMBYGCSqG
SIb3DQEJARYJeHhAeHhAY29tMB4XDTE2MDkwOTExMzI0MloXDTE2MTAwOTExMzI0
MloweTELMAkGA1UEBhMCQ0gxCzAJBgNVBAgMAkJTMQ4wDAYDVQQHDAVCYXNlbDEV
MBMGA1UECgwMZGJpIHNlcnZpY2VzMQwwCgYDVQQLDANkYmExDjAMBgNVBAMMBXBn
Ym94MRgwFgYJKoZIhvcNAQkBFgl4eEB4eEBjb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDLT9G3gcSDIi/7n0v6ahZ3/WI3kfEJzMThBOHe8j93Nezl
j1oDHXtTjlpydkIqy5WaNUqYHXg8IYU9fFn26Hsg0HPbQv84ygwT9sw+vLCPQSnx
xzNFeccEM1FHCyP41lhoLZWDya1AfJWaDP+SvdZPspZsQUUN6xlXs5r8HIIBnC3l
LhsPR6uE+mXtgOcZ2quJCe1qLDqq/ty6U+VSPxzbR0xK1uUPdhLf9Gz9WvulcLR7
BsMMsU3PBI5csAXL8qx4phJEVQf5iFVZIxEP3VMUaujEu2qUrx5U6H1PEIrlfjE7
zygogDdi615JJp0QFzO8pz8qBqTwN6WzB23OarcXAgMBAAGjUDBOMB0GA1UdDgQW
BBTqY7F/B98xP1Uod8z78h861kU/VTAfBgNVHSMEGDAWgBTqY7F/B98xP1Uod8z7
8h861kU/VTAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAYK5a2Adg+
f7s1DEtTwpwCIkElgtO2qYhuDl1b06wAQwoE9BJuIv0/d2MOQijjCWsWZ1+3CAh0
o1UfSQlpluj2LpyK1qDi99gwYgbwXhqF/v8tOWT38enOIQLzhl879hIdYc2ovzbi
mNSZtpVeBYeNqy8wOLL+aKxQjZj9qk154vVxktblHVlCAkl6LuDzukFN9BUzRDYU
Qzt6QRthbP94+xNKpOCWbEWADjDjY53c8XcWIpx6yZKWUztih8rL6EqkT2mmoFqp
675Yf8Ha1NdB1FQG+1uL6kZo9eYeK2oLZflmWqIU7OsFL5lGvLvYEfY/Lm4VSKxw
Hxgt4nhLo8vv
-----END CERTIFICATE-----
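
If you only want to check the subject and the validity period instead of dumping the whole certificate, openssl can print just that:

postgres@pgbox:/home/postgres/ [PG960] openssl x509 -in server.crt -noout -subject -dates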

For PostgreSQL to accept the key when it starts up you’ll need to modify the permissions:

postgres@pgbox:/home/postgres/ [PG960] chmod 600 server.key
postgres@pgbox:/home/postgres/ [PG960] ls -l server.key
-rw-------. 1 postgres postgres 1675 Sep  9 13:30 server.key

Both files (server.key and server.crt) need to be copied to your data directory (you can adjust this by using the ssl_cert_file and ssl_key_file configuration parameters):

postgres@pgbox:/home/postgres/ [PG960] mv server.key server.crt $PGDATA/
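
If you prefer to keep the files outside of the data directory you could instead point the instance to their location. This is only a sketch, the paths are examples, and in 9.6 changing these parameters requires a restart:

(postgres@[local]:5432) [postgres] > alter system set ssl_cert_file='/u01/app/postgres/ssl/server.crt';
(postgres@[local]:5432) [postgres] > alter system set ssl_key_file='/u01/app/postgres/ssl/server.key';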

Now you can turn on ssl…

(postgres@[local]:5432) [postgres] > alter system set ssl='on';
ALTER SYSTEM
Time: 5.427 ms

… and restart your instance:

postgres@pgbox:/home/postgres/ [PG960] pg_ctl -D $PGDATA restart -m fast
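
Once the instance is back up a quick check confirms that the parameter is active (it should report “on”):

(postgres@[local]:5432) [postgres] > show ssl;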

How can you test if SSL connections do work? Add the following line to pg_hba.conf for your instance:

hostssl  all             all             127.0.0.1/32            md5

Reload your server and then create a new connection:
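
A reload can be done with pg_ctl (or by calling pg_reload_conf() from within psql):

postgres@pgbox:/home/postgres/ [PG960] pg_ctl -D $PGDATA reload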

postgres@pgbox:/u02/pgdata/PG1/ [PG960] psql -h localhost -p 5432 postgres
psql (9.6rc1 dbi services build)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

Works as expected. For everything which is not just testing you’ll need a real certificate, of course. Just in case you expected to configure another port: PostgreSQL listens for normal and SSL connections on the same port. When the client supports SSL, an SSL connection will be established, otherwise a normal one. If you want to force the use of SSL connections you can do so by adjusting your pg_hba.conf (deny all connections which are not SSL).
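
A minimal sketch of how such a pg_hba.conf could look (addresses and authentication methods are just examples, adjust them to your environment):

hostssl    all             all             0.0.0.0/0               md5
hostnossl  all             all             0.0.0.0/0               reject

On the client side you can additionally request encryption explicitly, e.g. with sslmode=require in the connection string: psql "host=pgbox sslmode=require dbname=postgres". The pg_stat_ssl view shows which of the current connections actually use SSL.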

 

The article Securing your connections to PostgreSQL by using SSL first appeared on Blog dbi services.

EDB Failover Manager 2.1, (two) new features

Fri, 2016-09-09 00:08

In the last post we upgraded EDB EFM from version 2.0 to 2.1. In this post we’ll look at the new features. Actually, we’ll look at only two of them:

  • Failover Manager now simplifies cluster startup with the auto.allow.hosts property
  • efm promote now includes a -switchover option; it instructs Failover Manager to perform a failover, promoting a Standby to Master and then returning the old Master node to the cluster as a Standby node

Let’s go…

My failover cluster status is still fine:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3C000220       
	Standby     192.168.22.243       0/3C000220       

	Standby database(s) in sync with master. It is safe to promote.

The first bit we’re going to change is the auto.allow.hosts on the database servers. According to the documentation this should eliminate the need to allow the hosts to join the cluster and registration should happen automatically. So, lets change it from “false” to “true” on all nodes:

[root@ppasstandby efm-2.1]$ grep allow.hosts efm.properties
auto.allow.hosts=true

… and then let’s add all nodes to the efm.nodes file on the witness:

[root@edbbart efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998 192.168.22.243:9998 192.168.22.245:9998

When we now shut down the EFM service on all hosts and bring it up again on the witness, what is the result?

[root@edbbart efm-2.1]$ systemctl stop efm-2.1.service  # do this on all hosts

Let’s start on the witness again:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

So far so good, all nodes are in the “Allowed” list. What happens when we start EFM on the current primary node?

[root@ppasstandby efm-2.1]$  systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ 

We should see the node as a member now without explicitly allowing it to join:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       

	No standby databases were found.

Cool, same on the standby node:

[root@edbppas edb-efm]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998
[root@edbppas edb-efm]$  systemctl start efm-2.1.service

What is the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       
	Standby     192.168.22.243       0/3D000060       

	Standby database(s) in sync with master. It is safe to promote.

Perfect. That makes bringing up a failover cluster a bit easier, with fewer things to remember.
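
For reference, without auto.allow.hosts a node that is not yet part of the cluster would have to be allowed manually on the membership coordinator before it can join, something along these lines (check the EFM reference for the exact syntax of your version):

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm allow-node efm 192.168.22.245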

Coming to the “big” new feature (at least in my opinion): switching over to the standby and automatically turning the old master into a new standby which follows the new master. According to the docs all we need to do is this:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover

Does it really work?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

Hm, let’s check the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Standby     192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/410000D0       
	Standby     192.168.22.245       0/410000D0       

	Standby database(s) in sync with master. It is safe to promote.

It really worked! And back the other way:

[root@edbbart ~]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

    Agent Type  Address              Agent  DB       Info
    --------------------------------------------------------------
    Witness     192.168.22.244       UP     N/A       
    Standby     192.168.22.243       UP     UP        
    Master      192.168.22.245       UP     UP        

Allowed node host list:
    192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
    192.168.22.243

Promote Status:

    DB Type     Address              XLog Loc         Info
    --------------------------------------------------------------
    Master      192.168.22.245       0/480001A8       
    Standby     192.168.22.243       0/480001A8       

    Standby database(s) in sync with master. It is safe to promote.

Cool, that is really a great new feature.

 

The article EDB Failover Manager 2.1, (two) new features first appeared on Blog dbi services.
