
Yann Neuhaus


SQL Server 2016: availability groups and automatic failover enhancements

Thu, 2015-06-11 11:07

Let’s continue with this study of the new availability group enhancements. Other studies are available here:

This time we’ll talk about the possibility of enrolling a third replica for automatic failover. It implies, of course, configuring synchronous replication for the two pairs of replicas, at the cost of degrading overall performance. But it seems that we can also expect some improvements in this area, so maybe that will be another future study.

First of all, my feeling is that this enhancement will be very interesting in terms of availability, but unfortunately introducing a third replica will not be affordable for some customers in terms of budget. The final package is surely not yet defined, and drawing conclusions now would be based on inaccurate information. So let’s focus only on the technical aspect of this feature for the moment:

I have added a third replica (SQL163) to my existing availability group 2016Grp:
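For reference, the same change can also be scripted. Below is a minimal sketch of adding SQL163 as a synchronous, automatic-failover replica; the endpoint URL, port and secondary-role settings are assumptions, not the exact script behind the screenshots:

-- Sketch: add SQL163 as a third synchronous replica with automatic failover (values assumed)
ALTER AVAILABILITY GROUP [2016Grp]
ADD REPLICA ON N'SQL163'
WITH (
    ENDPOINT_URL = N'TCP://SQL163.dbi-services.test:5022',  -- endpoint name and port assumed
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = NO)
);
GO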




In parallel, my cluster quorum is configured as follows:












Basically, this is a Windows failover cluster CLUST-2021 on a single subnet that includes three nodes (SQL161, SQL162 and SQL163) and is configured to use a file share witness as well as the dynamic quorum capability.

I simulated a lot of failure tests in my lab environment (shutting down a replica, turning off a replica, losing a database file, disabling the network cards and so on) and the automatic failover across the two pairs of replicas was successful in each case. However, this raised the following question: which secondary replica will be chosen by the system? I didn’t see a configuration setting that controls a “failover priority order list” and I believe it would be good added value here. After performing other tests and discussing with other MVPs like Christophe Laporte (@Conseilit), I noticed that the failover order seems to be related to the preferred owner order of the related availability group cluster role. Moreover, according to a very interesting article from Vijay Rodrigues, this order is set and changed dynamically by SQL Server itself, so changing the order directly from the cluster seems to be a very bad idea. Next, I decided to configure the order directly at availability group creation time, when adding the replicas, and that seems to be the right solution.

To illustrate this point, here is the initial configuration I wanted to achieve:


  • SQL161 primary replica
  • SQL163 secondary replica (first failover partner)
  • SQL162 secondary replica (second failover partner)

After adding the replicas in this order from the wizard, here is the order inherited by the related cluster role:




This order seems to be preserved according to the current context of the availability group.

The same test with a different order like:

  • SQL163 primary replica
  • SQL162 secondary replica (first failover partner)
  • SQL161 secondary replica (second failover partner)

… gives us a different result, and once again this order is preserved regardless of the context changes:




This idea of controlling the failover replica order comes from a specific scenario where you may have two secondary replicas across multiple sites. You may decide to fail over first to the secondary replica in the same datacenter and then to the one located on the remote site.





But wait a minute… do you see the weakness in the above architecture? Let’s deal with the node weights (in red). You may have noticed that you will have to introduce another replica in order to avoid losing the quorum in case of a datacenter 1 failure. Indeed, you won’t get a majority with the current architecture if nothing remains but the file share witness and the replica in datacenter 2. So the new architecture may be the following:




In this case we may or may not decide to use this additional node as a SQL Server replica in datacenter 2, but it is at least mandatory in the global cluster architecture to provide automatic failover capability for the availability group layer in case of a datacenter 1 failure. This is why I said earlier that introducing this new availability group capability may not be affordable for all customers, assuming that this additional replica must be licensed.

See you

SQL Server 2016: native support for JSON

Thu, 2015-06-11 10:00

A lot of discussion and, most importantly, a feature request on the Microsoft Connect site with more than 1000 votes are at the origin of this new feature in SQL Server 2016.


Many NoSQL databases already have this kind of functionality, and PostgreSQL, for example, has json_extract_path_text. You can always ask my colleague Daniel Westermann, one of our PostgreSQL experts, about this subject.
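To give a feel for the new syntax, here is a small sketch based on the functions announced for SQL Server 2016 (FOR JSON, OPENJSON, JSON_VALUE); the JSON document is made up for illustration, and OPENJSON assumes database compatibility level 130:

-- Produce JSON from a relational result set
SELECT TOP (2) name, database_id
FROM sys.databases
FOR JSON PATH;

-- Parse a JSON document back into rows and extract scalar values
DECLARE @doc NVARCHAR(MAX) = N'[{"name":"dbi1","size_mb":120},{"name":"dbi2","size_mb":80}]';

SELECT j.[key] AS array_index,
       JSON_VALUE(j.value, '$.name')    AS db_name,
       JSON_VALUE(j.value, '$.size_mb') AS size_mb
FROM OPENJSON(@doc) AS j;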

using dbms_server_alert in combination with a custom monitoring solution

Thu, 2015-06-11 06:35
Lots of companies do not use Grid or Cloud Control for monitoring their Oracle databases, for various reasons, but rather use open source tools like Nagios. And lots of those either implemented custom script frameworks or rely on plug-ins available for their monitoring solution of choice. This post shows how you can let Oracle check a lot of its own metrics, so that you only need one script for alerting on various metrics.
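As an illustration of the idea (a sketch only, not the script from the full post): server-generated alerts can be configured with DBMS_SERVER_ALERT, and a single monitoring script then only has to poll DBA_OUTSTANDING_ALERTS. The threshold values below are arbitrary examples:

-- Let the database itself watch tablespace usage (warning/critical values are arbitrary)
BEGIN
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '85',
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => NULL);   -- NULL = all tablespaces
END;
/

-- The single monitoring script then only needs to read the outstanding alerts
select reason, object_type, object_name from dba_outstanding_alerts;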

draft release notes for PostgreSQL 9.5 online

Thu, 2015-06-11 00:02
Bruce Momjian, one of the PostgreSQL core team members, has just compiled the first draft version of the release notes for the upcoming PostgreSQL 9.5.

Some of the goodies that will show up are listed there. If you want to test any of the new features, check this post.

Can you have pending system statistics?

Wed, 2015-06-10 09:08

Your system statistics seem to be wrong and you want to gather or set more relevant ones. But you don't want to see all your application execution plans changing between nested loops and hash joins. For object statistics, we can gather statistics in pending mode, test them in a few sessions, and publish them when we are happy with them. But can you do the same for system statistics? It can be risky to try it, so I've done it for you in my lab.
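For comparison, this is what the pending-statistics workflow looks like for object statistics (a sketch on a hypothetical SCOTT.EMP table); the question of this post is whether the same mechanism applies to system statistics:

-- Gather into pending stats instead of publishing them
exec dbms_stats.set_table_prefs('SCOTT','EMP','PUBLISH','FALSE');
exec dbms_stats.gather_table_stats('SCOTT','EMP');

-- Test them in one session only
alter session set optimizer_use_pending_statistics=true;

-- When happy with the plans, publish them (or delete them)
exec dbms_stats.publish_pending_stats('SCOTT','EMP');
-- exec dbms_stats.delete_pending_stats('SCOTT','EMP');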

Test case in 11g


SQL> select banner from v$version where rownum=1;

Oracle Database 11g Enterprise Edition Release - Production

SQL> create table DEMO as
           select rownum id , ora_hash(rownum,10) a , ora_hash(rownum,10) b , lpad('x',650,'x') c 
           from xmltable('1 to 100000');

Table created.

Here are my system statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 08:11
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:11
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

I check a full table scan cost:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  2752   (1)| 00:00:34 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  2752   (1)| 00:00:34 |

No surprise here. I have 10000 blocks in my table, SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED = 12 ms and MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 26 ms. Then the cost, based on an MBRC of 8, is ( 26 * 10000 / 8 ) / 12 = 2700.
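Just to make the arithmetic explicit, the same figures can be recomputed from the system statistics above (a sketch; the 8192-byte block size, 10000 blocks and MBRC of 8 are the values assumed in the text):

-- Recompute SREADTIM, MREADTIM and the full scan I/O cost from the noworkload stats
SELECT 10 + 8192/4096                                             sreadtim_ms,
       10 + 8192*8/4096                                           mreadtim_ms,
       round(((10 + 8192*8/4096) * 10000/8) / (10 + 8192/4096))   full_scan_cost
FROM dual;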


Pending stats in 11g

I set 'PUBLISH' to false in order to have pending statistics:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

Then I set some system statistics manually to simulate a fast storage:

17:14:38 SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

17:14:38 SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I run the same explain plan:

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  1643   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  1643   (2)| 00:00:02 |

The cost is better. But I'm not using pending statistics, which means that the published stats have been changed - despite the PUBLISH global preference being set to FALSE:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM           1
                     SYSSTATS_MAIN    IOTFRSPEED     204800
                     SYSSTATS_INFO    DSTART                06-10-2015 08:14
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:14
                     SYSSTATS_INFO    FLAGS               1
                     SYSSTATS_INFO    STATUS                COMPLETED

As you see, SYS.AUX_STATS$ shows my modified values (note that the date/time did not change, by the way). So be careful: when you set, gather or delete system statistics in 11g, you don't have the pending/publish mechanism. It's the kind of change that may have a wide impact, changing all your execution plans.


With the values I've set, SREADTIM is near 1 ms and MREADTIM is about 1.3 ms, so the cost is ( 1.3 * 10000 / 8 ) / 1 = 1625, which is roughly what has been calculated by the CBO on my new not-so-pending statistics.


If you look at 12c you will see new procedures in dbms_stats which suggest that you can have pending system statistics:

SQL> select banner from v$version where rownum=1;

Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> select procedure_name from dba_procedures where object_name='DBMS_STATS' and procedure_name like '%PENDING%';


but be careful, they are not documented. Let's try it anyway. I start as I did above, with a demo table and default statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  2752   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  2752   (1)| 00:00:01 |

I set PUBLISH to false and set manual system stats:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I check the SYS.AUX_STATS$ table:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

Good! I still have the previous values here. The new stats have not been published.


The pending stats are stored in the history table, with a date in the future:

SQL> select savtime,sname,pname,pval1,pval2 from sys.wri$_optstat_aux_history where pval1 is not null or pval2 is not null and savtime>sysdate-30/24/60/60 order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
01-dec-3000 01:00:00 SYSSTATS_MAIN    CPUSPEEDNW       2725
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOSEEKTIM          10
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOTFRSPEED     204800
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTART                06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTOP                 06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    FLAGS               1
01-dec-3000 01:00:00 SYSSTATS_INFO    STATUS                COMPLETED

That's perfect. It seems that I can gather system statistics without publishing them. And I don't care about the Y3K bug yet.


12c use pending stats = true

First, I'll check that a session can use the pending stats if chosen explicitly:

SQL> alter session set optimizer_use_pending_statistics=true;

Session altered.

then I run the query:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO2;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1308   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1308   (1)| 00:00:01 |

The cost is lower. This is exactly what I expected with my new - unpublished - statistics. Good. I don't know why it's lower than in 11g; maybe the formula has changed. This is another place for comments ;)


12c use pending stats = false

Ok, I checked that the published statistics are the same as before, but let's try to use them:

SQL> alter session set optimizer_use_pending_statistics=false;

Session altered.

and once again run the same query:

SQL> set autotrace trace explain

SQL> select * from DEMO DEMO3;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1541 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1541 |

   - cpu costing is off (consider enabling it)

Oh. There is a problem here. 'cpu costing is off' means that there are no system statistics. The cost has been calculated as it was in old versions without system statistics. This is bad. I have gathered pending statistics, not published them, but all sessions have their costing changed now.



Just a look at the 10053 trace shows that I have a problem:

System Stats are INVALID.
  Table: DEMO  Alias: DEMO3
    Card: Original: 80500.000000  Rounded: 80500  Computed: 80500.000000  Non Adjusted: 80500.000000
  Scan IO  Cost (Disk) =   1541.000000
  Scan CPU Cost (Disk) =   0.000000
  Total Scan IO  Cost  =   1541.000000 (scan (Disk))
                       =   1541.000000
  Total Scan CPU  Cost =   0.000000 (scan (Disk))
                       =   0.000000
  Access Path: TableScan
    Cost:  1541.000000  Resp: 1541.000000  Degree: 0
      Cost_io: 1541.000000  Cost_cpu: 0
      Resp_io: 1541.000000  Resp_cpu: 0
  Best:: AccessPath: TableScan
         Cost: 1541.000000  Degree: 1  Resp: 1541.000000  Card: 80500.000000  Bytes: 0.000000

It seems that with pending statistics the optimizer can't simply get the published values, and falls back as if there were no system statistics. This is obviously a bug. I've not used the undocumented new functions - they were used in the background - but it's totally supported to set PUBLISH to FALSE and then gather system statistics. The behavior should be either the same as in 11g - publishing the gathered stats - or gathering into pending stats only, with sessions continuing to use the published ones by default.



In 11g, be careful, system statistic changes are always published.

In 12c, don't gather system statistics when PUBLISH is set to false. We can expect this nice new feature in further versions, but for the moment it messes everything up. I'll not open an SR yet, but I hope it'll be fixed in future versions.


Further investigations were done by Stefan Koehler in this Twitter conversation:

@FranckPachot IOSEEKTIM=1 is not accepted/set. Reason for cost drop to 1308 in case of pending SYS stats … 1/2

— Stefan Koehler (@OracleSK) June 11, 2015

Flame Graph for quick identification of Oracle bug

Wed, 2015-06-10 04:28

Most of my performance stories start with a screenshot of Orachrome Lighty, my preferred tool to get a graphical view of database performance, in Standard and Enterprise Edition without any options:


quickly exchange code or text between workstations or teams

Tue, 2015-06-09 17:20
In a recent project I faced the following situation: on the one hand I had to execute scripts on a customer's workstation, while on the other hand I had to integrate the results of these scripts into a report on my own workstation. The question was how to do this efficiently without sending dozens of mails to myself.

DOAG Middleware Day: WebLogic von allen Seiten beleuchtet

Tue, 2015-06-09 09:48


This year, I had the opportunity to participate in the Middleware Day organized by the “Deutsche Oracle Anwendergruppe” (DOAG) in Düsseldorf. As you would expect, all sessions were given in German (“deutschsprachig”). I was surprised to find that the German courses provided by dbi services and offered to their employees to improve their language skills were not a waste of time: I understood all the sessions, I suppose ;) On the other hand, speaking and communicating with the other participants was more challenging.


Let’s get back to our topic, the DOAG Middleware Day. In this blog post, I will quickly describe the sessions from my point of view without going into detail. To get more detail, just participate in the event.


At the beginning, the “Begrüssung und Neues von der DOAG” (welcome and news from the DOAG) directly caught my attention, as there are two other interesting events in Q3 this year.


  • The 23rd of September: Usability and UX-Design mit ADF, Apex und Forms! Verträgt sich das mit Performance? - detail
  • The 24th of September: DOAG OpenStack Day 2015 - detail


And then the technical and interesting sessions began.


The first one gave us some tips and tricks for tuning ADF web applications on WLS 12c. Some interesting information was provided. The speaker really knew all the possibilities of ADF customization and optimization that can be applied declaratively in JDeveloper.


The second session gave us some information regarding the Oracle Cloud offering with the Java Cloud Service. The speaker described what the Oracle Cloud offering is, as well as the pricing. He demonstrated the usability and the features provided to an administrator to easily provision his cloud space, and made a live demo of how simple it is to deploy a JEE application from his NetBeans IDE to the Java Cloud Service. He also made an interesting demo of an HTML5 application on a WebLogic server set up in a cluster: WebLogic directly managed the synchronization of the cluster nodes for a WebSockets application hosted on the cluster.


The third session covered a practical SSO use case demonstrating the architecture and the integration of Forms applications into a new CRM system. It covered the following components: Oracle Forms, Reports and Discoverer, Oracle Access Management, Oracle Internet Directory, the Kerberos authentication mechanism, and Microsoft Active Directory ASO.


The next session also covered a practical use case, including the drawbacks of implementing WebLogic on ODA at a customer. The speaker covered the physical architecture and showed us some performance test results.


The last session concentrated on WebLogic cluster features and capabilities. Unfortunately the main part was a demo which ran into a problem. The presenter remained cool and was able to manage it quite well.


It was quite a good day in Germany with interesting presentations.

SQL Server 2016 CTP2: Stretch database feature - Part 1

Tue, 2015-06-09 03:51

SQL Server 2016 CTP 2 has introduced some interesting new features such as Always Encrypted, Stretch Database, the configuration of tempdb during SQL Server installation, and so on.

Regarding the configuration of tempdb in SQL Server 2016 CTP 2, I recommend a good article called SQL Server 2016 CTP2: first thoughts about tempdb database by David Barbarin.


In this article, I will focus on the Stretch database feature!


What is the Stretch Database feature?

This new feature allows you to extend on-premises databases to Microsoft Azure. In other words, you can use the Microsoft cloud as additional storage for your infrastructure.

This can be useful if you have some issues with your local storage, such as available space.



First, you need to enable the option on the server by running the stored procedure 'sp_configure'. It requires at least serveradmin or sysadmin permissions.
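A minimal sketch of that server-level step, assuming the 'remote data archive' option name documented for the CTP:

-- Enable the Stretch Database feature at the instance level
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO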





Of course, you also need a valid Microsoft Azure subscription and your credentials. Be careful: a SQL Database server and Azure storage will be used for this new feature.


Enable Stretch for a database

After enabling this feature at the server level, you need to enable it for the desired database.

It requires at least db_owner and CONTROL DATABASE permissions.

By default, it will create a SQL Database server with the Standard service tier and the S3 performance level. To fit your needs, you can change the level of the service afterwards.

Everything is done using a wizard in Management Studio. To open the wizard, proceed as follows:



Skip the 'Introduction' step to access the 'Microsoft Sign-In' step:



You need your Microsoft Azure credentials to access your subscription. Once this is done, you can click 'next'.




You have to select an Azure location. Of course, for better performance, you should select the closest location to your on-premises server.
You also need to provide credentials for the Azure SQL DB server which will be created through the wizard.
The last step is to configure the SQL Database firewall in Azure to allow connections from your on-premises server. To do this, you must specify a custom IP range or use the current IP of your instance.

Then, click the 'next' button. A summary of all your configuration is displayed. Click the 'next' button again.



The configuration is now completed! The feature is enabled for your database.

With Visual Studio, you can connect to the SQL Database server which is in Azure. You can see the SQL Database server recently created:




At the moment, there is no table stored in Azure, because we have not enabled the feature for a table yet. In my next blog, I will show you how to do this!

SQL Server 2016 : availability groups and the new potential support for standard edition

Mon, 2015-06-08 13:00

In my first blog about availability groups with SQL Server 2016, I talked quickly about the new interesting option DB_FAILOVER. In this blog post, I will continue by introducing the potential support of availability groups in Standard Edition (based on the latest Microsoft Ignite news). Yes, this sounds like great news because it will increase the scope of possible customers, but bear in mind that it potentially concerns the Standard Edition (not in its final shape, I guess) and we may expect some limitations. Let's have a look at the potential limitations in this blog post.

First of all, you’ll notice a new option called “Basic Availability Group” from the configuration wizard as shown below:




At this point we may wonder what "Basic Availability Group" means exactly. Let me speculate: this option allows us to simulate the availability group feature of the Standard Edition. I guess this option will disappear with the first SQL Server 2016 RTM release. In addition, the word “Basic” tends to suggest some limitations, so let’s try to configure what I will call a BAG (Basic Availability Group) in this blog post.

The first thing I noticed is that the availability group can include only one database. In other words, adding a second database is not possible and you will face the following error from the GUI:




Ok, let’s continue. The next limitation concerns the read-only capabilities on the secondary replicas, which are not supported with BAGs. From the GUI, I have no other choice than "No".




Likewise, if I try to change the Readable Secondary value for the SQL162 instance, I will also face the following error message:




Msg 41199, Level 16, State 8, Line 1 The specified command is invalid because the AlwaysOn Availability Groups allow_connections feature is not supported by this edition of SQL Server. For information about features supported by the editions of SQL Server, see SQL Server Books Online.


Next, configuring backup preferences is not possible from the GUI. All parameters are greyed out as shown below:




Going ahead, after installing my availability group, I noticed that the backup preference policy was set to Primary.

Finally, configuring a listener is also not supported with BAGs. Again, all configuration options are unavailable from the GUI. However, adding a listener after implementing the availability group gives us the opportunity to enter the listener information, but it will raise an error message at the final step:




What about adding a third replica to a BAG?  In fact, we're limited to 2 replicas and we are not able to add another one, either from the GUI, because the option is greyed out, or from a script, because it will raise the following error message:


-- Adding a third replica
ALTER AVAILABILITY GROUP [BAG]
ADD REPLICA ON N'SQL163'
WITH (
       ENDPOINT_URL = N'TCP://SQL163.dbi-services.test:5022',
       FAILOVER_MODE = MANUAL,
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       BACKUP_PRIORITY = 50,
       SECONDARY_ROLE
       (
              ALLOW_CONNECTIONS = NO
       )
);
GO
Msg 35223, Level 16, State 0, Line 21
Cannot add 1 availability replica(s) to availability group 'BAG'. The availability group already contains 2 replica(s), and the maximum number of replicas supported in an availability group is 2.


To summarize, BAG comes with a lot of restrictions. When you create an availability group (on Standard Edition), you will be able to benefit from:

  • Only 2 replicas, with either synchronous or asynchronous replication capabilities (both are available in the current CTP2)
  • One and only one database per availability group
  • Backup capabilities only on the primary
  • New DB_FAILOVER option

However, you will not be able to use:

  • Failover capabilities by using the listeners (the listeners are not available with BAG)
  • Read-only capabilities (database snapshots are available with the Evaluation Edition, but will that be the case with the future Standard Edition?)


What about client failover capabilities in this case? Indeed, as said earlier, we cannot rely on the listener in order to switch over transparently to a new replica, but at the same time, we are able to configure automatic failover for the availability group itself. A basic connectivity test (from a custom PowerShell script) after switching my availability group to a different replica raised the following error message:


The target database, 'bag', is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group. For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.


At this point, I expected to get at least the same failover mechanism provided by the mirroring feature (assuming that the BAG is the future replacement of database mirroring, as said at the last Microsoft Ignite in Chicago). Does it mean that we’ll have to add the failover partner attribute to the connection string on the client side? Let's try by modifying the connection string of my PowerShell script:


$dataSource = "SQL161";
$dataPartner = "SQL162";
$user = "sa";
$pwd = "xxxxx";
$database = "bag";
$connectionString = "Server=$dataSource;uid=$user;pwd=$pwd;Database=$database;Integrated Security=False;Failover Partner=$dataPartner;Initial Catalog=bag";


- Test with SQL161 as the primary




- Test after switching my availability group from SQL161 to SQL162




Ok it seems to work correctly now.

In conclusion, the Basic Availability Group feature seems to be designed to replace the well-known mirroring feature, which is now deprecated, but with the limited-scope advantages of availability groups. I believe we'll have other opportunities to discuss this feature in the near future because, at this point, it has a good chance of not yet being carved in stone.


SharePoint Governance? Why?

Mon, 2015-06-08 06:16

Companies are struggling with SharePoint: it has been installed, then abandoned. The business is not driven to make SharePoint succeed.
From this point, you need to draw up governance for SharePoint.
Governance focuses on the technology, business and human sides of SharePoint.


What is GOVERNANCE?

Governance is the set of:

  • policies
  • roles
  • responsibilities
  • processes

that help and drive a company's IT team and business divisions in order to reach their GOALS.
Good governance therefore means establishing sufficiently robust and thorough processes to ensure not only that those objectives can be delivered, but that they are delivered in an effective and transparent way.

Example: with permission governance, it's easy to manage who is authorized to get the EDIT permission, which allows a user to contribute AND delete (list/library).

In other words, we can equate Governance to something we see in our daily life.


  What happens with NO GOVERNANCE?

No governance means nothing to be followed and everything going in all directions!
Without proper governance, be sure that business objectives won't be achieved and, at the very least, the SharePoint implementation will fail.

Example: if there is no governance about "Site Creation", everybody would be able to create sites, and probably in the wrong way. Imagine a SharePoint site without any permission levels, etc.

You might meet a chaotic situation as depicted by the traffic jam below:


A Bad Governance will introduce:

  • Social Exclusion
  • Inefficiency
  • Red Tape
  • Corruption
How to start with Governance?

Step by step, define a Governance implementation:

1. The Governance Committee must be organised

A governance committee includes people from the Business & IT divisions of an organization.

2. Decide the SharePoint Elements to be covered

SharePoint Elements that can be governed:

  • Operational Management
  • Technical Operations
  • Site and Security Administration
  • Content Administration
  • Personal and Social Administration

3. Define and implement Rules & Policies

The implementation includes the good writing of Rules & Policies:

  • Setting up Rights & Permissions for Users & Groups
  • Restrict Site Collection creation
  • Setup content approval & routing
  • Setup Locks & Quotas
  • Set Document Versioning Policies
  • Set Retention / Deletion Policies
  • Restrict page customization & usage of SharePoint Designer
  • Setup workflows for automating approvals & processes (using SharePoint Tool or a third party tool)

Good communication with users and adoption of those elements will drive higher productivity and fewer support calls.

4. Drive & Reinforce Governance

Regular meetings are conducted by the Governance Committee to review governance; any necessary change to the Rules & Policies is applied during this phase.

Use the Best practices for governance plans:

  • Determine initial principles and goals
  • Classify your business information
  • Develop an education strategy
  • Develop an ongoing plan

Technet source:


Governance and teamwork are essential to a smart implementation!


QlikView Tips & Tricks: The Link Table

Mon, 2015-06-08 01:00

In this blog, I will show you how to bypass a “Synthetic Key” table in QlikView.

Why bypass a “Synthetic Key” table?

If you have multiple links between two tables, QlikView automatically generates a “Synthetic Key” table (here the “$Syn 1” table).

QlikView best practices recommend removing this kind of key table, for performance reasons and for the “correctness” of the result.


How to bypass this “Synthetic key” table?

The “Link Table” is the solution to bypass the generation of a synthetic key table.

This table will contain two kinds of fields:

  • A “foreign key”, made with the fields that are common to the two tables
  • The fields that have been used to create the new “foreign key”

This “Link Table” will have the following structure:


In our case, the structure of the “Link Table” will be the following:


How to proceed? Add the needed fields in the linked tables

Before creating the “Link Table”, we must add the fields to the tables that we want to link together.

Remark: A best practice to create this “Foreign_Key” field is to separate the different fields with “|”.

So, in our case, the fields in the table SALESDETAILS will be added as follow:


The fields in table BUDGET will be added as follow:


Create the “Link table”

The fields to create the “Link Table” are now added. So we can create the table as follow:

Click on “Tab / Add Tab” and name it “LINK_TABLE” (1).


Type the following script:

(1) The name of the table

(2) The names of the fields should be the same in each table

(3) Use the CONCATENATE instruction


Reload the data (1) and check the result (2)


The result should be like this:


An alternative to DBA_EXTENTS optimized for LMT

Sun, 2015-06-07 11:45

This is a script I have had for several years, since tablespaces became locally managed. When we want to know which segment a block (identified by file id, block id) belongs to, the DBA_EXTENTS view can be very slow when you have lots of datafiles and lots of segments. This view, using the underlying X$ tables and constrained by hints, is faster when queried for one FILE_ID/BLOCK_ID. I wrote it in 2006 when dealing with lots of corruptions on several 10TB databases with 5000 datafiles.

Since then, I've used it only a few times, so there is no guarantee that the plan is still optimal in current versions, but the approach of starting by filtering the segments that are in the same tablespace as the file_id makes it optimal for a search by file_id and block_id.

The script

Here is the creation of the DATAFILE_MAP view:

create or replace view datafile_map as
with
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn,
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn,
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialized */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn, tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
  SELECT /*+ use_nl(e) ordered */
  s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
  FROM s,l e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
  UNION ALL
  SELECT /*+ use_nl(e) ordered */
  s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
  FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
  UNION ALL
  SELECT /*+ use_nl(e) use_nl(t) ordered */
  f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
  FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
  WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
  UNION ALL
  SELECT /*+ use_nl(e) use_nl(t) ordered */
  f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
  FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
  WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS ( /* segment header information; the source is assumed to be SYS_DBA_SEGS, whose columns match the ones referenced here */
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block segbid,s.segment_type,s.owner,s.segment_name,s.partition_name
  FROM sys.sys_dba_segs s
 )
SELECT
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
 tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files;
Sample output
COLUMN   partition_name ON FORMAT   A16
COLUMN   segment_name ON FORMAT   A20
COLUMN   owner ON FORMAT   A16
COLUMN   segment_type ON FORMAT   A16

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3782 between block_id and block_id + blocks - 1
SQL> /

-------- -------- ------- ---------------- ---------------- ---------------- ----------------
    1326     3781      32 free space

you identified a free space block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3982 between block_id and block_id + blocks - 1
SQL> /

-------- -------- ------- ---------------- ---------------- -------------------- ----------------
    1326     3981       8 TABLE PARTITION  TESTUSER         AGGR_FACT_DATA       AFL_P_211

you identified a data block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=202 and 100 between block_id and block_id + blocks - 1
SQL> /

---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
       202          1       1280 tempfile                          C:O102TEMP02.DBF

you identified a tempfile file_id

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1 and block_id between 0 and 100 order by file_id,block_id;

---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
         1          9          8 ROLLBACK         SYS              SYSTEM
         1         17          8 ROLLBACK         SYS              SYSTEM
         1         25          8 CLUSTER          SYS              C_OBJ#
         1         33          8 CLUSTER          SYS              C_OBJ#
         1         41          8 CLUSTER          SYS              C_OBJ#
         1         49          8 INDEX            SYS              I_OBJ#
         1         57          8 CLUSTER          SYS              C_TS#
         1         65          8 INDEX            SYS              I_TS#
         1         73          8 CLUSTER          SYS              C_FILE#_BLOCK#
         1         81          8 INDEX            SYS              I_FILE#_BLOCK#
         1         89          8 CLUSTER          SYS              C_USER#
         1         97          8 INDEX            SYS              I_USER#

you mapped the first segments in the system tablespace

Try it on a database with lots of segments and lots of datafiles, and compare it with DBA_EXTENTS. Then you will know which one to choose in case of emergency.

SQL Server 2016: availability groups and the new option DB_FAILOVER

Wed, 2015-06-03 14:41

Let’s continue the discovery of SQL Server 2016 with one of my favourite topics: AlwaysOn availability groups (AAGs). There are some improvements in this area.

First of all, let’s introduce one of them (the DB_FAILOVER option) by recalling the availability group behaviour in previous versions of SQL Server. A misconception that exists at many customer sites is that an availability group detects a database failure. No, this is not a joke but the pure reality: until the introduction of SQL Server 2016, AAGs were designed to detect issues only at the SQL Server instance level. You can verify it by yourself by looking at the different possible failover condition levels.

Indeed, SQL Server 2016 adds a new layer of failover capability with the new DB_FAILOVER option. According to the BOL, a new response will be taken by SQL Server when a database on the primary replica goes offline. Maybe we should clarify what offline means here: any status other than ONLINE for a database in the availability group will trigger an automatic failover. Keep in mind that this new feature is set at the group level. It means that all databases included in the group will be concerned by this option.

During my session at the previous event Journées SQL Server 2014 in Paris, I had the opportunity to show that no action was taken by the availability group for a database that had lost its transaction log. To be more precise, I simulated a scenario where the disk that contains the transaction log file has been lost. This is probably the most common scenario where we may encounter a database file failure. Let’s perform the same today with SQL Server 2016.

Let’s begin with the creation script of the availability group 2016Grp. I put only the interesting part below:
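Since only a screenshot of the script is shown, here is a minimal sketch of what such a creation script could look like with the new options; the endpoint URLs and the DTC setting are assumptions, not the exact script behind the screenshot:

-- Sketch: availability group using the new DB_FAILOVER (and DTC support) options
CREATE AVAILABILITY GROUP [2016Grp]
WITH (DB_FAILOVER = ON, DTC_SUPPORT = NONE)
FOR DATABASE [killerdb], [SSISDB]
REPLICA ON
    N'SQL161' WITH (ENDPOINT_URL = N'TCP://SQL161.dbi-services.test:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC),
    N'SQL162' WITH (ENDPOINT_URL = N'TCP://SQL162.dbi-services.test:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC);
GO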




This is a basic availability group with two replicas (SQL161 and SQL162) configured with synchronous replication and automatic failover that includes two databases named killerdb and SSISDB. You may notice the new options at the top of the script: DB_FAILOVER and DTC support that are available from the GUI as well:





Let’s continue by creating a simple table dbo.t in the killerdb database and then inserting a bunch of data.
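The actual script is in the screenshot below; a hypothetical equivalent (the table name dbo.t and the 200-row count are the only details taken from the post, the column layout is assumed) could look like this:

USE killerdb;
GO
-- Hypothetical table definition; the post only states that dbo.t ends up with 200 rows
CREATE TABLE dbo.t (id INT IDENTITY(1, 1) PRIMARY KEY, payload CHAR(100) NOT NULL DEFAULT 'x');
GO
INSERT INTO dbo.t DEFAULT VALUES;
GO 200  -- SSMS/sqlcmd batch repetition to insert 200 rows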




As expected the table contains 200 records.

Now, it’s time to unplug the disk that contains the killerdb transaction log file from my computer. After that, a new insertion of 200 additional rows raises the following error:


Msg 9001, Level 21, State 3, Line 7
The log for database 'killerdb' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 0, Level 20, State 0, Line 6
A severe error occurred on the current command. The results, if any, should be discarded.


And good news! The availability group 2016Grp switched automatically to the SQL162 replica in my case. Bear in mind that for one database failure, the whole group will fail over. Let’s take a look at the AlwaysOn_health session:
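The screenshots below come from SSMS, but the same events can also be read from the session's file target with T-SQL; a sketch, assuming the default AlwaysOn_health .xel file location:

-- Read the AlwaysOn_health extended events files (default .xel location assumed)
SELECT CAST(event_data AS XML) AS event_data_xml
FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL);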




You may see the response from the availability group to error 1117 (an I/O error related to my unplugged disk). On the new secondary, the killerdb database is no longer online; it is in the RECOVERY_PENDING state.


SELECT
       @@SERVERNAME AS server_name,
       name AS database_name,
       state_desc
FROM sys.databases
WHERE name = 'killerdb';
GO




This first improvement is surely a good way to enhance the global availability of an AlwaysOn availability group. We'll discuss the other interesting new options in the next blog posts.

Stay connected!



Oracle memory advisors: how relevant?

Wed, 2015-06-03 11:35

Do you look at the memory advisors? I usually don't, but I still show them during the performance tuning workshop because there is no easy way to get an idea about memory sizing. This morning, while giving the training, I showed on our workshop environment what the problem is: they are based on statistics cumulated since instance startup, which can cover months of heterogeneous activity, and then the result is probably meaningless.

If you want to trust the advisors, then you need to know which measures they are based on. Let's check it.


Here is how we query V$DB_CACHE_ADVICE in order to see the estimated physical reads for different buffer cache sizes:

SELECT a.size_for_estimate "Buffer size MB",
       a.size_factor "Factor size",
       round(a.estd_physical_read_time/1000,2) "Estim. time (s)",
       a.estd_physical_read_factor "Estim. time factor",
       a.estd_physical_reads "Estim. nb physical read"
FROM  sys.v$db_cache_advice a
ORDER BY a.size_for_estimate;

Buffer size MB Factor size Estim. time (s) Estim. time factor Estim. physical read
-------------- ----------- --------------- ------------------ --------------------
            24       .0909            1.31             8.4871             23424349
            48       .1818            1.16             7.5612             20868825
            72       .2727             .37             2.3838              6579289
            96       .3636             .31             1.9787              5461235
           120       .4545             .26             1.6831              4645325
           144       .5455             .23             1.4912              4115679
           168       .6364             .21             1.3713              3784848
           192       .7273              .2             1.2564              3467715
           216       .8182             .18             1.1418              3151277
           240       .9091             .16             1.0568              2916629
           264           1             .16                  1              2759998
           288      1.0909             .15              .9351              2580935
           312      1.1818             .14              .8736              2411003
           336      1.2727             .13              .8291              2288418
           360      1.3636             .12              .7918              2185486
           384      1.4545             .12              .7537              2080272
           408      1.5455             .11              .7035              1941706
           432      1.6364              .1              .6479              1788252
           456      1.7273             .09              .6021              1661696
           480      1.8182             .09               .554              1529086

Look at the factor 1 - the current values. The advisor is based on 2.7 million physical reads. Let's see if it is based on statistics since instance startup or a shorter period.


I'll display the instance statistics (cumulative since instance startup) that measure physical reads:

SQL> select value,name from v$sysstat where name like 'physical reads %';

---------- ----------------------------------------------------------------
   2760403 physical reads cache
  86342292 physical reads direct
     33656 physical reads direct temporary tablespace
     76909 physical reads cache prefetch
     13105 physical reads prefetch warmup
         0 physical reads retry corrupt
      3428 physical reads direct (lob)
         0 physical reads for flashback new
         0 physical reads cache for securefile flashback block new
         0 physical reads direct for securefile flashback block new

Here it's clear: the advisor was based on the 2.7 million physical reads to cache. Those values are cumulated since instance startup. If the instance was started a long time ago, then there is nothing relevant here: activity is not regular, memory components have been resized several times, etc. And if the instance was started recently, then the cache activity is not significant: you did lots of physical reads just to load the cache.

AWR / Statspack

Ok, we know that V$ views are cumulative from instance start. When we want to look at statistics over a shorter period of time, we have AWR or Statspack. Here is the Buffer Pool Advisory section:

Buffer Pool Advisory                                DB/Inst: DB1/DB1  Snap: 61
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

                                   Phys      Estimated                  Est
    Size for   Size      Buffers   Read     Phys Reads     Est Phys % DBtime
P    Est (M) Factor  (thousands) Factor    (thousands)    Read Time for Rds
--- -------- ------ ------------ ------ -------------- ------------ -------
D         24     .1            3    8.5         23,422            1  1307.0
D         48     .2            6    7.6         20,866            1  1164.0
D         72     .3            9    2.4          6,579            1   368.0
D         96     .4           12    2.0          5,461            1   306.0
D        120     .5           15    1.7          4,645            1   260.0
D        144     .5           18    1.5          4,116            1   231.0
D        168     .6           21    1.4          3,785            1   213.0
D        192     .7           24    1.3          3,468            1   195.0
D        216     .8           26    1.1          3,151            1   177.0
D        240     .9           29    1.1          2,917            1   164.0
D        264    1.0           32    1.0          2,760            1   155.0
D        288    1.1           35    0.9          2,581            1   145.0
D        312    1.2           38    0.9          2,411            1   136.0
D        336    1.3           41    0.8          2,289            1   129.0
D        360    1.4           44    0.8          2,186            1   123.0
D        384    1.5           47    0.8          2,080            1   118.0
D        408    1.5           50    0.7          1,942            1   110.0
D        432    1.6           53    0.6          1,788            1   101.0
D        456    1.7           56    0.6          1,662            1    94.0
D        480    1.8           59    0.6          1,529            1    87.0
This looks like the cumulative values from instance startup. But I want to be sure, not guess. The Instance Activity Stats section can show the number of physical reads to cache during this period of time:
Instance Activity Stats                         DB/Inst: DB1/DB1  Snaps: 60-61
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------

physical reads                                1,973           22.9         140.9
physical reads cache                          1,863           21.7         133.1
Only a few thousand of them here. This confirms that the advisor is not based on delta values.


From what we see, the memory advisors are based on values cumulated since instance startup. I always advise focusing the performance analysis on a short period where activity is regular, so I can't advise using those advisors. You can look at them in the few days following instance startup, just to get an idea, but don't rely only on that. In my opinion, that's a bug: there is no reason to show cumulative values in a Statspack / AWR report. I think that the advisor could do similar estimations on delta values. Anyone volunteering to open a bug?

Ways to improve your SharePoint Intranet

Wed, 2015-06-03 02:23
The world today is changing: companies use different paths to get better communication with their staff, to boost productivity, and to improve the effectiveness of their strategy and business direction.
SharePoint has become a reference tool in this respect; however, many organizations still struggle to get the most out of the features it offers and find it difficult to increase adoption. In this article, we will see the key points to improve your SharePoint environment.

SharePoint 2013 has some great features that are often overlooked; for example, the "follow" and "share" features are simple ways to track which documents are interesting and to share information with others.
FOLLOW: users are notified when a change is made to anything they are following, whether conversations or documents.
SHARE: allows individuals to share relevant sites and documents with colleagues they feel will benefit from the content. Both these features are ways for people in the organisation to take control of getting up-to-date information and distributing what they have learnt to others.



It could be like a gossip.. ah ah .. Follow and Share!

SharePoint "My Site" can be compared to a user’s Facebook page for the workplace.
MY SITE: serves as a user’s personal page for sharing information about their skills, expertise and interests with others in the company. It also gives a user a central location to manage and store contacts, documents and anything they want.
A "My Site" can be personalized so that it is unique for each user.


Share carefully, "My Site" is a place to share professional information!


In this mobile age, it’s fair to say the biggest improvement any organisation can make to improve usage of an intranet is to make it mobile compatible. Companies may want to consider rolling these features out slowly or even starting with a test project in order to get feedback from business people.
The key is to involve the users themselves to improve adoption step by step.


Access the information and documents needed from everywhere!


Companies that get the SharePoint implementation right often start simply, with many of the features disabled.
Taking the time to do things following the BEST PRACTICES, step after step, will drive the SharePoint implementation to success; leave the "bells and whistles" until last.


  • Event updates
  • Replacing slowly the existing Intranet
  • Create document management
  • Create forms management
  • Use business process and workflow management
  • Start sharing B.I dashboards and reports
  • ...


"The shortest way to do many things is to do only one thing at once"


To get SharePoint used, provide training and support to staff. This is the key to sustainable adoption.
The sessions must be adapted to the different user profiles: business user, power user, site owner, site collection admin. The minimum knowledge should be Business User level.
Use knowledge resources such as FAQs, checklists, etc.

Here are 2 points from Microsoft regarding Training and Support:


Support users by creating a training plan

  • Establish short, just-in-time training options for users
  • Ensure that your site owners are properly trained before giving them site ownership
  • Provide training to content contributors to ensure effective content management
  • Create a site owner community to enable users to help each other

Ensure ongoing success by creating a user support plan

  • Establish a contact person for every page
  • Establish a SharePoint Center of Excellence within your organization to provide high-end support for users
  • Survey users on a regular basis to gather feedback and establish metrics
  • Ensure content gets moved from legacy platforms to SharePoint in a planned manner

more details: SharePoint Adoption Guide - Microsoft



End user adoption is the key to your SharePoint intranet success!

SQL Server 2016: Live query statistics

Tue, 2015-06-02 16:44

During my tests of the SQL Server 2016 CTP2, I noticed an additional icon in SQL Server Management Studio, right next to the Actual Execution Plan button, as shown below:




I decided to take a look at the SQL Server 2016 BOL and found a topic about this feature. It looks like a fun feature, so I decided to test it with one of my financial queries (developed for one of my customers), which usually takes a long time to run.

Let's go ahead: after running my long query, this first test was in fact inconclusive, because after opening a new tab I got stuck with the following message, even after stopping the query execution:




Maybe my query is too complex and I will have to investigate in the near future. Update 04.06.2015: it seems to be a bug in the current release of SQL Server 2016 (CTP2), so I will try again with the next CTP.

Let's continue by cutting my big query into smaller pieces of code; this time the test ran successfully. Indeed, I was able to see a “progressive” query plan showing how long the operators are taking as well as how far along they are in their work, with some awesome animation.




Basically, we get an “enhanced” execution plan tree. The dotted lines point out the operations in progress, with some additional information such as the current duration of each operator. Likewise, we can retrieve all the other usual information found in a query execution plan.

However, I noticed during my tests that the percentage calculation on the operators doesn't work when the row estimate is not correct (it gets stuck at 100% while the time keeps ticking away). At this point, I remembered that I had faced the same problem with the sys.dm_exec_query_profiles DMV introduced by SQL Server 2014 (please take a look at my blog post here). Let's perform the same test by using the sys.dm_exec_query_profiles DMV: as expected, I noticed the same duration issue when the cardinality estimation was wrong.
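For reference, here is the kind of check I run against this DMV. It is a minimal sketch: the session_id value is an assumption to replace with your own, and the monitored session must expose its runtime plan (for example by running SET STATISTICS XML ON, or by being watched through Live Query Statistics):

SELECT node_id,
       physical_operator_name,
       row_count,
       estimate_row_count,
       -- this "percentage" gets stuck at (or beyond) 100% as soon as the estimate is wrong
       CAST(100.0 * row_count / NULLIF(estimate_row_count, 0) AS decimal(10, 1)) AS percent_complete,
       elapsed_time_ms
FROM sys.dm_exec_query_profiles
WHERE session_id = 53   -- assumption: session_id of the long-running query
ORDER BY node_id;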





I guess we will learn more about this interesting feature in the future. We're certainly on the right track to detect expensive operations just by looking at this pretty cool animation!



How to disable a SQL Plan Directive permanently

Tue, 2015-06-02 15:10

In 12c you will see a lot of SQL Plan Directives. Some are useful to get better execution plans, but some will trigger too much Adaptive Dynamic Sampling and that can become a big overhead, especially in Standard Edition. Then you don't want to drop them - or they will reappear. You can disable them, but what will happen after the retention weeks? Let's test it.

Disabled directive

A directive has been created which triggers too-expensive dynamic sampling. You don't want that, so you disabled it one year ago with:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
and everything is good. You're happy with that. Here is the directive:
SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

DIRECTIVE_ID         STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_STATE
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 28-APR-14 YES       NO      {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
The directive has not been used since April 2014 thanks to the 'enabled' set to NO.

If I run a query with a filter on those columns:

SQL> select count(*) Q1 from DEMO_TABLE where a+b=c+d;


23:10:32 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

select count(*) Q1 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

| Id  | Operation          | Name       |
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |

There is no dynamic sampling, and this is exactly what I want.

Retention weeks

My retention is the default: 53 weeks. Let's see what happens after 53 weeks. I can call the 'auto drop' job with dbms_spd.drop_sql_plan_directive passing a null instead of a directive_id:

SQL> exec dbms_spd.drop_sql_plan_directive(null);

PL/SQL procedure successfully completed.

Run a few queries

Then let's run a few queries on those table columns:

SQL> select count(*) Q2 from DEMO_TABLE where a+b=c+d;


SQL> select count(*) Q3 from DEMO_TABLE where a+b=c+d;


and check the execution plan:

SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

select count(*) Q3 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

| Id  | Operation          | Name       |
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |

   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

A directive has been used:
 SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

DIRECTIVE_ID         STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_STATE
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 15-MAY-15 YES       YES     {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
Oh! The directive is back and enabled!

Auto Drop

Here are the criteria for auto-drop. SPDs are considered for dropping when AUTO_DROP is YES and one of the following holds (a sketch of the retention check follows the list):

  • The SPD is flagged as redundant
  • One of the tables has been dropped (being in the recycle bin counts as dropped)
  • LAST_USED is from before the retention window
  • The state is NEW (LAST_USED is null) and CREATED is before the retention window
Do you see? Nothing about ENABLED YES/NO there...
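As a rough illustration of that retention criterion only, a query like the one below lists the directives the auto-drop job would consider. It is a sketch that assumes the default retention of 53 weeks (the current value can be read with dbms_spd.get_prefs('SPD_RETENTION_WEEKS')) and ignores the 'redundant' and 'dropped table' cases:

SQL> -- directives eligible for auto-drop on the retention criterion only
SQL> select directive_id, state, created, last_used, auto_drop, enabled
     from   dba_sql_plan_directives
     where  auto_drop = 'YES'
     and    nvl(last_used, created) < sysdate - 53*7;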


If you want to disable an SPD and be sure that it will never reappear, then you have to do both of the following:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'AUTO_DROP','NO');
Then, because AUTO_DROP is disabled, the directive will never be dropped automatically.
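To double-check it, you can simply replay the test from above: run the auto-drop job manually and verify that the directive is still there, with both flags set to NO:

SQL> -- manual run of the auto-drop job (null = all eligible directives)
SQL> exec dbms_spd.drop_sql_plan_directive(null);

SQL> select directive_id, enabled, auto_drop, state
     from   dba_sql_plan_directives
     where  directive_id = 14130932452940503769;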

Recap SharePoint Event Paris 2015

Tue, 2015-06-02 03:12



Guillaume Meunier and I went to the SharePoint Event 2015 in Paris on Saturday, May 30.

This event was well organized and on time! We learned about the new features of SharePoint, related to B.I., SQL, governance, collaboration & communication, workflow and process systems.

The sessions we attended were:

How OneDrive for Business revolutionized storing files in my business
Almost all companies use SMB file shares and network drives deployed on workstations to store and centralize the documents they produce. With SharePoint 2013, on-premises or online, you can cover this need while also providing mobility, an offline mode and a full web mode. In a document management implementation, this is the first step: storing and organizing documents. In addition, quality audits become much easier for the auditor. This session was presented by Kevin Trelohan and Hans Brender.


Making SharePoint Governance work for Businesses, IT and Users
With every SharePoint implementation comes the issue of governance. We all need it, but no one wants to do it. This session showed us, in a practical way, how to implement a good governance practice that will engage users, IT and the business throughout the entire lifetime of your SharePoint platform. We had an overview of practical tools and methods to overcome most of the issues we have to deal with, and a complete framework for SharePoint governance was shared. This session was the latest revision of the SharePoint governance session delivered by Anders Skjoenaa.


Integrating SharePoint into everyday working methods for a successful user adoption
For any organization, a SharePoint implementation means a significant investment, both human and financial.
Once the platform deployment and the content migration are achieved, many companies face the same issue: low usage by their business users.
AvePoint showed us their solutions to enhance business users' daily work: better management of internal meetings and projects directly in SharePoint, sharing SharePoint content with external stakeholders, using Outlook/Office to save documents directly into SharePoint, and a portal giving access to individualized service catalogs depending on the business needs.
A session presented by Emmanuel Deletang from AvePoint.


High Availability & Disaster Recovery in SharePoint 2013 with SQL Server Always On Availability Groups!
SQL Server is really the brain of SharePoint; in this session, Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP) gave us an overview of what any SharePoint consultant and DBA need to know regarding business continuity in SharePoint 2013. Of course SQL Server plays a major role in this story.
Topics covered:

  • Concepts of business continuity
  • SharePoint and Business continuity
  • Patterns and anti-patterns
  • SharePoint and SQL Server Always on Availability groups: what works, what doesn’t work (demos) (HA and DR)


If you need more information regarding SQL Server, please feel free to contact our experts: Stephane Haby, David Barbarin & Stephane Savorgano.

Power BI 365
Power BI is THE BI building block of Office 365.
Resolutely oriented towards self-service BI, it is intended for users who handle data in their line of business. But how do you do that when you are not a Business Intelligence specialist?
This session gave a complete tour of the Power BI functionalities: Power Query, Power Pivot, Power View, Power Map, Power Q&A, Power BI sites and the Power BI app. It helped us discover and understand the added value of self-service BI for users.

A quick reminder: dbi services BI specialist is Matthieu Munch, do not hesitate to contact him if you need more information regarding BI tools and consulting.


Automate processes with SharePoint?
When talking about business process automation or BPM solutions, the limits of SharePoint are often reached. There are then several possibilities: develop custom solutions, adapt the customer needs, or use third-party solutions (such as Nintex). There is never a single right way to go, but the most successful approach, I would say, is to follow the best practices that lead the business owner to satisfaction. We had a presentation of the various methods and their impact on costs, as well as the capabilities and constraints they induce.


The last session was driven by the whole MVP team; it was a summary of the MS Ignite sessions covering the new features of SharePoint 2016.

Please have a look at this blog regarding this subject: SharePoint 2016: What’s new? And What to expect?

SAP HANA SQL scripting optimization: the CE Functions

Tue, 2015-06-02 02:15

In SAP HANA, you have two possibilities to create calculation views:

  • Using the graphical method
  • Using the scripting method with CE functions

In this blog, I will demonstrate that CE functions can improve the performance of a calculation view.

First, I will give you some general information regarding the CE functions. After that, I will show you the two different ways to create a calculation view. And at the end, I will compare their performance using a SQL SELECT.

If you want more information regarding the SAP HANA technology, don't hesitate to attend the next dbi services event.

CE Functions

The CE functions encapsulate data-transformation functionality. They constitute an alternative to using SQL statements, as their logic is directly implemented in the SAP HANA calculation engine (CALC engine). Direct use of the CALC engine allows implementers to influence the execution of a procedure or a query, which, in some cases, is more efficient.
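To give a concrete idea, here is a minimal sketch comparing a plain SQL aggregation with its CE-function equivalent inside a SQLScript block; the table "MYSCHEMA"."SALES" and its columns are assumptions made up for the illustration:

-- plain SQL: the statement is handled by the SQL engine
out_sql = SELECT "CUSTOMER", SUM("AMOUNT") AS "AMOUNT"
          FROM "MYSCHEMA"."SALES"
          GROUP BY "CUSTOMER";

-- the same logic with CE functions, executed natively by the CALC engine
tab_sales = CE_COLUMN_TABLE("MYSCHEMA"."SALES", ["CUSTOMER", "AMOUNT"]);
out_ce    = CE_AGGREGATION(:tab_sales, [SUM("AMOUNT") AS "AMOUNT"], ["CUSTOMER"]);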

In the table below, you can find the list of the CE Functions and their use cases:


How to create a Calculation View

As I said at the beginning, you have two methods to create a calculation view in SAP HANA:

  • Using the graphical method
  • Using the Scripting method
Using the graphical Method

Right click on a package and select “New / Calculation View”


Select the “Graphical” type


Create your view following the steps below

        1. Select type of operation you want to perform

        2. Select your sources

        3. Join your sources

        4. Select the object you want to use in your Calculation view

        5. Join your “Aggregation operation” to the “Aggregation” box


Create your Calculation View Layout

          1. Click on the “Aggregation” box

          2. Select the object you want to have in your layout

          3. Check if the objects are selected and add calculated columns if needed


 Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view


Using the “SQL scripting” Method

Right click on a package and select “New / Calculation View”


Select the “SQL Script” type


Create your view following the steps below

          1. Click on the “Script View” box

          2. Type your SQL script

          3. Introduce the CE functions (a minimal sketch is shown below)
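As an illustration, here is a minimal sketch of what the body of such a script-based calculation view could look like. The table "MYSCHEMA"."SALES", its columns and the filter are assumptions made for the example; var_out is the output variable of the script view:

-- read the needed columns from a column table (hypothetical table and columns)
tab_sales = CE_COLUMN_TABLE("MYSCHEMA"."SALES", ["YEAR", "CUSTOMER", "AMOUNT"]);
-- keep only the columns and rows we need
tab_proj  = CE_PROJECTION(:tab_sales, ["YEAR", "AMOUNT"], '"YEAR" >= 2014');
-- aggregate and expose the result through the view's output variable
var_out   = CE_AGGREGATION(:tab_proj, [SUM("AMOUNT") AS "AMOUNT"], ["YEAR"]);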


Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view


SQL Performance comparison

Goal of the test

In this part, I will compare the SQL performance from two calculated views that have been built with the two different methods:

  • “Graphical” method
  • “SQL scripting” method
Description of the test

The same SELECT query will be sent to the database and we will check the server response time. The two SELECT statements use a different calculation view as their data source (a sketch of both queries follows the list below):

  • CAL_AMOUNT (graphical method)
  • CAL_AMOUNT_CE (SQL Scripting method)
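As a rough sketch of what these tests look like (the package name "mypackage" and the "YEAR"/"AMOUNT" columns are assumptions; activated calculation views are exposed as column views under the _SYS_BIC schema):

-- query against the graphical calculation view
SELECT "YEAR", SUM("AMOUNT") AS "AMOUNT"
FROM "_SYS_BIC"."mypackage/CAL_AMOUNT"
GROUP BY "YEAR";

-- same query against the script-based view built with CE functions
SELECT "YEAR", SUM("AMOUNT") AS "AMOUNT"
FROM "_SYS_BIC"."mypackage/CAL_AMOUNT_CE"
GROUP BY "YEAR";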
Test with the “Graphical” calculated view

SQL Query




Test with the “SQL Scripting” calculated view

SQL Query




Performance decoding

Why does the use of these CE functions boost the performance of your queries? The explanation lies in the query execution plan, and especially in the use of the CALC engine of the SAP HANA database.

When you send a “normal” SQL query to the SAP HANA database, the CALC engine is not used: the SQL parser sends the query directly to the database optimizer, which optimizes the execution of the query (1).


When your SELECT query uses a calculation view with CE functions, the CALC engine optimizes the calculation model.


In our case, when we analyze the “normal” SQL query, the “calculation search” task has been split into 3 different sub-queries that cannot start at the same time.

But with the optimized calculation view, we can see that there are only 2 sub-queries.



Using CE functions when creating calculation views can significantly accelerate the execution of your SQL queries: the CALC engine of SAP HANA is optimized to use these functions.

There is only one restriction when using this kind of functions: performance will dramatically decrease if you write a SQL query that mixes “normal” and “optimized” calculation views.