Yann Neuhaus

List listeners and services from the instance

Fri, 2015-04-24 11:35

Want to know all your listeners - including SCAN listeners - and the services they listen for? It is possible from the instance, with the (undocumented) view V$LISTENER_NETWORK, which has been there since 11.2
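As a minimal sketch (the view is undocumented, so column names may differ between versions; in 11.2/12c it exposes a TYPE and a VALUE column), something like this lists the registered listeners and service names:

```sql
-- Hedged sketch: TYPE is e.g. LOCAL LISTENER, REMOTE LISTENER or SERVICE NAME
select type, value
from v$listener_network
order by type, value;
```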


Thu, 2015-04-23 06:52

In-Memory Column Store is amazing. It brings very good performance to full table scans. It's easy: just 'flip a switch' and you accelerate all reporting queries on your table, without thinking about what to index and how. But in this post, I would like to warn you about the consequences when you just flip that switch. The new full table scan plan will replace the old ones... even before the table is populated in memory...
I'm not sure that this is the expected behaviour. In my opinion the CBO should consider INMEMORY plans only once the population is done. But here is the example.

Test case

Here is the testcase. I have a table DEMO with bitmap indexes on its columns:

12:04:54 SQL> create table DEMO compress as
12:04:54   2  with M as (select substr(dbms_random.string('U',1),1,1) U from dual connect by 10>=level)
12:04:54   3  select M1.U U1, M2.U U2, M3.U U3, M4.U U4 from M M1,M M2, M M3, M M4, (select * from dual connect by 1000>=level)
12:04:54   4  /
Table created.

12:05:00 SQL> create bitmap index DEMO_U1 on DEMO(U1);
Index created.
12:05:01 SQL> create bitmap index DEMO_U2 on DEMO(U2);
Index created.
12:05:03 SQL> create bitmap index DEMO_U3 on DEMO(U3);
Index created.
12:05:04 SQL> create bitmap index DEMO_U4 on DEMO(U4);
Index created.
And my test query on those columns:
12:05:05 SQL> alter session set statistics_level=all;
Session altered.
12:05:05 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected
with its execution plan:
12:05:06 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

SQL_ID  64skw45ghn5a0, child number 0
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 3881032911

| Id  | Operation                      | Name    | Starts | E-Rows | A-Rows | Buffers |
|   0 | SELECT STATEMENT               |         |      1 |        |      0 |       2 |
|   1 |  HASH UNIQUE                   |         |      1 |      2 |      0 |       2 |
|   2 |   TABLE ACCESS BY INDEX ROWID  | DEMO    |      1 |   4070 |      0 |       2 |
|   3 |    BITMAP CONVERSION TO ROWIDS |         |      1 |        |      0 |       2 |
|   4 |     BITMAP AND                 |         |      1 |        |      0 |       2 |
|   5 |      BITMAP MERGE              |         |      1 |        |      0 |       2 |
|*  6 |       BITMAP INDEX RANGE SCAN  | DEMO_U2 |      1 |        |      0 |       2 |
|*  7 |      BITMAP INDEX SINGLE VALUE | DEMO_U1 |      1 |        |      0 |       0 |
|*  8 |      BITMAP INDEX SINGLE VALUE | DEMO_U4 |      1 |        |      0 |       0 |
|   9 |      BITMAP OR                 |         |      1 |        |      0 |       0 |
|* 10 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |
|* 11 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |

Predicate Information (identified by operation id):

   6 - access("U2">'X')
   7 - access("U1"='A')
   8 - access("U4"='B')
  10 - access("U3"='A')
  11 - access("U3"='E')

34 rows selected.
Good. I'm happy with that plan. But I have the In-Memory option, so I can probably get rid of those bitmap indexes.

alter table INMEMORY

Let's put that table in memory:

12:05:06 SQL> alter table DEMO inmemory priority none memcompress for query high;
Table altered.
and run that query again
12:05:06 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:07 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

SQL_ID  64skw45ghn5a0, child number 0
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   13740 |  13736 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   13740 |  13736 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   13740 |  13736 |

Predicate Information (identified by operation id):

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
Here is my problem. Now that I have defined the table to be populated into the In-Memory Column Store, the CBO chooses an In-Memory plan for my query.

This is a FULL TABLE SCAN, because full table scans are the only access path available from the In-Memory Column Store. And there is a problem: the column store is not yet populated:

12:05:07 SQL> select segment_name,inmemory_size,bytes_not_populated from v$im_segments;
no rows selected
So the FULL TABLE SCAN occurred on the row store. Look at the statistics above: 13740 logical reads from the buffer cache, and 13736 physical reads because that table is not in the buffer cache. I always used index access for it before, so the table blocks are not in the buffer cache, and the full table scan has a good chance of being done in direct-path.
I still have very good access available from the bitmap indexes - which are still there - but I'm now doing a very expensive full table scan.


Look at the same query two seconds later:

12:05:09 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:09 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

SQL_ID  64skw45ghn5a0, child number 0
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   11120 |  11117 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   11120 |  11117 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   11120 |  11117 |

Predicate Information (identified by operation id):

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
It is just a bit better: 11117 physical reads instead of 13736. This is because some In-Memory Compression Units are already there in the In-Memory Column Store:
12:05:10 SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO             6815744  117440512            88973312
Of the 117440512 bytes (which is 14336 8k blocks), 88973312 bytes (10861 8k blocks) are not yet populated. This is why, a moment earlier, the query still had to read 11120 blocks from the buffer cache.
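The block arithmetic can be checked quickly (8k blocks, sizes taken from the v$im_segments output above):

```python
block_size = 8192  # 8k database blocks

total_bytes = 117440512    # BYTES reported by v$im_segments
not_populated = 88973312   # BYTES_NOT_POPULATED

# blocks in the whole segment, and blocks still to be read from the row store
print(total_bytes // block_size)     # 14336
print(not_populated // block_size)   # 10861
```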

Let's wait 1 minute for population. Remember that during that time, the population uses a lot of CPU in order to read the row store blocks, convert them to columnar format, compress them and store them into the column store.

12:06:04 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:06:04 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

SQL_ID  64skw45ghn5a0, child number 0
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers |
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |       3 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |       3 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   2546 |      0 |       3 |

Predicate Information (identified by operation id):

   2 - inmemory(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
       filter(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
OK: now only 3 blocks were read from the buffer cache. I finally have good performance that I can compare with what I had with the bitmap indexes.

This is because population is completed:

12:06:15 SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO            31195136  117440512                   0


My conclusion is that altering a table to populate it into the In-Memory Column Store looks like an easy operation. But it is not. When you do that:

  • You change the plans to FULL TABLE SCAN, which will not be optimal until the table is fully populated.
  • You trigger the population, which will significantly increase your server's CPU usage.
  • You risk getting tables only partially populated if you are on RAC, or if you don't have enough space in the In-Memory area (inmemory_size).
So this is something to plan and to monitor. And you will also need to think about what happens if your instance crashes and you have to restart it: how long will it take to get back to correct performance?
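A minimal monitoring sketch (column names as documented for V$IM_SEGMENTS in 12c): run this until BYTES_NOT_POPULATED reaches 0 for all segments.

```sql
-- Population is finished for a segment when BYTES_NOT_POPULATED = 0
select segment_name,
       populate_status,       -- STARTED or COMPLETED
       inmemory_size,
       bytes,
       bytes_not_populated
from v$im_segments
order by segment_name;
```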
And that's even without asking yourself yet whether you can drop those bitmap indexes that are now superseded by the In-Memory Column Store.

Of course, there are solutions for every problem. If you are on Exadata, then SmartScan will come to the rescue until the IMCS is populated: the full table scan is offloaded to the storage nodes, and database node CPU resources remain available for quick population. In that way, they are complementary.

Successful launch dbi services Zürich

Wed, 2015-04-22 12:50

Yesterday evening dbi services (headquartered in Delémont) officially launched its third branch, in Zürich (besides Basel and Lausanne). Five years after its take-off, the "Oracle Database Partner of the Year 2014" employs more than 40 consultants. I would like to use this opportunity to thank all the customers and partners who trust dbi services. A particular thanks goes to the customers and partners who helped us enjoy a very pleasant inauguration party yesterday.

Thanks also to Mr Thomas Salzmann (KKG) who presented our successful collaboration and to Massimo Castelli (Oracle) who presented the challenges of recruitment in the IT branch. I was pleased to see that large players like Oracle do, sometimes, have the same challenges as mid-sized companies :-) .

All this adventure would not have been possible without our incredible teams, working hard every day to transform ideas and problems into projects and solutions. dbi services will continue to leverage the skills of its employees and to look for opportunities, in order to remain a top-level provider for the operating system, database and middleware layers.

A final thanks goes to Kurt Meier, who will lead the dbi services branch in Zürich, and for the very good organisation of this party. Having already won the first customers, Kurt has proved that dbi services will succeed with this new challenge.





D2 xPlore search on apostrophes

Wed, 2015-04-22 03:32

When using the D2 searches you are likely to run into trouble with special characters such as apostrophes. The goal of this blog is to show you how to configure additional special characters for searching.

Many documents contain text with apostrophes or special characters that you want to search on. But unfortunately, by default, the D2 search will return nothing when you enter apostrophes directly into the search bar. xPlore replaces special characters with spaces and stores the two resulting words one after the other, in order to match them more easily in this order.

In fact this is not a D2 issue. Your xPlore is likely not configured to handle these special characters. By default xPlore is set to recognize apostrophes, but in Word, for example, you have different kinds of apostrophes. These characters have different character codes, so xPlore doesn't recognize them.

To solve this issue you simply have to tell xPlore to handle Word's apostrophes (or whatever character you want to search on).

In order to do this, login to your xPlore server then edit the following file:


Find the line with:

special-characters="@#$%^_~`&:.()-+='/\[]{}" context-characters="!,;?"

Then add your apostrophes or special characters as follows (copy and paste them directly from Word into the file):

special-characters="@#$%^_~`&:.()-+='/\[]{}’‘„“"

And save the file.

Now, new indexed documents can be searched with apostrophes. But note that if you want the older documents to be searchable as well, you will need to re-index the whole repository.

Weblogic ThreadPool has stuck threads

Wed, 2015-04-22 02:49

In WebLogic it is common to see this warning: "ThreadPool has stuck threads". Here we will see a way to determine the cause.

When monitoring WebLogic you may notice that, from time to time, your servers go into Warning mode. When clicking on the warnings you see this screen:


The reason is presented as "ThreadPool has stuck threads", which doesn't help a lot. But we can get a deeper view and maybe find the real cause.

Now click on the server name from your list, then go to Monitoring -> Threads.

The Hogging Thread Count column shows how many threads seem stuck. The Pending User Request Count column shows the number of requests not yet delivered to users. If it is different from 0, your users are impacted.

In order to visualize the real state of threads click on Dump Thread Stacks:


Some threads are marked as stuck whereas they are not actually hung. If the process handled by a thread takes too long to complete, WebLogic flags it as stuck. By default WebLogic detects stuck threads after 600 seconds (10 minutes) of waiting (this parameter can be changed).

When the thread dump is displayed you can search for thread with STUCK status:


Here you can see that the thread is stuck in the java.lang.Object.wait() function. It means that the thread is waiting for a result or for another process to end. In this particular case we can see that the function com.crystaldecisions.sdk.occa.managedreports.ras.internal.CECORBACommunicationAdapter.request() was executed just before waiting, so the thread is likely waiting for the result of this "request" function.
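Outside the console, the same check can be done on a thread dump taken with the standard JDK jstack tool. A minimal sketch (the dump excerpt below is fabricated for illustration; WebLogic tags hung execute threads with STUCK in the thread name):

```shell
# Fabricated sample standing in for the output of: jstack <weblogic_pid>
cat > /tmp/threads.txt <<'EOF'
"[STUCK] ExecuteThread: '12' for queue: 'weblogic.kernel.Default (self-tuning)'"
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
"[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'"
   java.lang.Thread.State: RUNNABLE
EOF

# Show only the stuck threads and the frames they are blocked in
grep -A 2 "STUCK" /tmp/threads.txt
```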

As we suspected, the issue came from a reporting server that could no longer deliver reports. That's why we had some stuck threads.

Stuck threads are generally caused by the application itself or by other components that have nothing to do with WebLogic.

Last point, you can check if the thread is stuck in the previous view like this:


SQL Server - Change Management: list all updates

Mon, 2015-04-20 01:29

I want to list all SQL Server updates installed on a server, including Service Packs, Cumulative Updates and other fixes, just as we can see them in the Windows uninstall panel.
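As a starting point (a sketch only; these documented SERVERPROPERTY names show the current patch level, not the full update history the post is after):

```sql
-- Current version, service pack level and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level,   -- e.g. RTM, SP1
       SERVERPROPERTY('Edition')        AS edition;
```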



C15LV: Nothing to BLOG About - Think Again

Wed, 2015-04-15 19:58

Before going to the airport to fly back to Europe, the last session I attended at IOUG Collaborate was a panel: Nothing to BLOG About - Think Again.

Some famous bloggers were there to help, motivate and inspire people who are afraid to blog. The idea was good, and so was the session: the panelists shared ideas, tips and experience.

I'm already a blogger, but that inspired me anyway. In the same spirit, I'll explain how I got started, just in case it gives someone ideas.

Note that tomorrow I speak about participating in the community (read, share, write, talk, blog, tweet, link, ...) at our internal event dbi xChange. So, waiting at the airport is a good occasion to explain how I started blogging, for people who want to start too.

SQL Monitor report little improvement in 12c

Wed, 2015-04-15 14:35

This is a very short post about something I thought I had already blogged about.

I like SQL Monitoring active reports. They are graphical, and still very light. There is only one thing I don't like about them in 11g, and it has been improved in 12c.

QlikView – Load data in QlikView / QlikSense – Best practices and tricks

Tue, 2015-04-14 07:45

In this blog, I will give you some best practices and tricks for loading the tables that provide your data. But first, I will review the different ways to load data into a QlikView or QlikSense report.

1. How to retrieve data in QlikView

You have 2 different possibilities to load your data in your report:

- Database connector

- File connector


   a) Database connector:

     If your data is located in a database, you must use this kind of connector.

To connect to a database:

Open "Script Editor"


Click on “Tab” and “Add Tab…”


Give a name to the new tab and click “OK"


Select the data source


Select your connection (for OLE DB and ODBC, connections should be created in the ODBC administrator tool of Windows)

Remark: You can use a special connection to the QlikView ETL tool named QlikView Expressor


For this example, I want to connect to an Oracle database:

Select the connection and click “OK”


Select a table (1), then select the fields you want to see in your report (2) and click “OK” (3)


TRICK 1: If you use the “Connect” option and you add the User ID and the password in the connection interface, they will be put in the SQL script in an encrypted format



   b) Data file:

You have 4 different options:


(1) Table Files: you can select the following kind of files


(2) QlikView File: You can load .qvw file (QlikView file)

(3) Web File: You can load a file coming from a website

(4) Field data: you can load specific rows coming from a field located in a database

In this example, we select a .qvd file using option 1 (Table Files)


You have 2 options:

Click “Next” and “Next” again: you reach the column selection interface

Remark: To remove a column, click on the cross. Then click “Next ”


Check the SQL. If it’s ok, click on “Finish”


Click on “Reload” to load the data


Best Practices: create a variable path

If you must load data from files located in a specific folder, and if this folder is going to change once the report is published in different environments, it is recommended to create a variable that defines the folder path.

Go to the Main tab and create the variable “FilePath”. Don’t forget the “;” at the end


On the other tabs, where you load data from files located in that folder, add the variable before the name of the file.


After deployment to another environment, you just have to update the variable and, of course, reload the data.
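In script form, the pattern looks like this (variable, table and file names are illustrative, not the ones from the screenshots):

```
// On the Main tab: one variable holds the folder path (note the trailing backslash)
SET FilePath = 'C:\Data\QlikView\';

// On any other tab: expand the variable in front of the file name
Sales:
LOAD *
FROM [$(FilePath)Sales.qvd] (qvd);
```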


2. Optimize data loading in QlikView / QlikSense

In this example, some tables are loaded just for one field. We can optimize this schema by using a mapping function. The goal is to limit the number of tables used directly in the schema.


Warning: The mapping function can only be used to add one field in a table.

In our example, we want to add the field “Product Group Desc” to the table “ITEM_MASTER”.

To create a mapping tab:

Add a new Tab just after the main tab


Use the “MAPPING” function in the script as follows:


In the destination table, add the field with the “ApplyMap” function as follows:


(1) Put the name of the Mapping table you have created

(2) Put the name of the key field

(3) Put the name of the field you want to show in your table

Don’t forget to comment out the load script of the table you have mapped.

After refreshing the data, you will see that the table has disappeared and the field has been added to the main table.
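The whole pattern can be sketched in script form like this (table, field and file names are illustrative, not the ones from the screenshots):

```
// 1) Mapping table: exactly two columns, key then value
ProductGroupMap:
MAPPING LOAD
    ProductGroupKey,
    [Product Group Desc]
FROM [$(FilePath)ProductGroups.qvd] (qvd);

// 2) Destination table: ApplyMap() adds the field; the third argument
//    is the default value used when the key is not found
ITEM_MASTER:
LOAD
    *,
    ApplyMap('ProductGroupMap', ProductGroupKey, 'Unknown') AS [Product Group Desc]
FROM [$(FilePath)ItemMaster.qvd] (qvd);
```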



3. How to add a Master Calendar table

You can automatically generate a master calendar if you need all the days of a period. This calendar is generated in 3 different steps:

   a) Creation of the Min / Max date temporary table.

Create a new tab and add the following script:


   b) Creation of the dates temporary table.

Add the following script to create a temporary table with all the dates between the MIN and MAX dates you have defined, using the “AUTOGENERATE” function


Note that we drop the MIN / MAX result table at the end of the creation of the temporary table.

   c) Creation of the Master Calendar table.

After the generation of the temporary table, add the following script to create all the different date fields you need (Year, Month, Week, ...)


Remark: to join your tables, you must give your newly generated field the same name as the field you used to create your calendar. The result should look like this.
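The three steps can be sketched like this, assuming a fact table Orders with a date field OrderDate (all names here are illustrative):

```
// a) Min / Max date temporary table
MinMaxTemp:
LOAD
    Min(OrderDate) AS MinDate,
    Max(OrderDate) AS MaxDate
RESIDENT Orders;

LET vMinDate = Num(Peek('MinDate', 0, 'MinMaxTemp'));
LET vMaxDate = Num(Peek('MaxDate', 0, 'MinMaxTemp'));
DROP TABLE MinMaxTemp;   // drop the result once the variables are set

// b) Dates temporary table: one row per day via AUTOGENERATE
TempCalendar:
LOAD
    $(vMinDate) + IterNo() - 1 AS DateNum
AUTOGENERATE 1
WHILE $(vMinDate) + IterNo() - 1 <= $(vMaxDate);

// c) Master Calendar: the generated field keeps the same name (OrderDate)
//    as the key field so that it joins back to the fact table
MasterCalendar:
LOAD
    Date(DateNum)  AS OrderDate,
    Year(DateNum)  AS Year,
    Month(DateNum) AS Month,
    Week(DateNum)  AS Week
RESIDENT TempCalendar;
DROP TABLE TempCalendar;
```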


I hope that these best practices and tricks will help you!







Yesterday's AWR Straight to the Goal

Tue, 2015-04-14 06:43

Yesterday I was a speaker at Collaborate 15 and gave my presentation about reading an AWR report. That was great: many people (and not enough seats).

Here are some answers to questions that came up afterwards.

Cloning a PDB from a standby database

Mon, 2015-04-13 05:14

Great events like IOUG Collaborate are a good way to meet experts we know through blogs, Twitter, etc. Yesterday evening, with nice music in the background, I was talking with Leighton Nelson about cloning pluggable databases. Don't miss his session today if you are in Las Vegas. The big problem with PDB cloning is that the source must be read-only. The reason is that it works like transportable tablespaces (except that it can transport the datafiles through a database link, and that SYSTEM is transported as well instead of having to import metadata). There is no redo shipping/apply here, so the datafiles must be consistent.

Obviously, being read-only is a problem when you want to clone from production.

But if you have a standby database, can you open it read-only and clone a pluggable database from there? From what we know, it should be possible, but it's better to test it.

Here is my source - a single tenant standby database opened in read-only:

SQL> connect sys/oracle@// as sysdba
SQL> select banner from v$version where rownum=1;

Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> select name,open_mode,database_role from v$database;

--------- -------------------- ----------------

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
STDB1                          MOUNTED

Then from the destination I define a database link to it:

SQL> connect sys/oracle@// as sysdba
SQL> select banner from v$version where rownum=1;

Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> select name,open_mode from v$database;

--------- --------------------

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB                            READ WRITE

SQL> create database link DBLINK_TO_STCDB connect to system identified by oracle using '//';

Database link created.

and create a pluggable database from it:

SQL> create pluggable database STDB2 from STDB1@DBLINK_TO_STCDB;

Pluggable database created.

SQL> alter pluggable database STDB2 open;

Pluggable database altered.

So yes, this is possible. And you don't need Active Data Guard for that. As long as you can stop the apply for the time it takes to transfer the datafiles, this is a solution for cloning. Of course, just do one clone, and if you need others you can do them from that first clone. And within the same CDB they can be thin clones if you can use snapshots.
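For those subsequent thin clones, the documented SNAPSHOT COPY clause does the job (a sketch only; it requires a filesystem or storage layer that supports snapshots):

```sql
-- Thin clone: the datafiles are storage snapshots of the source PDB
create pluggable database STDB3 from STDB2 snapshot copy;
```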

OK, it's 5 a.m. here. As usual, the jetlag woke me up a bit early, so that was a good occasion to test what we discussed yesterday...

RAC Attack! was another great success at C15LV

Sun, 2015-04-12 14:49

The RAC Attack - install a RAC on your own laptop - was a great success at Las Vegas.

The idea is to help people follow the RAC Attack cookbook which is available at:

It is a complex configuration and there are always problems to troubleshoot:

  • getting VirtualBox to be able to run a 64-bit guest, which may involve some BIOS settings
  • being able to install VirtualBox at all - some people have company laptops where security policies make things difficult
  • network configuration is not simple, and any misconfiguration will make things more difficult later

So it is a very good exercise for troubleshooting.

The organisation was excellent: organisation by Ludovico Caldara, infrastructure by Erik Benner, food sponsored by OTN, and Oracle software made available on USB sticks thanks to Markus Michalewicz. Yes, the RAC Product Manager did the RAC Attack installation himself.

 It's also a very good networking event where people meet people around the technology, thanks to IOUG Collaborate.

More Ninjas graduating the Dojo! #racattack @ioug @racsig #c15lv @OracleDBDev @Mythics

— Erik Benner (@Erik_Benner) April 12, 2015

When people manage to get a VM with the OS installed, they get the red t-shirt. Look at the timelapse of the full day and you will see more and more red t-shirts:

Do you wonder why we are so happy to see people having only the OS installed? Because it's the most difficult part. Creating a cluster on a laptop is not easy: you have to create the VMs, set up networking, DNS, etc.

Once this setup is good, installing Grid Infrastructure and the Database is straightforward with the graphical installer.

Dbvisit replicate REDO_READ_METHOD

Thu, 2015-04-09 12:15

A frequent question about replication concerns the overhead on the source, because in a lot of cases the source is production. Dbvisit replicate comes with the possibility to do the minimum on the source: only the FETCHER process is there to read the redo logs and send them to the MINE process, which can be on another server.

But maybe even that - reading those critical online redo logs - is worrying you. That's not a reason to avoid doing a PoC with production as the source. Let's see how we can use 'archived logs only' mode.

DataGuard wait events have changed in 12c

Wed, 2015-04-08 11:11

There are several new features in 12c about Data Guard: cascaded standby, far sync instance. But there are also some architecture changes: new processes and new wait events.

QlikSense – The easy way to create a QlikView report

Tue, 2015-04-07 00:57

Today, one of the most used Business Intelligence products is undoubtedly QlikView. This Swedish product is one of the best positioned in the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, and has been for a number of years.


To make Business Intelligence even more accessible to business users, the Qlik company decided to add a product to its range, one that would be more accessible and easier to use for creating analytics reports. That's why QlikSense was created.

1. Build a report in a few mouse clicks

One of the great strengths of QlikView is that using an analytical report has become very simple. The user of a QlikView report needs only to select the various criteria they wish to apply, and the tables and graphs adapt automatically.



In this case, the user has just clicked on the criteria "Argentina" and all report data has been filtered to show only the data for this country.

However, the construction of a QlikView report is not so simple. The multitude of options available for creating each artifact can quickly become quite complex. This keeps the end user dependent on the report developer, both for its construction and for any changes they would like to make.

To make the creation of these reports accessible to the majority, Qlik decided to launch QlikSense. This product brings simplicity to building reports because it uses drag and drop. The user can now create tables and charts using only the mouse, making the creation of a report as easy as in Excel.

To create a new report in QlikSense, click "Create a new App"


Give the new application a name and click « Create »


Click on « Open»


As in any QlikView report, the first thing to do is to retrieve data. There are two options to do that:

"Quick Data Load" or " Load Data Editor."

  • “Quick Data Load" allows you to select data coming from a flat file or located in an XLS sheet.
  • "Data Load Editor" allows you to create more complex data sources based on SQL-type queries


For our example, we are using « Data Load Editor ». After opening the application, you simply select the data source connector you want to use and set up the connection to your data source.


I will later write a new blog post focused on the creation and setup of data connections in QlikView and QlikSense.

In this case, we will use a .qvx file type. Such files are created through the QlikView ETL application called "QlikView Expressor ".

The load script is then displayed on the screen in SQL format.


You just have to launch the data load.


When it’s done, you should activate the « App Overview » mode.


Then, the user can create a new sheet:


Give it a name and click on it.


To display the « Edit » mode, you just have to click on this icon:


At this point, the user can create his graphic using the Drag and drop mode


After choosing the chart type, select your dimensions and measures


And that’s the result.


Despite its simplicity, QlikSense retains QlikView's power for creating and configuring charts. But the bulk of the operations is now done by selecting options, making it much easier to create the different artifacts.

2. Type "Map" chart

Qlik has not only added drag and drop. One of the great innovations in QlikSense is the possibility to very easily create a "Map" chart, that is to say one containing geographical information. This was quite difficult to do in QlikView because it required fairly complex calculated variables. With QlikSense, you just have to use the "Map" object specially created for this purpose and fill in the requested information.

To create a chart "Map", select the chart type "Map"


The only prerequisite for this kind of chart is a GeoMakePoint type object. It is created by combining two geolocation objects (latitude and longitude).

In the example below, the GeoMakePoint object "Location" is the location of cities.


Click on the "Layers" menu. In the "Add Layers" box, insert the GeoMakePoint object you have just created, and in the "Expression" box, add an aggregate expression.


Then go to the "Background" menu, select the Slippy Map server you want to use, and copy the URL and attribution of the selected server

Small comment: all Slippy Map servers can be used by QlikSense. They are listed on the QlikView website. To access the list, simply click on the hyperlink "URL and Allocation".


To activate the different colors of your dimensions on the map, go to the "Appearance" menu and choose the following options:


The final result is displayed on a map: the points are the "city" dimension, and the size of each point is the measure (Sum(Profit)).


This chart is as interactive as the others; it also applies the filters selected by the user.

3. The "Story" mode

QlikSense also offers the possibility to create "Stories", a kind of Microsoft PowerPoint presentation. A "Story" is simply a sequence of snapshots of various reports, frozen with the values the user wishes to highlight. These presentations can also contain animation options like those used in Microsoft PowerPoint. Stories are built from existing QlikSense sheets, on which the user takes a snapshot after having applied the desired filter.

To create a "Story" , simply activate the "App Overview" mode.


Then select the « story » mode and click on « new story ».


Give it a name, press « Enter » and click on the story.


Before creating the story, you should have taken snapshots from other sheets.

Warning: you cannot insert snapshots located in other QlikSense applications.

To create a snapshot, open a QlikSense sheet located in your application.


Open the story you have made and insert the snapshots you need.


You can add text zones, Microsoft PowerPoint-style animations, shapes and media objects.


And that’s the final result.


In conclusion, we can say that QlikView and QlikSense are complementary products that are not aimed at the same type of user.

  • QlikSense is easier to handle and is designed especially for use as a dashboarding tool
  • Using HTML5 technology, it allows you to create reports for any kind of device
  • Map type graphs are easier to create
  • You can create Stories

So we can say that QlikSense will be adopted more easily by the business than QlikView, thanks to its ease of use.

Oracle tuning silver bullet: add an order by to make your query faster

Wed, 2015-04-01 00:00

You have read all of Cary Millsap's work about Oracle database performance tuning. You know that there are no silver bullets. Reducing the response time requires a methodical approach in order to analyze it, with the goal of eliminating all unnecessary work.

But I'll show something completely opposite here: a performance tuning silver bullet. Do more work in order to run faster: just add an ORDER BY to your query and it's faster.

Weird issue with sys.fn_hadr_backup_is_preferred_replica() function

Mon, 2015-03-30 13:11

A couple of days ago, I faced a weird backup issue with SQL Server AlwaysOn availability groups at one of my customers (thanks to him for pointing out this issue :-) ). After installing our DMK tool (Database Management Kit) for database maintenance with AlwaysOn, my customer noticed that their databases had not been backed up. Ouch … what's going on? I had never run into this issue before... Does the problem come from our tool?

In fact, our DMK uses the useful DMF sys.fn_hadr_backup_is_preferred_replica() to know which databases are candidates for backup operations on the replicas at a given time, and this is where our issue starts. Indeed, in a specific situation that combines a case-sensitive server collation with replica names entered in lower case, we found that the result of this function is inconsistent. Let me show this with an example.

In my customer’s context, the replica names have been filled out from a PowerShell script form in lower case as follows:




Let’s take a look at the system view to check the availability group configuration:


SELECT replica_server_name,
       availability_mode_desc,
       failover_mode_desc
FROM sys.availability_replicas




Let’s verify that the collation of the SQL Server instance is case sensitive …


SELECT SERVERPROPERTY('Collation') AS ServerCollation;




… and the backup preference policy is configured to “primary”


SELECT name AS group_name,
       automated_backup_preference_desc AS backup_preference
FROM sys.availability_groups




Finally, let’s verify the database inside the availability group:


SELECT AS group_name,
       r.replica_server_name AS replica_name,
       dcs.database_name,
       drs.database_state_desc AS db_state
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS r
     ON drs.replica_id = r.replica_id
JOIN sys.availability_groups AS g
     ON g.group_id = drs.group_id
JOIN sys.dm_hadr_database_replica_cluster_states AS dcs
     ON dcs.group_database_id = drs.group_database_id
        AND dcs.replica_id = drs.replica_id




Ok now let’s take a look at the result of the DMF sys.fn_hadr_backup_is_preferred_replica() in this context. I put here a simplified sample portion of the TSQL code used in our DMK:


USE master;
GO

DECLARE @db_name SYSNAME;

SELECT @db_name = name FROM sys.databases WHERE name = N'DUMMY';

SELECT @@SERVERNAME AS server_name,
       @db_name AS database_name,
       sys.fn_hadr_backup_is_preferred_replica(@db_name) AS fn_result;


Concerning the primary:




Concerning the secondary:





If you perform the same test after configuring the replica names in upper case this time, you will notice that the issue disappears. Thinking about it, in most cases customers prefer to use the wizard to configure availability groups, and have you noticed that in this case the replica names are always switched to upper case?

There is also a Microsoft Connect item about this problem, but unfortunately it seems that it will not be fixed by Microsoft … so be careful when you implement availability groups by script.
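As a precaution, a quick sanity check can reveal the mismatch before backups silently stop. This is only a sketch under the assumption of the collations named below; run it on each replica:

```sql
-- Hypothetical detection query: returns a row when the local replica name
-- was entered with a different case than the real server name.
SELECT r.replica_server_name,
       @@SERVERNAME AS actual_server_name
FROM sys.availability_replicas AS r
WHERE r.replica_server_name COLLATE Latin1_General_CI_AS
          = @@SERVERNAME COLLATE Latin1_General_CI_AS
  AND r.replica_server_name COLLATE Latin1_General_CS_AS
          <> @@SERVERNAME COLLATE Latin1_General_CS_AS;
```

An empty result means the stored replica name matches the server name exactly, including case.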

See you on the next availability group adventure!




Windows Cluster vNext and cloud witness

Sun, 2015-03-29 12:14

The next version of Windows will provide some interesting features for WSFC architectures. One of them is the new quorum type "Node Majority and Cloud Witness", which will solve many cases where a third datacenter is mandatory, but missing, to achieve a truly resilient quorum.

Let’s imagine the following scenario, which may concern the implementation of either a SQL Server availability group or a SQL Server FCI. Say you have to implement a geo-cluster of 4 nodes across two datacenters, with 2 nodes in each. To achieve quorum in case of a broken network link between the two datacenters, adding a witness is mandatory, even if you work with the dynamic node weight feature, but where do you put it? Having a third datacenter to host this witness seems to be the best solution but, as you may imagine, it is a costly solution that many customers cannot afford.

Using a cloud witness might be a very interesting workaround in this case. Indeed, a cloud witness consists of a blob stored in a container of an Azure storage account. From a cost perspective, it is a very cheap solution because you pay only for the storage space you use (first 1 TB/month – CHF 0.0217/GB). Let's take a look at the storage space consumed by my cloud witness in my storage account:





Interesting, isn’t it? To implement a cloud witness, you have to meet the following requirements:

  • Your storage account must be configured as locally redundant storage (LRS) because the created blob file is used as the arbitration point, which requires some consistency guarantees when reading the data. In this case, all data in the storage account is made durable by replicating transactions synchronously. LRS does not protect against a complete regional disaster, but this may be acceptable here because the cloud witness is also a dynamic weight-based feature.
  • A special container, called msft-cloud-witness, is created for this purpose and contains the blob file tied to the cloud witness.




How to configure my cloud witness?

In the same way as before. Using the GUI, you select the quorum type you want to use and then provide the storage account information (storage account name and access key). You may also prefer to configure your cloud witness with the PowerShell cmdlet Set-ClusterQuorum as follows:
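For reference, the cmdlet call can be sketched as follows (the storage account name and access key are placeholders, not real values):

```powershell
# Sketch: configure a cloud witness for the current cluster.
# Account name and key below are placeholders.
Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-primary-access-key>"
```

The access key can be retrieved from the storage account settings in the Azure portal.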




After configuring the cloud witness, a corresponding core resource is created in an online state, as follows:




By using PowerShell:




Let’s have a deeper look at this core resource, especially the advanced policy parameters IsAlive() and LooksAlive():




We may notice that the basic resource health check interval defaults to 15 minutes. Hmm, I guess that this value will probably be customized according to the customer's architecture.

Go ahead and let’s perform some basic tests with my lab architecture. Basically, I have configured a multi-subnet failover cluster that includes four nodes across two (simulated) datacenters. Then I implemented a cloud witness hosted in my storage account “mikedavem”. You may find a simplified picture of my environment below:








You may notice that because I implemented a cloud witness, the system changed the overall node weight configuration (4 nodes + 1 witness = 5 votes). In addition, in case of a network failure between my 2 datacenters, I want to prioritize the first datacenter in terms of availability. In order to meet this requirement, I used the new cluster property LowerQuorumPriorityNodeID to lower the priority of the WIN104 cluster node.
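In PowerShell, this property change can be sketched as follows (WIN104 is the node from the lab above; the property expects a node id):

```powershell
# Sketch: give WIN104 the lowest quorum priority.
$node = Get-ClusterNode -Name "WIN104"
(Get-Cluster).LowerQuorumPriorityNodeID = $node.Id
```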




We are now ready to perform our first test: simulating a failure of the cloud witness:




The system then recalculates the overall node weight configuration to keep maximum quorum resiliency. As expected, the node weight of the WIN104 cluster node is changed from 1 to 0 because it has the lowest priority.

The second test consists of simulating a network failure between the two datacenters. Once again, as expected, the first partition of the WSFC in datacenter1 stays online whereas the second partition goes offline, according to the node weight priority configuration.




Is the cloud witness dynamic behavior suitable for minimal configurations?

I wrote a blog post here about the issues that exist with the dynamic witness behavior in minimal configurations with only 2 cluster nodes. I hoped to see an improvement on that side, but unfortunately no. Perhaps in the RTM release … wait and see.


Happy clustering!



Index on SUBSTR(string,1,n) - do you still need old index?

Fri, 2015-03-27 03:57

In a previous post I showed that when you have an index on trunc(date) you don't need an additional index. If you need the column with full precision, then you can add it to the index on trunc(). A comment from Rainer Stenzel asked whether that optimization is available for other functions. And Mohamed Houri has linked to his post where he shows that it's the same with a trunc() on a number.

Besides that, there is the same kind of optimization with SUBSTR(string,1,n) so here is the demo, with a little warning at the end.

I start with the same testcase as the previous post.

SQL> create table DEMO as select prod_id,prod_name,prod_eff_from +rownum/0.3 prod_date from sh.products,(select * from dual connect by 1000>=level);
Table created.

SQL> create index PROD_NAME on DEMO(prod_name);
Index created.

SQL> create index PROD_DATE on DEMO(prod_date);
Index created.

I've an index on the PROD_NAME and I can use it with equality or inequality predicates:

SQL> set autotrace on explain
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

Execution Plan
Plan hash value: 72593368

| Id  | Operation          | Name      | Rows  | Bytes |
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |

Predicate Information (identified by operation id):

   2 - access("PROD_NAME">'Z')

And I also can use it with a LIKE when there is no starting joker:
SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
Plan hash value: 72593368

| Id  | Operation          | Name      | Rows  | Bytes |
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |

Predicate Information (identified by operation id):

   2 - access("PROD_NAME" LIKE 'Z%')
       filter("PROD_NAME" LIKE 'Z%')

That optimization has been available for several releases (9.2 if I remember well, but I didn't check).


But sometimes, when we want to check if a column starts with a string, the application uses SUBSTR instead of LIKE:

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
Plan hash value: 1665545956

| Id  | Operation          | Name      | Rows  | Bytes |
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX FULL SCAN  | PROD_NAME |     1 |    27 |

Predicate Information (identified by operation id):

   2 - filter(SUBSTR("PROD_NAME",1,1)='Z')

But - as you see - there is no access predicate here. The whole index has to be read.

Of course, I can use a function based index for that:

SQL> create index PROD_NAME_SUBSTR on DEMO( substr(prod_name,1,1) );
Index created.

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
Plan hash value: 4209586087

| Id  | Operation                    | Name             | Rows  | Bytes |
|   0 | SELECT STATEMENT             |                  |     1 |    31 |
|   1 |  HASH UNIQUE                 |                  |     1 |    31 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    31 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |

Predicate Information (identified by operation id):

   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

One index only?

Then, as in the previous post about TRUNC, I'll check if that new index is sufficient. Let's drop the first one.

SQL> drop index PROD_NAME;
Index dropped.
The previous index is dropped. Let's see if the index on SUBSTR can be used with an equality predicate:
SQL> select distinct prod_name from DEMO where prod_name = 'Zero';
no rows selected

Execution Plan
Plan hash value: 953445334

| Id  | Operation                    | Name             | Rows  | Bytes |
|   0 | SELECT STATEMENT             |                  |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT          |                  |     1 |    27 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    27 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |

Predicate Information (identified by operation id):

   2 - filter("PROD_NAME"='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

Good. The index on substring is used for index range scan on the prefix, and then the filter occurs on the result. This is fine as long as the prefix is selective enough.

It is also available with inequality:
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected


Predicate Information (identified by operation id):

   2 - filter("PROD_NAME">'Z')
   3 - access(SUBSTR("PROD_NAME",1,1)>='Z')

And we can use it even when using a substring with a different number of characters:
SQL> select distinct prod_name from DEMO where substr(prod_name,1,4) = 'Zero';
no rows selected


Predicate Information (identified by operation id):

   2 - filter(SUBSTR("PROD_NAME",1,4)='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

However, if we use the LIKE syntax:

SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
Plan hash value: 51067428

| Id  | Operation          | Name | Rows  | Bytes |
|   0 | SELECT STATEMENT   |      |     1 |    27 |
|   1 |  HASH UNIQUE       |      |     1 |    27 |
|*  2 |   TABLE ACCESS FULL| DEMO |     1 |    27 |

Predicate Information (identified by operation id):

   2 - filter("PROD_NAME" LIKE 'Z%')

The LIKE syntax cannot filter through the index on SUBSTR. So there are cases where we have to keep both indexes: the index on the full column for LIKE predicates, and the index on the substring for SUBSTR predicates.

Note that indexes on SUBSTR are mandatory when you have columns larger than your block size, which is probably the case if you allow extended datatypes (VARCHAR2 up to 32k).
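As an illustrative sketch (assuming max_string_size=extended and a standard block size; the table and index names are made up for the example):

```sql
-- Hypothetical example: with a 32k column, a regular index on the full
-- column would exceed the maximum key size, but a substring index works.
create table BIGDEMO ( txt varchar2(32767) );
-- create index BIGDEMO_TXT on BIGDEMO(txt);   -- would fail: key too large
create index BIGDEMO_SUB on BIGDEMO( substr(txt,1,1000) );
```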

AOUG - Real World Performance Tour

Thu, 2015-03-26 13:26

This week, Tom Kyte, Graham Wood and Andrew Holdsworth were in Europe for several dates. One of the events was organised by the Austrian Oracle User Group (AOUG) in collaboration with the German and Swiss user groups (DOAG and SOUG), and I had the chance to attend one session of the Real World Performance Tour in Vienna.