
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

An alternative to DBA_EXTENTS optimized for LMT

Sun, 2015-06-07 11:45

This is a script I have had for several years, since tablespaces became locally managed. When we want to know which segment a block (identified by file id and block id) belongs to, the DBA_EXTENTS view can be very slow when you have a lot of datafiles and a lot of segments. This view, built on the underlying X$ tables and constrained by hints, is faster when queried for one FILE_ID/BLOCK_ID. I wrote it in 2006 when dealing with a lot of corruptions on several 10TB databases with 5000 datafiles.

Since then, I've used it only a few times, so there is no guarantee that the plan is still optimal in the current version, but the approach of first filtering the segments that are in the same tablespace as the file_id makes it optimal for a search by file_id and block_id.

The script

Here is the creation of the DATAFILE_MAP view:

create or replace view datafile_map as
WITH
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn, 
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno 
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn, 
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno 
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialized */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn,t.name tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2  
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn 
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
SELECT /*+ use_nl(e) ordered */ 
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,l e
 WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) ordered */  
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS (
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block   segbid,s.segment_type,s.owner,s.segment_name,s.partition_name 
  FROM SYS_DBA_SEGS s 
 )
SELECT 
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT 
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
  tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files
;
Sample output
COLUMN   partition_name ON FORMAT   A16
COLUMN   segment_name ON FORMAT   A20
COLUMN   owner ON FORMAT   A16
COLUMN   segment_type ON FORMAT   A16

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3782 between block_id and block_id + blocks - 1
SQL> /

 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME     PARTITION_NAME
-------- -------- ------- ---------------- ---------------- ---------------- ----------------
    1326     3781      32 free space

you identified a free space block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3982 between block_id and block_id + blocks - 1
SQL> /


 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
-------- -------- ------- ---------------- ---------------- -------------------- ----------------
    1326     3981       8 TABLE PARTITION  TESTUSER         AGGR_FACT_DATA       AFL_P_211

you identified a data block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=202 and 100 between block_id and block_id + blocks - 1
SQL> /

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
       202          1       1280 tempfile                          C:O102TEMP02.DBF

you identified a tempfile file_id

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1 and block_id between 0 and 100 order by file_id,block_id;

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
         1          9          8 ROLLBACK         SYS              SYSTEM
         1         17          8 ROLLBACK         SYS              SYSTEM
         1         25          8 CLUSTER          SYS              C_OBJ#
         1         33          8 CLUSTER          SYS              C_OBJ#
         1         41          8 CLUSTER          SYS              C_OBJ#
         1         49          8 INDEX            SYS              I_OBJ#
         1         57          8 CLUSTER          SYS              C_TS#
         1         65          8 INDEX            SYS              I_TS#
         1         73          8 CLUSTER          SYS              C_FILE#_BLOCK#
         1         81          8 INDEX            SYS              I_FILE#_BLOCK#
         1         89          8 CLUSTER          SYS              C_USER#
         1         97          8 INDEX            SYS              I_USER#

you mapped the first segments in the SYSTEM tablespace

Try it on a database with a lot of segments and a lot of datafiles, and compare with DBA_EXTENTS. Then you will know which one to choose in case of emergency.
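For the comparison, here is the classic lookup through DBA_EXTENTS, as a sketch (the file_id and block_id values are simply the ones used in the examples above); it returns the same kind of information but is usually much slower on databases with many datafiles and segments:

-- classic lookup through DBA_EXTENTS (slow on large databases)
select owner, segment_name, partition_name, segment_type,
       tablespace_name, extent_id, block_id, blocks
from   dba_extents
where  file_id = 1326
and    3782 between block_id and block_id + blocks - 1;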

SQL Server 2016: availability groups and the new option DB_FAILOVER

Wed, 2015-06-03 14:41

Let's continue the discovery of SQL Server 2016 with one of my favourite topics: AlwaysOn availability groups (AAGs). There are some improvements in this area.

First of all, let's introduce one of them (the DB_FAILOVER option) by recalling the availability group behaviour in previous versions of SQL Server. One misconception that exists at various customer sites concerns database failures: an availability group does not detect a database failure. No, this is not a joke but the pure reality. Until the introduction of SQL Server 2016, AAGs were designed to detect issues only at the SQL Server instance level. You can verify this yourself by looking at the different possible failover condition levels.

Indeed, SQL Server 2016 adds a new layer of failover capability with the new DB_FAILOVER option. According to the BOL, SQL Server will now respond when a database on the primary replica goes offline. Let's be precise about what offline means here: any status other than ONLINE for a database in the availability group will trigger an automatic failover. Keep in mind that this new feature is set at the group level, which means that all databases included in the group are concerned by this option.
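As a sketch, the option can also be enabled or checked afterwards on an existing group; the ALTER AVAILABILITY GROUP syntax below and the db_failover column of sys.availability_groups are based on the CTP2 documentation, so treat them as assumptions until the RTM:

-- enable database-level failover detection on an existing group (run on the primary replica)
ALTER AVAILABILITY GROUP [2016Grp] SET (DB_FAILOVER = ON);

-- check the current setting (db_failover column assumed to be exposed in this CTP)
SELECT name, db_failover FROM sys.availability_groups;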

During my session at the previous Journées SQL Server 2014 event in Paris, I had the opportunity to show that no action was taken by the availability group for a database that had lost its transaction log. To be more precise, I simulated a scenario where the disk that contains the transaction log file is lost. This is probably the most common scenario in which we may encounter a database file failure. Let's perform the same test today with SQL Server 2016.

Let’s begin with the creation script of the availability group 2016Grp. I put only the interesting part below:

 

CREATE AVAILABILITY GROUP [2016Grp]
WITH (
       AUTOMATED_BACKUP_PREFERENCE = PRIMARY,
       DB_FAILOVER = ON,
       DTC_SUPPORT = NONE
)
FOR DATABASE [killerdb], [SSISDB]
REPLICA ON N'SQL161' WITH (
       ENDPOINT_URL = N'TCP://SQL161.dbi-services.test:5022',
       FAILOVER_MODE = AUTOMATIC,
       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
       BACKUP_PRIORITY = 50,
       SECONDARY_ROLE(ALLOW_CONNECTIONS = NO)
),
N'SQL162' WITH (
       ENDPOINT_URL = N'TCP://SQL162.dbi-services.test:5022',
       FAILOVER_MODE = AUTOMATIC,
       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
       BACKUP_PRIORITY = 50,
       SECONDARY_ROLE(ALLOW_CONNECTIONS = NO)
);
GO

 

This is a basic availability group with two replicas (SQL161 and SQL162), configured with synchronous replication and automatic failover, that includes two databases named killerdb and SSISDB. You may notice the new options at the top of the script, DB_FAILOVER and DTC_SUPPORT, which are available from the GUI as well:

 

blog_49_-_4_-_alwayson_new_parameter_db_failover

 

 

Let’s continue by creating a simple table dbo.t in the killerdb database and next, by inserting a bunch of data.

 

USE killerdb;
GO

CREATE TABLE dbo.t (
       id INT NOT NULL IDENTITY(1,1)
);
GO

SET NOCOUNT ON;
GO

INSERT dbo.t DEFAULT VALUES;
GO 200

SELECT COUNT(*) FROM dbo.t;

 

As expected the table contains 200 records.

Now, it's time to unplug from my computer the disk that contains the killerdb transaction log file. After that, a new insertion of 200 additional rows raises the following error:

 

Msg 9001, Level 21, State 3, Line 7
The log for database 'killerdb' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 0, Level 20, State 0, Line 6
A severe error occurred on the current command. The results, if any, should be discarded.

 

And good news! The availability group 2016Grp has failed over automatically to the SQL162 replica in my case. Bear in mind that for one database failure, the whole group will fail over. Let's take a look at the AlwaysOn_health session:

 

blog_49_-_6_-_SQLDIAG_xel_file_detail

 

You can see the response of the availability group to error 1117 (I/O), related to my unplugged-disk event. On the new secondary, the killerdb database is no longer online; it is in the RECOVERY PENDING state.

 

SELECT
       @@SERVERNAME as server_name,
       name AS database_name,
       state_desc
FROM sys.databases
WHERE name = 'killerdb';
GO

 

blog_49_-_7_-_killerdb_state

 

This first improvement is surely a good way to enhance the overall availability of an AlwaysOn availability group. We'll discuss the other interesting new options in the next blog posts.

Stay connected!

 

 

Oracle memory advisors: how relevant?

Wed, 2015-06-03 11:35

Do you look at memory advisors? I usually don't, but I still show them during the performance tuning workshop because there is no easy way to get an idea about memory sizing. This morning, while giving the training, I showed on our workshop environment what the problem is: they are based on statistics cumulated since instance startup, which can cover months of heterogeneous activity, and then the result is probably meaningless.

If you want to trust the advisors, then you need to know which measures they are based on. Let's check.

V$DB_CACHE_ADVICE

Here is how we query V$DB_CACHE_ADVICE in order to see the estimated physical reads for different buffer cache sizes:

SQL> 
SELECT a.size_for_estimate "Buffer size MB",
       a.size_factor "Factor size",
       round(a.estd_physical_read_time/1000,2) "Estim. time (s)",
       a.estd_physical_read_factor "Estim. time factor",
       a.estd_physical_reads "Estim. nb physical read"
FROM  sys.v$db_cache_advice a
WHERE a.name='DEFAULT'
ORDER BY a.size_for_estimate;

Buffer size MB Factor size Estim. time (s) Estim. time factor Estim. physical read
-------------- ----------- --------------- ------------------ --------------------
            24       .0909            1.31             8.4871             23424349
            48       .1818            1.16             7.5612             20868825
            72       .2727             .37             2.3838              6579289
            96       .3636             .31             1.9787              5461235
           120       .4545             .26             1.6831              4645325
           144       .5455             .23             1.4912              4115679
           168       .6364             .21             1.3713              3784848
           192       .7273              .2             1.2564              3467715
           216       .8182             .18             1.1418              3151277
           240       .9091             .16             1.0568              2916629
           264           1             .16                  1              2759998
           288      1.0909             .15              .9351              2580935
           312      1.1818             .14              .8736              2411003
           336      1.2727             .13              .8291              2288418
           360      1.3636             .12              .7918              2185486
           384      1.4545             .12              .7537              2080272
           408      1.5455             .11              .7035              1941706
           432      1.6364              .1              .6479              1788252
           456      1.7273             .09              .6021              1661696
           480      1.8182             .09               .554              1529086

Look at the factor 1 - the current values. The advisor is based on 2.7 million physical reads. Let's see if it is based on statistics since instance startup or a shorter period.

V$SYSSTAT

I'll display the instance statistics (cumulative since instance startup) that measure physical reads:

SQL> select value,name from v$sysstat where name like 'physical reads %';

     VALUE NAME
---------- ----------------------------------------------------------------
   2760403 physical reads cache
  86342292 physical reads direct
     33656 physical reads direct temporary tablespace
     76909 physical reads cache prefetch
     13105 physical reads prefetch warmup
         0 physical reads retry corrupt
      3428 physical reads direct (lob)
         0 physical reads for flashback new
         0 physical reads cache for securefile flashback block new
         0 physical reads direct for securefile flashback block new

Here it's clear: the advisor was based on the 2.7 million physical reads to cache. Those values are cumulated from instance startup. If the instance was started a long time ago, then there is nothing relevant here: activity is not regular, memory components have been resized several times, etc. And if the instance has been started recently, then the cache activity is not significant: you did a lot of physical reads just to load the cache.

AWR / Statspack

Ok, we know that V$ views are cumulative from instance startup. When we want to look at statistics over a shorter period of time, we have AWR or Statspack. Here is the Buffer Pool Advisory section:

Buffer Pool Advisory                                DB/Inst: DB1/DB1  Snap: 61
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

                                    Est
                                   Phys      Estimated                  Est
    Size for   Size      Buffers   Read     Phys Reads     Est Phys % DBtime
P    Est (M) Factor  (thousands) Factor    (thousands)    Read Time for Rds
--- -------- ------ ------------ ------ -------------- ------------ -------
D         24     .1            3    8.5         23,422            1  1307.0
D         48     .2            6    7.6         20,866            1  1164.0
D         72     .3            9    2.4          6,579            1   368.0
D         96     .4           12    2.0          5,461            1   306.0
D        120     .5           15    1.7          4,645            1   260.0
D        144     .5           18    1.5          4,116            1   231.0
D        168     .6           21    1.4          3,785            1   213.0
D        192     .7           24    1.3          3,468            1   195.0
D        216     .8           26    1.1          3,151            1   177.0
D        240     .9           29    1.1          2,917            1   164.0
D        264    1.0           32    1.0          2,760            1   155.0
D        288    1.1           35    0.9          2,581            1   145.0
D        312    1.2           38    0.9          2,411            1   136.0
D        336    1.3           41    0.8          2,289            1   129.0
D        360    1.4           44    0.8          2,186            1   123.0
D        384    1.5           47    0.8          2,080            1   118.0
D        408    1.5           50    0.7          1,942            1   110.0
D        432    1.6           53    0.6          1,788            1   101.0
D        456    1.7           56    0.6          1,662            1    94.0
D        480    1.8           59    0.6          1,529            1    87.0
                          ------------------------------------------------------

This looks like the cumulative values from instance startup. But I want to be sure, not guess. The instance statistics section shows the number of physical reads to cache during this period of time:

Instance Activity Stats                         DB/Inst: DB1/DB1  Snaps: 60-61
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------

physical reads                                1,973           22.9         140.9
physical reads cache                          1,863           21.7         133.1

Only a few thousand of them here. This confirms that the advisor is not based on delta values.
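If you want a delta-based figure yourself, you can compute it from the AWR history (Diagnostics Pack license required). This is just a sketch; snap_ids 60 and 61 are the ones from the report above:

-- delta of 'physical reads cache' between two AWR snapshots (sketch)
select e.snap_id, e.value - b.value delta_physical_reads_cache
from   dba_hist_sysstat b, dba_hist_sysstat e
where  b.stat_name = 'physical reads cache'
and    e.stat_name = b.stat_name
and    b.dbid = e.dbid
and    b.instance_number = e.instance_number
and    b.snap_id = 60
and    e.snap_id = 61;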

Conclusion

From what we see, the memory advisors are based on values cumulated since instance startup. I always advise focusing the performance analysis on a short period where activity is regular, so I can't advise relying on those advisors. You can look at them in the first few days after instance startup, just to get an idea, but don't rely only on that. In my opinion, this is a bug: there is no reason to show cumulative values in a Statspack/AWR report. I think the advisor could do similar estimations on delta values. Anyone volunteer to open a bug?

Ways to improve your SharePoint Intranet

Wed, 2015-06-03 02:23
The world is changing: companies today take different paths to communicate better with their staff, boost productivity, and improve the effectiveness of their strategy and business direction.
SharePoint has become a reference tool in this regard; however, many organizations still struggle to get the most out of the features it offers and find it difficult to increase adoption. In this article, we will look at the key points to improve your SharePoint environment.
USE "FOLLOW" AND "SHARE"

SharePoint 2013 has some great features that are often overlooked; for example, the "follow" and "share" features are simple ways to keep track of interesting documents and to share information with others.
FOLLOW: users are notified when a change is made to anything they are following, whether conversations or documents.
SHARE: allows individuals to share relevant sites and documents with colleagues they feel will benefit from the content. Both these functionalities are ways for people in the organisation to take control of getting up-to-date information and distributing what they have learnt to others.

followShare

 








It could be like gossip... ah ah... Follow and Share!
MY SITE

SharePoint "My Site" can be compared to a user’s Facebook page for the workplace.
MY SITE: serves as a user’s personal page for sharing information about their skills, expertise and interests with others in the company. It also gives a user a central location to manage and store contacts, documents and anything they want.
A "My Site" can be personalized so that it is unique for each user.

MySite

Share carefully, "My Site" is a place to share professional information!

SHAREPOINT MOBILITY

In this mobile age, it's fair to say the biggest improvement any organisation can make to increase usage of an intranet is to make it mobile compatible. Companies may want to consider rolling these features out slowly, or even start with a test project in order to get feedback from business people.
The key is to involve the users themselves to improve adoption step by step.

phonemobile

Access the information and documents needed from everywhere!

STEPS IMPORTANCE

Companies that get the SharePoint implementation right often start simply, with many of the features disabled.
Taking the time to do things the right way, using BEST PRACTICES step after step, will drive the SharePoint implementation to success; leave the "bells and whistles" until last.

Example:   

  • Event updates
  • Replacing slowly the existing Intranet
  • Create document management
  • Create forms management
  • Use business process and workflow management
  • Start sharing B.I dashboards and reports
  • ...

the_fall

"The shortest way to do many things is to do only one thing at once"

PROVIDE TRAINING AND SUPPORT

To drive SharePoint usage, provide training and support to staff. This is the key to sustainable adoption.
The sessions must be adapted to the different user profiles: business user, power user, site owner, site collection admin. The minimum knowledge level should be Business User.
Use knowledge resources such as FAQs, checklists, etc.

Here are 2 points from Microsoft regarding Training and Support:

 

Support users by creating a training plan

  • Establish short, just-in-time training options for users
  • Ensure that your site owners are properly trained before giving them site ownership
  • Provide training to content contributors to ensure effective content management
  • Create a site owner community to enable users to help each other

Ensure ongoing success by creating a user support plan

  • Establish a contact person for every page
  • Establish a SharePoint Center of Excellence within your organization to provide high-end support for users
  • Survey users on a regular basis to gather feedback and establish metrics
  • Ensure content gets moved from legacy platforms to SharePoint in a planned manner

more details: SharePoint Adoption Guide - Microsoft

 

CONCLUSION

End user adoption is the key to your SharePoint Intranet's success!

SQL Server 2016: Live query statistics

Tue, 2015-06-02 16:44

During my tests of the SQL Server 2016 CTP2, I noticed an additional icon in SQL Server Management Studio, close to the Actual Execution Plan button, as shown below:

 

blog_48_-_1_-_query_live_statistics_icon

 

I decided to take a look at the SQL Server 2016 BOL and I found a topic about this feature. It seems to be a fun feature, so I decided to test it with one of my financial queries (developed for one of my customers) which usually takes a long time to run.

Let's go ahead. After running my long query, this first test was in fact inconclusive, because after opening a new tab I got stuck with the following message, even after stopping my query execution:

 

blog_48_-_2_-_message_waiting_for_a_query_plan

 

Maybe my query is too complex and I will have to investigate in the near future. Updated 04.06.2015: it seems to be a bug in the current release of SQL Server 2016 (CTP2), so I will try again with the next CTP.

Let's continue by cutting my big query into smaller pieces of code; this time the test ran successfully. Indeed, I was able to see a "progressive" query plan where we can see how long operators are taking, as well as how far they are in their operations, with some awesome animation.

 

blog_48_-_5_-_live_query_statistics

 

Basically, we get an "enhanced" execution plan tree. The dotted lines point out the operations in progress, with some additional information such as the current duration of each operator. Likewise, we can retrieve all the other usual information that concerns a query execution plan.

However, I noticed during my tests that the percentage calculation doesn't work on the operators if the row estimate is not correct (it gets stuck at 100% while the time keeps ticking away). At this point, I remembered that I faced the same problem with the DMV sys.dm_exec_query_profiles introduced in SQL Server 2014 (please take a look at my blog post here). Let's perform the same test by using the sys.dm_exec_query_profiles DMV: as expected, I noticed the same duration issue when the cardinality estimation was wrong.
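For reference, here is a minimal sketch of the kind of query I run against sys.dm_exec_query_profiles to compare actual and estimated row counts per operator while the statement is executing (the session_id is a hypothetical value; remember that in SQL Server 2014 this DMV is only populated when the actual execution plan is collected for that session):

-- live progress per plan operator for a monitored session (sketch)
SELECT node_id,
       physical_operator_name,
       row_count,            -- rows produced so far
       estimate_row_count,   -- optimizer estimate
       CAST(100. * row_count / NULLIF(estimate_row_count, 0) AS decimal(10, 2)) AS pct_of_estimate
FROM sys.dm_exec_query_profiles
WHERE session_id = 53        -- hypothetical spid of the monitored session
ORDER BY node_id;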

 

 

blog_48_-_4_-_sys_dm_exec_query_stats

 

I guess we will learn more about this interesting feature in the future. We're certainly on the right track to detect expensive operations just by looking at the pretty cool animation!

 

 

How to disable a SQL Plan Directive permanently

Tue, 2015-06-02 15:10

In 12c you will see a lot of SQL Plan Directives. Some are useful to get better execution plans, but some will trigger too much Adaptive Dynamic Sampling, and that can become a big overhead, especially in Standard Edition. So you don't want to drop them - they would reappear. You can disable them, but what will happen after the retention weeks? Let's test it.

Disabled directive

A directive has been created which triggers too expensive dynamic sampling. You don't want that and you have disabled it one year ago with:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
and everything is good. You're happy with that. Here is the directive:
SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 28-APR-14 YES       NO      {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
The directive has not been used since April 2014, thanks to ENABLED being set to NO.

If I run a query with a filter on those columns:

SQL> select count(*) Q1 from DEMO_TABLE where a+b=c+d;

                  Q1
--------------------
               10000

23:10:32 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(*) Q1 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |
-----------------------------------------

there is no dynamic sampling, which is exactly what I want.

Retention weeks

My retention is the default: 53 weeks. Let's see what happens after 53 weeks. I can call the 'auto drop' job with dbms_spd.drop_sql_plan_directive, passing null instead of a directive_id:

SQL> exec dbms_spd.drop_sql_plan_directive(null);

PL/SQL procedure successfully completed.

Run a few queries

Then let's have a few queries on those table columns:

SQL> select count(*) Q2 from DEMO_TABLE where a+b=c+d;

                  Q2
--------------------
               10000

SQL> select count(*) Q3 from DEMO_TABLE where a+b=c+d;

                  Q3
--------------------
               10000

and check the execution plan:

SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(*) Q3 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |
-----------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

A directive has been used:
 SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 15-MAY-15 YES       YES     {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
Oh! The directive is back and enabled!

Auto Drop

Here are the criteria for auto-drop. SPDs are considered for drop when AUTO_DROP is YES and at least one of the following is true:

  • SPD is flagged as redundant
  • One of the tables has been dropped (being in the recycle bin counts as dropped)
  • LAST_USAGE is from before the retention window
  • State is NEW (LAST_USED is null) and CREATED is before retention window
Do you see? Nothing about ENABLED YES/NO there...

Conclusion

If you want to disable an SPD and be sure that it will never reappear, then you have to do both of the following:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'AUTO_DROP','NO');
Then, because AUTO_DROP is disabled, the directive will never be dropped automatically.
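As a quick double-check, a look at DBA_SQL_PLAN_DIRECTIVES should then show both flags set to NO for that directive, for example:

select directive_id, enabled, auto_drop, state
from   dba_sql_plan_directives
where  directive_id = 14130932452940503769;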

Recap SharePoint Event Paris 2015

Tue, 2015-06-02 03:12

 




conf_paris


Guillaume Meunier and I went to the SharePoint Event 2015 in Paris on Saturday, May 30.

This event was well organized and on time! We learned about the new SharePoint features related to BI, SQL, governance, collaboration & communication, workflow, and process systems.




The sessions we attended were:

How OneDrive Company revolutionized storing files in my business
Almost all companies use SMB file shares and network drives deployed on workstations to store and centralize the documents they produce. With SharePoint 2013, whether on-premise or online, you can get this capability while also providing mobility, an offline mode and a full web mode. Storing and organizing documents is the first step of a document management implementation. In addition, during a quality audit, the auditor's work is made easier. This session was presented by Kevin Trelohan and Hans Brender.

 

Making SharePoint Governance work for Businesses, IT and Users
With every SharePoint implementation comes the issue of governance. We all need it, but no one wants to do it. This session showed us - in a practical way - how to implement a good governance practice that will engage users, IT and business users throughout the entire lifetime of your SharePoint platform. We had an overview of practical tools and methods to overcome most of the issues we have to deal with, and a complete framework for SharePoint governance was shared. This session was the latest revision of the SharePoint Governance session delivered by Anders Skjoenaa.

Making-SharePoint-Governance-work-for-business-IT-and-users_20150602-093333_1.pdf

Integrating SharePoint into everyday working methods for a successful user adoption
For any organization, a SharePoint implementation means a significant investment, both human and financial.
Once the platform deployment and content migration are achieved, many companies face the same issue: low usage by their business users.
AvePoint showed us their solutions to enhance business users' daily work: better management of internal meetings and projects directly in SharePoint, sharing content from SharePoint with external stakeholders, using Outlook/Office to file documents directly into SharePoint, and a portal to access individualized service catalogs depending on the business needs.
A session presented by Emmanuel Deletang from AvePoint.

 

High Availability & Disaster Recovery in SharePoint 2013 with SQL Server Always On Availability Groups!
SQL Server is really the brain of SharePoint. In this session, Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP) gave us an overview of what any SharePoint consultant and DBA needs to know regarding business continuity in SharePoint 2013. Of course, SQL Server plays a major role in this story.
Topics covered:

  • Concepts of business continuity
  • SharePoint and Business continuity
  • Patterns and anti-patterns
  • SharePoint and SQL Server Always on Availability groups: what works, what doesn’t work (demos) (HA and DR)

HA-et-DR-en-SharePoint-2013-avec-SQL-server-AOAVG.pdf

If you need more information regarding SQL Server, please feel free to contact our experts: Stephane Haby, David Barbarin & Stephane Savorgano.

Power BI 365
Power BI is THE BI brick of Office 365.
Resolutely oriented toward self-service BI, it is intended for users who handle data in their trade. But how, when you are not a Business Intelligence specialist?
This session made a complete tour of the Power BI functionalities: Power Query, Power Pivot, Power View, Power Map, Power Q&A, Power BI sites, and the Power BI app. It helped us discover and understand the added value of self-service BI for users.

A quick reminder: the dbi services BI specialist is Matthieu Munch; do not hesitate to contact him if you need more information regarding BI tools and consulting.

Power-BI-365.pdf

Automate processes with SharePoint?
When talking about business process automation or BPM solutions, the limits of SharePoint are often reached. There are then several possibilities: develop custom solutions, reshape the customer's needs, or use third-party solutions (such as Nintex). There is never a single right way to go, but the most successful approach, I would say, is to follow the BEST PRACTICES for the business owner, which leads to satisfaction. We had a presentation of various methods, their impact on costs, and the capabilities and constraints they induce.

 

The last session was delivered by the whole MVP team; it was a summary of the MS Ignite sessions covering the new features of SharePoint 2016.

MVP_Team
Please have a look at this blog regarding this subject: SharePoint 2016: What’s new? And What to expect?

SAP HANA SQL scripting optimization: the CE Functions

Tue, 2015-06-02 02:15

In SAP HANA, you have two possibilities to create Calculation Views:

  • Using the graphical method
  • Using the scripting method with CE functions

In this blog, I will demonstrate that CE functions can improve the performance of a Calculation View.

First, I will give you some general information regarding the CE functions. Then, I will show you the two different ways to create a Calculation View. Finally, I will compare their performance using a SQL SELECT.

If you want more information regarding the SAP HANA technology, don't hesitate to attend the next dbi services event:

http://www.dbi-services.com/index.php/newsroom-e/events/event-l-in-memory-r-boost-your-it-performance

CE Functions

The CE functions encapsulate data-transformation functionality. They are an alternative to plain SQL statements, as their logic is implemented directly in the SAP HANA calculation engine. Direct use of the calculation engine allows implementers to influence the execution of a procedure or a query, which in some cases is more efficient.

In the table below, you can find the list of the CE Functions and their use cases:

1_SAP_HANA_CEF.PNG
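To give an idea of what the scripting method produces, here is a minimal sketch of a script-based calculation view body using three of those CE functions; the schema, table and column names ("SALES"."ORDERS", REGION, AMOUNT) are purely hypothetical, and var_out must match the output structure defined for the view:

/* sketch of a scripted calculation view using CE functions */
BEGIN
  orders   = CE_COLUMN_TABLE("SALES"."ORDERS", ["REGION", "AMOUNT"]);                   -- read the column table
  filtered = CE_PROJECTION(:orders, ["REGION", "AMOUNT"], '"AMOUNT" > 0');              -- filter and project
  var_out  = CE_AGGREGATION(:filtered, [SUM("AMOUNT") AS "TOTAL_AMOUNT"], ["REGION"]);  -- aggregate per region
END;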

How to create a Calculation View

As I said at the beginning, you have two methods to create a Calculation View in SAP HANA:

  • Using the graphical method
  • Using the Scripting method
Using the graphical Method

Right click on a package and select “New / Calculation View”

2_SAP_HANA_CEF.PNG

Select the “Graphical” type

3_SAP_HANA_CEF.PNG

Create your view following the steps below

        1. Select type of operation you want to perform

        2. Select your sources

        3. Join your sources

        4. Select the object you want to use in your Calculation view

        5. Join your “Aggregation operation” to the “Aggregation” box

4_SAP_HANA_CEF.PNG

Create your Calculation View Layout

          1. Click on the “Aggregation” box

          2. Select the object you want to have in your layout

          3. Check that the objects are selected and add calculated columns if needed

5_SAP_HANA_CEF.PNG

 Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view

6_SAP_HANA_CEF.PNG

Using the “SQL scripting” Method

Right click on a package and select “New / Calculation View”

7_SAP_HANA_CEF.PNG

Select the “SQL Script” type

8_SAP_HANA_CEF.PNG

Create your view following the steps below

          1. Click on the “Script View” box

          2. Type your SQL script

          3. Introduce the CE functions

9_SAP_HANA_CEF.PNG

Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view

10_SAP_HANA_CEF.PNG

SQL performance comparison

Goal of the test

In this part, I will compare the SQL performance of two calculation views built with the two different methods:

  • “Graphical” method
  • “SQL scripting” method
Description of the test

The same SELECT query will be sent to the database and we will check the server response time. The two SELECTs will use different calculation views as data sources:

  • CAL_AMOUNT (graphical method)
  • CAL_AMOUNT_CE (SQL Scripting method)
Test with the "Graphical" calculated view

SQL Query

11_SAP_HANA_CEF.PNG

Performance

12_SAP_HANA_CEF.PNG

Test with the "SQL Scripting" calculated view

SQL Query

13_SAP_HANA_CEF.PNG

Performance

14_SAP_HANA_CEF.PNG

Performance decoding

Why does the use of these CE functions boost the performance of your queries? The explanation lies in the query execution plan, and especially in the use of the calculation engine of the SAP HANA database.

When you send a "normal" SQL query to the SAP HANA database, the calculation engine is not used. The SQL parser sends the query directly to the database optimizer, which optimizes the execution of the query (1).

15_SAP_HANA_CEF.PNG

When your SELECT query uses a calculated view with CE functions, the calculation engine optimizes the calculation model.

16_SAP_HANA_CEF.PNG

In our case, when we analyze the "normal" SQL query, the "calculation search" task has been split into 3 different sub-queries that cannot start at the same time.

But with the optimized calculated view, we can see that there are only 2 sub-queries.

17_SAP_HANA_CEF.PNG

Conclusion

The use of CE functions in the creation of calculated views can significantly accelerate the execution of your SQL queries. The calculation engine of SAP HANA is optimized to use these functions.

There is only one restriction when using this kind of functions: performance will drop dramatically if you try to create a SQL query mixing "normal" and "optimized" calculated views.

SQL Server 2016 CTP2 : first thoughts about tempdb database

Sun, 2015-05-31 21:00

In my first blog about SQL Server 2016, I noticed during the SQL Server installation process that we are able to configure the number of tempdb files. This is surely a great improvement, but at this point you may think that the configuration of the tempdb database is finished; in fact, you'd be wrong.

Let me explain why. First, let's have a look at the tempdb file configuration after installing SQL Server.

 

use tempdb;
go

select
       name,
       physical_name,
       size * 8 as siz_kb,
       case is_percent_growth
             when 0 then concat(growth * 8, 'KB')
             else concat(cast(growth as decimal(5, 2)), '%')
       end growth
from sys.database_files;
go

 

blog_46-_1_-_tempdb_first_config

 

On my virtual environment, the installation process detected 4 VCPUs and automatically assigned 4 files to tempdb in my case. If we take a look at the BOL, we can see that the maximum number of files that can be assigned by the installer is 8. At this point, we may think this is a very interesting improvement, because I have faced some performance issues caused by a misconfigured tempdb database during my career. However, let's take a look at the default parameters set by the installer: first, all data files are configured with a size of 8MB and a growth value of 10%. Next, let's have a look at the default path... Ok, it seems that we still have extra steps to finalize the configuration of the tempdb database, and for the moment I'm not sure we can throw away our post-install script for tempdb. Having some additional parameters in the installer (GUI and script) to configure tempdb may be a good idea, so I guess we should keep an eye on this configuration area in the next CTP release.
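Until the installer exposes those settings, a post-install adjustment like the following sketch remains useful; the sizes, growth values and logical file names are assumptions (tempdev being the usual name of the first data file):

-- resize the tempdb data files and switch from percentage to fixed growth (sketch)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 512MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2,   SIZE = 512MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp3,   SIZE = 512MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp4,   SIZE = 512MB, FILEGROWTH = 256MB);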

But that's not all; we also have other good news on this topic: trace flags 1117 (grow all files in a filegroup equally) and 1118 (full extents only) are no longer needed for the tempdb database. This is good news because implementing trace flag 1117 impacts all databases on the server.
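For comparison, this is the kind of server-wide setting we had to apply before SQL Server 2016 (shown here as a sketch with DBCC TRACEON; the -T1117 and -T1118 startup parameters are the more permanent alternative):

-- pre-2016: uniform file growth and full-extent allocation, server-wide
DBCC TRACEON (1117, -1);   -- grow all files in a filegroup equally (affects ALL databases)
DBCC TRACEON (1118, -1);   -- allocate uniform extents instead of mixed extents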

Let me show you with some tests:

 

-- perform a checkpoint to empty the Tlog
checkpoint;
go

-- create a user table in tempdb
create table #t_extent_uniform (
       id int default 1,
       col1 varchar(50) default 'T'
);
go

-- insert one record to allocate pages
insert #t_extent_uniform default values;
go

 

Next, let's take a look at the transaction log content by using the undocumented function sys.fn_dblog(). (Note that there are several ways to check allocation; using the sys.fn_dblog() function is one of them.)

 

select
       [Current LSN],
       Operation,
       Context,
       AllocUnitName,
       Description
from sys.fn_dblog(null, null);
go

 

blog_46-_2_-_extent_allocation_

 

Ok, after inserting my record, I can confirm that SQL Server has allocated a uniform extent for my object.

Let's continue by checking whether SQL Server grows the tempdb data files uniformly. First, I recreated a temporary table with a row size sufficient to quickly fill up a data page.

 

-- create a user table in tempdb
create table #t_extent_uniform (
       id int default 1,
       col1 char(8000) default 'T'
);
go

 

We will check the growth and filling of the tempdb data files with the following script:

use tempdb;
go

select
       name,
       physical_name,
       size / 128 as siz_mb,
       case is_percent_growth
             when 0 then concat(growth * 8, 'KB')
             else concat(cast(growth as decimal(5, 2)), '%')
       end growth,
       cast(FILEPROPERTY(name, 'SpaceUsed') * 100. / size as decimal(15, 2)) as space_used_percent
from sys.database_files
where type_desc = 'ROWS'
go

 

So let's perform a basic test by inserting a bunch of data to trigger a growth of the tempdb data files:

- Here is the situation before inserting the data:

 

blog_46-_3_-_tempdb_data_size

 

- Before triggering the growth of the tempdb data files:

At this point, all the data files are filled up by SQL Server as expected:

 

blog_46-_4_-_tempdb_data_size

 

- And after several insertions and growths of the tempdb data files:

 

blog_46-_5_-_tempdb_data_size

 

We performed basic tests here, and we will probably have to take other real scenarios into account, but for the moment let's savour this good news. Just to clarify, this improvement is not magic and we will still face some concurrency issues with workloads that heavily use the tempdb database. However, introducing this new configuration area directly in the installer may encourage DBAs to be aware of the tempdb database. Just note that there are still other interesting mechanisms to improve tempdb concurrency, such as the temporary object caching mechanism, but they often require reviewing the T-SQL code on the application side (in my opinion, this is probably the most important improvement I have ever seen on the tempdb topic).

I’m confident that we’ll get other good news, so stay connected!

 

Convert a snapshot standby to a new database

Sun, 2015-05-31 10:05

A snapshot standby database is a nice solution when you want a temporary copy of the primary where you can do whatever you want (test a bug fix, for example) and then convert it back to a physical standby. But you don't want to stay in that state indefinitely, because you will accumulate archived logs from the changes made to the primary, and flashback logs from the changes made to the snapshot standby.

Here I'll show how to finish it when you want a permanent new database. You can use this when you want to do a duplicate but don't know in advance which point in time should be used. You just configure a physical standby, convert it to a snapshot standby, and maybe flash it back to the required point. Once you have found the right point, you finish it with a new database name and DBID.
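As a rough sketch of the conversion itself (Data Guard broker commands not shown, restore point name hypothetical); once the right point is found, the new DBNAME/DBID is typically set with the nid utility:

-- on the standby, with managed recovery stopped and the database mounted
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- optionally set your own restore point before testing (hypothetical name)
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;

-- and, from the MOUNT state, go back to it if the test went too far
FLASHBACK DATABASE TO RESTORE POINT before_test;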

testing postgresql development snapshots

Thu, 2015-05-28 03:00
If you want to test some of the features that will come with the next PostgreSQL release, even before an official beta version is released, you can use the daily development snapshot.

SQL Server 2016 CTP2 is out!

Wed, 2015-05-27 14:22

As announced by Satya Nadella at the Microsoft Ignite keynote in Chicago, SQL Server 2016 CTP2 is finally out and it follows a previous private CTP.

This CTP will provide some interesting features to evaluate as:

  • Always Encrypted
  • Stretch databases
  • Operational analytics

However, this CTP does not stop here and will incorporate a myriad of new features and many improvements, such as:

  • Polybase
  • AlwaysOn enhancements
  • Row level security
  • Dynamic data masking
  • Native JSON support
  • Temporal data support
  • Query store
  • MDS enhancements
  • Enhanced hybrid backup to Azure

Installation updates are also included with CTP now!

 

Ok just for fun, here the first point that I noticed during my first SQL Server 2016 installation:

 

blog_46_-_2_-_install_sql2016_-_tempdb

 

Tempdb is now configurable during the installation process (the number of files is computed according to the number of CPUs available on the server). There will be other points for sure!

Ready to evaluate SQL Server 2016? At dbi services this is the case, and we'll have the opportunity to talk about the new features this summer. We'll also have the opportunity to give you a quick overview of the new in-memory features during our In-Memory event.

Because SharePoint 2013 has also limitations!

Wed, 2015-05-27 06:42
WHAT IS?

Boundaries: static limits that cannot be exceeded by design. An example of a boundary is the 2 GB document size limit.

Thresholds: configurable limits that can be exceeded to accommodate specific requirements. A good example is the document size limit.

Supported limits: configurable limits that have been set by default to a tested value. An example of a supported limit is the number of site collections per farm.

 

In other words, let's try to explain quickly what those terms mean for non-IT people!
Let's take a human analogy:

  • A boundary would be defined by the limits of one's personal space.
  • A threshold would be the point at which a stimulus is great enough to produce an effect.
    For example, a pain threshold is the point at which a person becomes aware of pain.

  • The supported limits would be the body/brain limitations "by default", which can be enhanced by practicing sport or learning.

 

However, the most important basic ones are listed below:
  • Application pools: 10 per web server 

The maximum number is determined by hardware capabilities.

This limit is dependent largely upon:

    The amount of memory allocated to the web servers

    The workload that the farm is serving, that is, the user base and the usage characteristics (a single highly active application pool can utilize 10 GB or more) 

  • Site collections per farm: 750,000 (500,000 Personal Sites and 250,000 other sites per farm)
  • Web site: 250,000 per site collection
  • Number of content databases: 500 per farm

The maximum number of content databases per farm is 500. With 500 content databases per web application, end user operations such as opening the site or site collections are not affected. But administrative operations such as creating a new site collection will experience decreasing performance. We recommend that you use Windows PowerShell to manage the web application when a large number of content databases are present, because the management interface might become slow and difficult to navigate.

  • Content database size (general usage scenarios): 200 GB per content database

 We strongly recommend limiting the size of content databases to 200 GB.

If you are using Remote BLOB Storage (RBS), the total volume of remote BLOB storage and metadata in the content database must not exceed this limit. 

  • Content database items: 60 million items including documents and list items

The largest number of items per content database that has been tested on SharePoint Server 2013 is 60 million items, including documents and list items. If you plan to store more than 60 million items in SharePoint Server 2013, you must deploy multiple content databases. 

  • File size: 2 GB               

The default maximum file size is 250 MB. This is a configurable limit that can be increased up to 2 GB (2,047 MB). However, a large volume of very large files can affect farm performance.

  • Documents: 30,000,000 per library               

You can create very large document libraries by nesting folders, or by using standard views and site hierarchy. This value may vary depending on how documents and folders are organized, and on the type and size of the documents stored.

  • Major versions: 400,000               

If you exceed this limit, basic file operations—such as file open or save, delete, and viewing the version history— may not succeed. 

  • Minor versions: 511 

The maximum number of minor file versions is 511. This limit cannot be exceeded. 

  • Items: 30,000,000 per list 

You can create very large lists using standard views, site hierarchies, and metadata navigation. This value may vary depending on the number of columns in the list and the usage of the list. 

  • Bulk operations: 100 items per bulk operation               

The user interface allows a maximum of 100 items to be selected for bulk operations. 

  • Coauthoring in Word and PowerPoint for .docx, .pptx and .ppsx files: 10 concurrent editors per document               

Recommended maximum number of concurrent editors is 10. The boundary is 99.

If there are 99 co-authors who have a single document opened for concurrent editing, each successive user sees a “File in use” error, and can only open a read-only copy.

More than 10 co-editors will lead to a gradually degraded user experience with more conflicts, and users might have to go through more iterations to successfully upload their changes to the server.

  • Web parts: 25 per wiki or Web Part page              

This figure is an estimate based on simple Web Parts. The complexity of the Web Parts dictates how many Web Parts can be used on a page before performance is affected.

  • Blog posts: 5,000 per site               

 

  • Comments: 1,000 per post 

The maximum number of comments is 1,000 per post. 

  • File size of Visio web drawings: 50 MB               

Visio Services has a configuration setting that enables the administrator to change the maximum size of web drawings that Visio processes. 

Larger file sizes have the following side effects:

    Increase in the memory footprint of Visio Services.

    Increase in CPU usage.

    Reduction in application server requests per second.

    Increase in overall latency.

    Increase in SharePoint farm network load.

  • Users in a site collection: 2 million per site collection

 

COMPARISON CHART:

Each entry lists the limit name, followed by the SharePoint 2010 maximum value and the SharePoint 2013 maximum value.

Web application limits
  • Web application | 2010: Not Published | 2013: 20 per farm
  • Content database | 2010: 300 per Web application | 2013: 300 per Web application
  • Zone | 2010: 5 per Web application | 2013: 5 per Web application
  • Managed path | 2010: 20 per Web application | 2013: 20 per Web application
  • Solution cache size | 2010: 300 MB per Web application | 2013: 300 MB per Web application
  • Site collection | 2010: 250,000 per Web application | 2013: 250,000 per Web application

Web server and application server limits
  • Application pools | 2010: 10 per Web server | 2013: 10 per Web server

Content database limits
  • Number of content databases | 2010: 300 per Web application | 2013: 500 per farm
  • Content database size (general usage scenarios) | 2010: 200 GB per content database | 2013: 200 GB per content database
  • Content database size (all usage scenarios) | 2010: 4 TB per content database | 2013: 4 TB per content database
  • Content database size (document archive scenario) | 2010: No explicit content database limit | 2013: No explicit content database limit
  • Content database items | 2010: 60 million items including documents and list items | 2013: 60 million items including documents and list items
  • Site collections per content database | 2010: 2,000 recommended, 5,000 maximum | 2013: 2,000 recommended, 5,000 maximum
  • MySites per content database | 2010: Not Published | 2013: 5,000 per content database
  • Remote BLOB Storage (RBS) storage subsystem on Network Attached Storage (NAS) | 2010: Time to first byte of any response from the NAS cannot exceed 20 milliseconds | 2013: Time to first byte of any response from the NAS cannot exceed 20 milliseconds

Site collection limits
  • Site collection | 2010: Not Published | 2013: 2,000,000 per farm
  • Web site | 2010: 250,000 per site collection | 2013: 250,000 per site collection
  • MySites per farm | 2010: Not Published | 2013: 500,000 per farm
  • Site collection size | 2010: Maximum size of the content database | 2013: Maximum size of the content database
  • Number of device channels per publishing site collection | 2010: Not Available in SharePoint 2010 | 2013: 10

List and library limits
  • List row size | 2010: 8,000 bytes per row | 2013: 8,000 bytes per row
  • File size | 2010: 2 GB | 2013: 2 GB
  • Documents | 2010: 30,000,000 per library | 2013: 30,000,000 per library
  • Major versions | 2010: 400,000 | 2013: 400,000
  • Minor versions | 2010: Not Published | 2013: 511
  • Items | 2010: 30,000,000 per list | 2013: 30,000,000 per list
  • Rows size limit | 2010: 6 table rows internal to the database used for a list or library item | 2013: 6 table rows internal to the database used for a list or library item
  • Bulk operations | 2010: 100 items per bulk operation | 2013: 100 items per bulk operation
  • List view lookup threshold | 2010: 8 join operations per query | 2013: 8 join operations per query
  • List view threshold | 2010: 5,000 | 2013: 5,000
  • List view threshold for auditors and administrators | 2010: 20,000 | 2013: 20,000
  • Subsite | 2010: 2,000 per site view | 2013: 2,000 per site view
  • Coauthoring in Word and PowerPoint for .docx, .pptx and .ppsx files | 2010: 10 concurrent editors per document | 2013: 10 concurrent editors per document
  • Security scope | 2010: 1,000 per list | 2013: 1,000 per list

Column limits
  • Single line of text | 2010: 276 | 2013: 276
  • Multiple Lines of Text | 2010: 192 | 2013: 192
  • Choice | 2010: 276 | 2013: 276
  • Number | 2010: 72 | 2013: 72
  • Currency | 2010: 72 | 2013: 72
  • Date and Time | 2010: 48 | 2013: 48
  • Lookup | 2010: 96 | 2013: 96
  • Yes / No | 2010: 96 | 2013: 96
  • Person or group | 2010: 96 | 2013: 96
  • Hyperlink or picture | 2010: 138 | 2013: 138
  • Calculated | 2010: 48 | 2013: 48
  • GUID | 2010: 6 | 2013: 6
  • Int | 2010: 96 | 2013: 96
  • Managed metadata | 2010: 94 | 2013: 94

Page limits
  • Web parts | 2010: 25 per wiki or Web part page | 2013: 25 per wiki or Web part page

Security limits
  • Number of SharePoint groups a user can belong to | 2010: 5,000 | 2013: 5,000
  • Users in a site collection | 2010: 2 million per site collection | 2013: 2 million per site collection
  • Active Directory Principals/Users in a SharePoint group | 2010: 5,000 per SharePoint group | 2013: 5,000 per SharePoint group
  • SharePoint groups | 2010: 10,000 per site collection | 2013: 10,000 per site collection
  • Security principal: size of the Security Scope | 2010: 5,000 per Access Control List (ACL) | 2013: 5,000 per Access Control List (ACL)

Search limits
  • Search service applications | 2010: 20 per farm | 2013: 20 per farm
  • Crawl databases | 2010: 10 crawl databases per search service application, 25 million items per crawl database | 2013: 5 crawl databases per search service application
  • Crawl components | 2010: 16 per search service application | 2013: 2 per search service application
  • Index components | 2010: Not Published | 2013: 60 per Search service application
  • Index partitions | 2010: 20 per search service application, 128 total | 2013: 20 per search service application
  • Index replicas | 2010: Not Available in SharePoint 2010 | 2013: 3 per index partition
  • Indexed items | 2010: 100 million per search service application, 10 million per index partition | 2013: 100 million per search service application, 10 million per index partition
  • Crawl log entries | 2010: 100 million per search application | 2013: 100 million per search application
  • Property databases | 2010: 10 per search service application, 128 total | 2013: 10 per search service application, 128 total
  • Link database | 2010: Not Available in SharePoint 2010 | 2013: Two per Search service application
  • Query processing components | 2010: 128 per search application, 64/(total crawl components) per server | 2013: 1 per server computer
  • Content processing components | 2010: Not Published | 2013: One per server computer
  • Scope rules | 2010: 100 scope rules per scope, 600 total per search service application | 2013: 100 scope rules per scope, 600 total per search service application
  • Scopes | 2010: 200 site scopes and 200 shared scopes per search service application | 2013: 200 site scopes and 200 shared scopes per search service application
  • Display groups | 2010: 25 per site | 2013: 25 per site
  • Alerts | 2010: 1,000,000 per search application (seems like a mistake) | 2013: 100,000 per search application
  • Content sources | 2010: 50 per search service application | 2013: 50 per search service application
  • Start addresses | 2010: 100 per content source | 2013: 100 per content source
  • Concurrent crawls | 2010: 20 per search application | 2013: 20 per search application
  • Crawled properties | 2010: 500,000 per search application | 2013: 500,000 per search application
  • Crawl impact rule | 2010: 100 | 2013: no limit
  • Crawl rules | 2010: 100 per search service application | 2013: no limit
  • Managed properties | 2010: 100,000 per search service application | 2013: 50,000 per search service application
  • Values per managed property | 2010: Not Published | 2013: 100
  • Indexed managed property size | 2010: Not Published | 2013: 512 KB per searchable/queryable managed property
  • Managed property mappings | 2010: 100 per managed property | 2013: 100 per managed property
  • Retrievable managed property size | 2010: Not Published | 2013: 16 KB per managed property
  • Sortable and refinable managed property size | 2010: Not Published | 2013: 16 KB per managed property
  • URL removals | 2010: 100 removals per operation | 2013: 100 removals per operation
  • Authoritative pages | 2010: 1 top level and minimal second and third level pages per search service application | 2013: 1 top level and minimal second and third level pages per search service application
  • Keywords | 2010: 200 per site collection | 2013: 200 per site collection
  • Metadata properties recognized | 2010: 10,000 per item crawled | 2013: 10,000 per item crawled
  • Analytics processing components | 2010: Not Available in SharePoint 2010 | 2013: 6 per Search service application
  • Analytics reporting database | 2010: Not Available in SharePoint 2010 | 2013: Four per Search service application
  • Maximum eDiscovery KeywordQuery text length | 2010: Not Available in SharePoint 2010 | 2013: 16 KB
  • Maximum KeywordQuery text length | 2010: Not Available in SharePoint 2010 | 2013: 4 KB
  • Maximum length of eDiscovery KeywordQuery text at Search service application level | 2010: Not Available in SharePoint 2010 | 2013: 20 KB
  • Maximum length of KeywordQuery text at Search service application level | 2010: Not Available in SharePoint 2010 | 2013: 20 KB
  • Maximum size of documents pulled down by crawler | 2010: Not Available in SharePoint 2010 | 2013: 64 MB (3 MB for Excel documents)
  • Navigable results from search | 2010: Not Available in SharePoint 2010 | 2013: 100,000 per query request per Search service application
  • Number of entries in a custom entity extraction dictionary | 2010: Not Available in SharePoint 2010 | 2013: 1 million
  • Number of entries in a custom search dictionary | 2010: Not Available in SharePoint 2010 | 2013: 5,000 terms per tenant
  • Number of entries in a thesaurus | 2010: Not Available in SharePoint 2010 | 2013: 1 million
  • Ranking models | 2010: Not Available in SharePoint 2010 | 2013: 1,000 per tenant
  • Results removal | 2010: Not Available in SharePoint 2010 | 2013: No limit
  • Term size | 2010: Not Available in SharePoint 2010 | 2013: 300 characters
  • Unique terms in the index | 2010: Not Available in SharePoint 2010 | 2013: 2^31 (>2 billion terms)
  • Unique contexts used for ranking | 2010: Not Available in SharePoint 2010 | 2013: 15 unique contexts per rank model
  • User defined full text indexes | 2010: Not Available in SharePoint 2010 | 2013: 10

User Profile Service limits
  • User profiles | 2010: 2,000,000 per service application | 2013: 2,000,000 per service application
  • Social tags, notes and ratings | 2010: 500,000,000 per social database | 2013: 500,000,000 per social database
  • Maximum Number of Users Imported Via FIM | 2010: Not Published | 2013: 1,000,000

Content deployment limits
  • Content deployment jobs running on different paths | 2010: 20 | 2013: 20

Blog limits
  • Blog posts | 2010: 5,000 per site | 2013: 5,000 per site
  • Comments | 2010: 1,000 per post | 2013: 1,000 per post

Business Connectivity Services limits
  • ECT (in-memory) | 2010: 5,000 per Web server (per tenant) | 2013: 5,000 per Web server (per tenant)
  • External system connections | 2010: 500 per Web server | 2013: 500 per Web server
  • Database items returned per request | 2010: 2,000 per database connector | 2013: 2,000 per database connector
  • Response latency | 2010: Not Published | 2013: 600 seconds
  • Service response size | 2010: Not Published | 2013: 150,000,000 bytes
  • Filter Descriptor (in-store) | 2010: Not Published | 2013: 200 per ECT method
  • ECT Identifier (in-store) | 2010: Not Published | 2013: 20 per ECT
  • Database Item | 2010: Not Published | 2013: 1,000,000 per request

Workflow limits
  • Workflow postpone threshold | 2010: 15 | 2013: 15
  • Workflow timer batch size | 2010: 100 | 2013: 100
  • Workflow associations | 2010: Not Published | 2013: 100 per list
  • List items or documents that can be bulk created or uploaded to start workflow instances | 2010: Not Published | 2013: 5,000 items
  • Published workflow definitions per site | 2010: Not Published | 2013: 1,000 per site
  • Total workflow associations per site | 2010: Not Published | 2013: 1,799 per site
  • Maximum workflow definition (xaml) size | 2010: Not Published | 2013: 5,120 KB
  • Maximum depth of a workflow sub-step in xaml (workflow complexity) | 2010: Not Published | 2013: 121 levels
  • Workflow instance activations per second per web server | 2010: Not Published | 2013: 6 per second
  • REST calls from SharePoint workflow per second per web server | 2010: Not Published | 2013: 60 per second
  • Workflow variable value size | 2010: Not Published | 2013: 256 KB
  • Maximum list size for workflow lookups to non-indexed fields | 2010: Not Published | 2013: 5,000 items per list view
  • Maximum list size for auto-start workflow associations | 2010: Not Published | 2013: 10 million items per list

Managed Metadata term store (database) limits
  • Maximum number of levels of nested terms in a term store | 2010: 7 | 2013: 7
  • Maximum number of term sets in a term store | 2010: 1,000 | 2013: 1,000
  • Maximum number of terms in a term set | 2010: 30,000 | 2013: 30,000
  • Total number of items in a term store | 2010: 1,000,000 | 2013: 1,000,000
  • Number of Variation Labels | 2010: Not Available in SharePoint 2010 | 2013: 209 per term store

Visio Services limits
  • File size of Visio Web drawings | 2010: 50 MB | 2013: 50 MB
  • Visio Web drawing recalculation time-out | 2010: 120 seconds | 2013: 120 seconds
  • Visio Services minimum cache age (data connected diagrams) | 2010: 0 to 24 hours | 2013: 0 to 24 hours
  • Visio Services maximum cache age (non-data connected diagrams) | 2010: 0 to 24 hours | 2013: 0 to 24 hours

SharePoint Web Analytics service limits
  • SharePoint entities | 2010: 30,000 per farm when Web Analytics is enabled | 2013: Deprecated

PerformancePoint Services limits
  • Cells | 2010: 1,000,000 per query on Excel Services data source | 2013: 1,000,000 per query on Excel Services data source
  • Columns and rows | 2010: 15 columns by 60,000 rows | 2013: 15 columns by 60,000 rows
  • Query on a SharePoint list | 2010: 15 columns by 5,000 rows | 2013: 15 columns by 5,000 rows
  • Query on a SQL Server data source | 2010: 15 columns by 20,000 rows | 2013: 15 columns by 20,000 rows

Word Automation Services limits
  • Input file size | 2010: 512 MB | 2013: 512 MB
  • Frequency with which to start conversions (minutes) | 2010: 1 minute (recommended), 15 minutes (default), 59 minutes (boundary) | 2013: 1 minute (recommended), 15 minutes (default), 59 minutes (boundary)
  • Number of conversions to start per conversion process | 2010: for PDF/XPS output formats: 30 x M, for all other output formats: 72 x M, where M is the value of "Frequency with which to start conversions (minutes)" | 2013: for PDF/XPS output formats: 30 x M, for all other output formats: 72 x M, where M is the value of "Frequency with which to start conversions (minutes)"
  • Conversion job size | 2010: 100,000 conversion items | 2013: 100,000 conversion items
  • Total active conversion processes | 2010: N-1, where N is the number of cores on each application server | 2013: N-1, where N is the number of cores on each application server
  • Word Automation Services database size | 2010: 2 million conversion items | 2013: 2 million conversion items

SharePoint Workspace limits (renamed to Office Live Workspace)
  • Office Live Workspace list items | 2010: 30,000 items per list | 2013: 30,000 items per list
  • Office Live Workspace documents | 2010: 1,800 documents limit in SharePoint Workspace | 2013: 1,800 documents in Office Live Workspace

OneNote limits
  • Number of Sections and Section Groups in a OneNote Notebook (on SharePoint) | 2010: see the limit for "Documents" in List and library limits | 2013: see the limit for "Documents" in List and library limits
  • Maximum size of a section | 2010: see the limit for "File size" in List and library limits | 2013: see the limit for "File size" in List and library limits
  • Maximum size of an image, embedded file, and XPS OneNote printout in a OneNote section | 2010: see the limit for "File size" in List and library limits | 2013: see the limit for "File size" in List and library limits
  • Maximum size of all images, embedded files, and XPS printouts in a single OneNote page | 2010: default limit is double the "File size" limit | 2013: default limit is double the "File size" limit
  • Merge operations | 2010: One per CPU core per Web server | 2013: One per CPU core per Web server

Excel Services limits
  • Maximum workbook size | 2010: Not Published | 2013: 10 MB

Machine Translation Service limits
  • Input file size for binary files | 2010: Not Available in SharePoint 2010 | 2013: 524,288 KB per file
  • Input file size for text files | 2010: Not Available in SharePoint 2010 | 2013: 15,360 KB per file
  • Maximum character count for Microsoft Word Documents | 2010: Not Available in SharePoint 2010 | 2013: 10,000,000 per document
  • Total concurrent translation processes | 2010: Not Available in SharePoint 2010 | 2013: 5
  • Delay between translations | 2010: Not Available in SharePoint 2010 | 2013: 59 minutes
  • Number of translations per translation process | 2010: Not Available in SharePoint 2010

For more references:

http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx

http://technet.microsoft.com/en-us/library/cc262787(office.15).aspx


CONCLUSION


 

  Always test the performance and capacity limits of a software product and understand how those limits relate to acceptable performance.
  Having this knowledge before a deployment will help you configure the limits appropriately in your environment
  and build a safe and robust structure.

  "It's good to have money and the things that money can buy, but it's good, too, to check up once in a while and

  make sure that you haven't lost the things that money can't buy." George Lorimer

   

12c Dynamic Sampling and Standard Edition

Tue, 2015-05-26 14:47

12c comes with more dynamic sampling, now called dynamic statistics, using the new Adaptive Dynamic Sampling algorithm. The goal is to have better estimations, and better estimations give better execution plans. However, this new approach will increase parse time because dynamic sampling kicks in more often, reads more blocks, and runs more queries.

It's probably not a problem for applications that are well designed: using bind variables to avoid too many parses, and having good statistics (histograms where they make sense, extended statistics for correlated columns). The SQL Plan Directives are there to trigger dynamic sampling only where misestimates have been observed. An OLTP application should not parse often and should not have huge misestimates. A reporting use case can spend more time on parsing, and the few seconds spent on dynamic sampling will probably benefit the execution time.
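As a quick illustration, here is a minimal sketch (not taken from a specific application) of how to check the dynamic sampling level and see whether a given cursor used dynamic statistics; '&sql_id' is just a placeholder:

-- The default level is 2; setting OPTIMIZER_DYNAMIC_SAMPLING to 11 explicitly
-- enables the 12c automatic dynamic statistics for all statements.
show parameter optimizer_dynamic_sampling

-- The Note section of a cursor's plan shows whether dynamic statistics were used:
select * from table(dbms_xplan.display_cursor('&sql_id',null,'basic +note'));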

A SharePoint - Business Analyst? What for?

Mon, 2015-05-25 08:04
Business analysis is becoming a widely accepted and valued position in organizations today, particularly those that are looking for new ways to enhance information systems.
Good business analysis allows us to deal effectively with complexity, which in turn enables creativity and innovation.

Quick Reminder: What is SharePoint?

Microsoft SharePoint is a robust collaboration platform that empowers people to work together. It is a CMS used to share, organize, discover, build and manage projects and data; in short, it is a collaboration tool.

 

Challenges with SharePoint Implementations

A SharePoint Business Analyst has several challenges to address:

  • Because of the huge range of capabilities of SharePoint, users tend to think it can do anything; the BA has to prioritize the needs.
  • To get there, the BA has to involve the Business and include the right people (departments, communication, SharePoint support team, sponsors,...) to implement SharePoint successfully.
  • Understanding the users' requirements is one of the most important parts: the BA has to translate the information given into "IT / SharePoint words". Without this understanding, most of the time you will get this statement: "SharePoint? Yes, we have it, but we don't use it".
  • Defining the Business and stakeholder needs and perspectives is the key! Perspectives can be very different depending on the user's role: an IT person thinks everyone wants wikis, records managers believe everything should be classified as a record, a marketing person thinks that if it sparkles people will use it, and others will always use folders. It really depends on the user.
  • User adoption: each organization is different and, as they say, "change is hard", which creates complexity. The Business Analyst will have to plan and communicate change in order to manage these steps in the best possible way.
What is the SharePoint Business Analyst role?

The Business Analyst should be one of the most critical functions in your organization. The role of the business analyst is to:

  • Understand what the business does, how it operates
  • Examine existing business processes
  • Identify gaps in processes, opportunities for improvements and automation
  • Capture Users requirements (URS), create mockups
  • Generate technical requirements, proof of concept solutions
  • Help implement the new processes, features and tools
  • Document improvements, measure, repeat the process

A Business Analyst must also understand the capabilities and constraints of SharePoint and be aware of the latest versions, products and features; this person must have business analysis skills and competencies.

CONCLUSION

By defining the problem, you will build the right solution, plain and simple. That's why the Business Analyst function is so important for SharePoint project management.

 

How to import SQL Plan Directives

Thu, 2015-05-21 10:01

Today I've presented SQL Plan Directives at the SOUG Romandie event. I had a question about the import/export of directives with Data Pump. The idea is that a lot of testing has been done on QA in order to validate the upgrade to 12c. A few directives had bad consequences (see Ludovico Caldara's blog post for an example), so those directives have been disabled. When going to production, they want to start with those directives disabled. Yes, they can be imported. We have to pack them into a staging table, import that table, and unpack them. It's similar to what we do with SQL Plan Baselines.

testcase: a PERMANENT SPD

I'm using the state of the demo in my presentation where I have a PERMANENT directive that I have disabled because I don't want to do too much dynamic sampling.
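For reference, here is a minimal sketch of how such a directive can be disabled; DBMS_SPD.ALTER_SQL_PLAN_DIRECTIVE changes the ENABLED attribute (the ID below is the directive used in this demo):

-- Disable the directive so that it no longer triggers dynamic sampling:
exec dbms_spd.alter_sql_plan_directive(9456295843023884415,'ENABLED','NO');

-- Check the result:
select directive_id,enabled from dba_sql_plan_directives
 where directive_id=9456295843023884415;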

SQL> show user
USER is "DEMO"
SQL> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME             NUM_ROWS LAST_ANA
-------------------- ---------- --------
STGTAB
DEMO_TABLE                 1000 17:35:51

SQL> select table_name,column_name,num_distinct,last_analyzed from user_tab_col_statistics;

TABLE_NAME           COLUMN_NAME  NUM_DISTINCT LAST_ANA
-------------------- ------------ ------------ --------
DEMO_TABLE           A                       2 17:35:51
DEMO_TABLE           B                       2 17:35:51
DEMO_TABLE           C                       2 17:35:51
DEMO_TABLE           D                       2 17:35:51
DEMO_TABLE           SYS_STSPJNMI            2 17:35:51

SQL> select directive_id,state,created,last_modified,last_used,enabled,extract(notes,'//internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

           DIRECTIVE_ID STATE      CREATED  LAST_MOD LAST_USE ENABLED INTERNAL_
----------------------- ---------- -------- -------- -------- ------- ---------
    9456295843023884415 USABLE     17:35:45 18:07:16 18:07:16 NO      PERMANENT

Pack directive

Data Pump can export/import the table, the extended statistics and the statistics, but the SQL Plan Directives are not included. We have to pack them into a staging table in order to export/import that table, and unpack it at the destination site.

SQL> exec dbms_spd.create_stgtab_directive('STGTAB');

PL/SQL procedure successfully completed.

SQL> select dbms_spd.pack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.PACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-----------------------------------------------------------------
                                                                1

DataPump

Here I'll import the DEMO_TABLE table and the STGTAB table, which contains the packed directive, through a loopback database link, into another schema: DEMO_SITE2.

SQL> create database link LOOPBACK connect to demo identified by demo using '&_CONNECT_IDENTIFIER';
Database link created.

SQL> host impdp demo/demo@&_CONNECT_IDENTIFIER network_link=LOOPBACK tables="DEMO.DEMO_TABLE,DEMO.STGTAB" remap_schema='DEMO:DEMO_&&1';

Import: Release 12.1.0.1.0 - Production on Thu May 21 18:07:42 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "DEMO"."SYS_IMPORT_TABLE_01":  demo/********@//192.168.78.114/DEMO14 network_link=LOOPBACK tables=DEMO.DEMO_TABLE,DEMO.STGTAB remap_schema='DEMO:DEMO
_SITE2'
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "DEMO_SITE2"."STGTAB"                            6 rows
. . imported "DEMO_SITE2"."DEMO_TABLE"                     1000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "DEMO"."SYS_IMPORT_TABLE_01" successfully completed at Thu May 21 18:08:18 2015 elapsed 0 00:00:37

Check what is imported:

SQL> show user
USER is "DEMO_SITE2"
SQL> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME             NUM_ROWS LAST_ANA
-------------------- ---------- --------
DEMO_TABLE                 1000 17:35:51
STGTAB

SQL> select table_name,column_name,num_distinct,last_analyzed from user_tab_col_statistics;

TABLE_NAME           COLUMN_NAME  NUM_DISTINCT LAST_ANA
-------------------- ------------ ------------ --------
DEMO_TABLE           A                       2 17:35:51
DEMO_TABLE           B                       2 17:35:51
DEMO_TABLE           C                       2 17:35:51
DEMO_TABLE           D                       2 17:35:51
DEMO_TABLE           SYS_STSPJNMI            2 17:35:51

SQL> select directive_id,type,state,created,last_modified,last_used,enabled,notes from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

no rows selected

I have my statistics (columns and extended stats for the column group) but no directives. If I stop here, a new query will probably create a new SQL Plan Directive, which will become PERMANENT and will trigger Adaptive Dynamic Sampling. I want to avoid that and get my disabled directive from the previous environment.

Unpack directive

SQL> select dbms_spd.unpack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.UNPACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-------------------------------------------------------------------
                                                                  0
Look at the returned number: no directive has been unpacked. Because I'm now in another schema, I have to update the owner. I don't know if there is a better way to do it, but here I update the STGTAB directly:
SQL> update stgtab set c1='DEMO_SITE2';

6 rows updated.

SQL> commit;

Commit complete.

Don't forget to commit: it doesn't work if you don't.
SQL> select dbms_spd.unpack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.UNPACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-------------------------------------------------------------------
                                                                  1

SQL> select directive_id,state,created,last_modified,last_used,enabled,extract(notes,'//internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

           DIRECTIVE_ID STATE      CREATED  LAST_MOD LAST_USE ENABLED INTERNAL_
----------------------- ---------- -------- -------- -------- ------- ---------
   18397758907897483632 USABLE     18:09:12                   NO      PERMANENT

The directive is there. It's disabled, which is what I wanted. The creation date and last used date have been reset. If you see a directive with a status different from NEW and without a LAST_USED, then it's an imported one. Note that the directive ID has changed.

Conclusion

When you have directives in HAS_STATS state, then because Data Pump imports the statistics by default (including extended statistics), you probably don't need to import the SQL Plan Directives. But if you have disabled some directives and want the same in another environment, you have to pack/import/unpack them.

This morning's event was great. Thanks Ludovico for the picture.

Live demos are best! @FranckPachot pic.twitter.com/aJfBO9cX5C

— Ludovico Caldara (@ludodba) May 21, 2015

DBA_SQL_PLAN_DIRECTIVE.LAST_USED

Thu, 2015-05-21 02:30

If you have read Matching SQL Plan Directives and queries using it then you know how to use the '+metrics' format of dbms_xplan.

21:49:54 SQL> explain plan for select distinct * from DEMO_TABLE 
              where a=0 and b=0 and c=0 and d=0;
Explained.

21:50:01 SQL> select * 
              from table(dbms_xplan.display(format=>'basic +rows +note +metrics'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 3823449216

-------------------------------------------------
| Id  | Operation          | Name       | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT   |            |     1 |
|   1 |  SORT UNIQUE NOSORT|            |     1 |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |   500 |
-------------------------------------------------

Sql Plan Directive information:
-------------------------------
  Used directive ids:
    2183573658076085153
Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement
So, when I run this query with predicates on columns A,B,C,D I'm using the directive id 2183573658076085153. 'Using' means doing dynamic sampling in order to get good estimations, because the directive tells us that there is a misestimate when using only the static statistics. Then we can look at that directive:
21:50:11 SQL> select directive_id,state,created,last_modified,last_used 
              from dba_sql_plan_directives where directive_id=2183573658076085153;

           DIRECTIVE_ID STATE      CREATED  LAST_MODIFIED LAST_USED
----------------------- ---------- -------- ------------- -------------
    2183573658076085153 USABLE     21:41:50 21:41:55.0000 21:41:55.0000
Look at the timestamps. I'm at 21:50, and the directive, which was created 9 minutes ago at 21:41:50, was used 5 seconds later at 21:41:55, and that is the last usage.

Let's run the query now. I know that it will use the directive:

21:50:40 SQL> select distinct * from DEMO_TABLE where a=0 and b=0 and c=0 and d=0;

         A          B          C          D
---------- ---------- ---------- ----------
         0          0          0          0
Then I check to be sure that the directive has been used:
21:50:55 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +note'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select distinct * from DEMO_TABLE where a=0 and b=0 and c=0 and d=0

Plan hash value: 3823449216

-------------------------------------------------
| Id  | Operation          | Name       | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT   |            |       |
|   1 |  SORT UNIQUE NOSORT|            |     1 |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |   500 |
-------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

Yes: SPD used, we have dynamic sampling and accurate cardinality estimation.
The SPD has been used by the query that I ran at 21:50:40.

So I will check the LAST_USED timestamp, after making sure that what has been modified in memory is written to the dictionary:

21:50:59 SQL> exec dbms_spd.flush_sql_plan_directive;
PL/SQL procedure successfully completed.

21:51:07 SQL> select directive_id,state,created,last_modified,last_used 
              from dba_sql_plan_directives where directive_id=2183573658076085153;

           DIRECTIVE_ID STATE      CREATED  LAST_MODIFIED LAST_USED
----------------------- ---------- -------- ------------- -------------
    2183573658076085153 USABLE     21:41:50 21:41:55.0000 21:41:55.0000
The LAST_USED has not been updated.

LAST_USED

This is not a bug. Updating the dictionary for all directives used during the previous 15 minutes (the automatic flush frequency) would be too expensive, so it is not done at each flush. The LAST_USED goal is to manage SPD retention - drop those that have not been used for a while (53 weeks by default) - and the retention is defined as weeks. So the LAST_USED is updated only if it is one week higher than the previous LAST_USED. Not exactly one week but 6.5 days.
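As a side note, here is a minimal sketch (not part of the original demo) of how the retention can be checked or changed through the DBMS_SPD preferences:

-- Current retention (53 weeks by default):
select dbms_spd.get_prefs('SPD_RETENTION_WEEKS') from dual;

-- Example: reduce the retention to 20 weeks:
exec dbms_spd.set_prefs('SPD_RETENTION_WEEKS','20');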

6.5 days

In fact, when LAST_USED is updated, it adds 6.5 days to the current timestamp. We can see that in the underlying table:

SQL> alter session set nls_date_format='dd-mon-yyyy hh24_mi:ss';
Session altered.

SQL> select type,state,created,last_modified,last_used from sys.opt_directive$ where dir_id=2183573658076085153;
      TYPE      STATE CREATE  LAST_MODIFI LAST_USED
---------- ---------- ------- ----------- --------------------
         1          3 19-may- 19-may-2015 26-may-2015 09:41:26
Last used as displayed by DBA_SQL_PLAN_DIRECTIVES is 19-may-2015 at 21:41:26, but the internal table OPT_DIRECTIVE$ stores 26-may-2015 09:41:26, which is 6.5 days later. The view subtracts 6.5 days from it in order to expose it to us. But we must keep in mind that this timestamp will not be updated until 26-may-2015.

Conclusion: we cannot rely on LAST_USED when we want to know if a directive has been used recently. We must use explain plan for that. Note that this implementation is not new: it's exactly the same as the LAST_EXECUTED column in DBA_SQL_PLAN_BASELINES. It's an optimization, because tracking each usage would be too expensive.

what that in-memory term is about, and what not

Thu, 2015-05-21 02:10

Everybody is talking about in-memory databases these days. And everybody is talking about columnar stores for sets of data, because they can be a benefit for analytic queries. And a lot of people start mixing these terms, not realizing that these topics are not interchangeable.

 

Alfresco: video thumbnails and transformations

Thu, 2015-05-21 01:00


By default, Alfresco supports the in-browser preview of some video formats like mp4, but not of others like mkv, avi or wmv. Even if Google Chrome can play mkv files, for example, when you use the action "View in Browser" (which is not the same thing as the preview), Chrome will not try to play the mkv file but will download it instead. That's why, if you upload an mp4 video to Alfresco, you will most likely be able to watch this video directly in your browser. In this blog, I will explain how to configure an external tool to take care of the thumbnail creation and transformations for your videos. To achieve that, we will install and configure ffmpeg. In addition, you can also very easily configure Alfresco to embed an external video player like "FlowPlayer" that would take care of playing all video formats directly, streaming from Alfresco, without relying on your browser. Basically, this is done by replacing the preview page for some Mime types, but I will not describe it in detail here.


For this blog post, I will use /opt/alfresco-4.2.c as the $ALF_HOME environment variable: the folder where Alfresco has been installed. The current release of ffmpeg is version 2.6.3; this setup works at least with Alfresco v4.2.x and v5.x.


I. Installation of ffmpeg


The ffmpeg binaries/executables are available on the ffmpeg website. From this website, download the "Static" build for Windows, Linux or Mac, depending on the Operating System on which Alfresco has been installed. For this example, I will use a Linux host (RedHat 6.x, 64 bits), but I can assure you that it also works properly on Windows. I have already installed ffmpeg with several versions of Alfresco, from v4.2.x to 5.x.

[alfresco ~]$ cd /opt
[alfresco /opt]$ wget http://johnvansick...elease-64bit-static.tar.xz
[alfresco /opt]$ tar -xJf ffmpeg-release-64bit-static.tar.xz
[alfresco /opt]$ chmod -R 755 ffmpeg-2.6.3-64bit-static
[alfresco /opt]$ chown -R alfresco:alfresco ffmpeg-2.6.3-64bit-static
[alfresco /opt]$ mv ffmpeg-2.6.3-64bit-static /opt/alfresco-4.2.c/ffmpeg


These commands download, extract, change the permissions/ownership of and rename the folder to something more friendly (and move it to a more appropriate location).


II. Configuration of Alfresco


Once ffmpeg has been installed, the next step is to configure Alfresco to use it. The first thing to do is to add some parameters in the well-known alfresco-global.properties file. Don't hesitate to customize these parameters, remove the lines that aren't needed, and so on.

[alfresco /opt]$ cd /opt/alfresco-4.2.c/tomcat/shared/classes

[alfresco /classes]$ cat alfresco-global.properties
### Begin of the file with your custom parameters ###
### E.g.: dir.root, db.driver, db.username, aso... ###

### FFMPEG executable location ###
ffmpeg.exe=/opt/alfresco-4.2.c/ffmpeg/ffmpeg

### Video Thumbnails parameters ###
# ffmpeg.thumbnail
content.transformer.ffmpeg.thumbnail.priority=50
content.transformer.ffmpeg.thumbnail.extensions.3g2.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.3gp.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.asf.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.avi.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.avx.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.flv.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mov.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.movie.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mp4.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mpeg2.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mpg.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.ogv.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.wmv.jpg.supported=true

### Video Transformations parameters ###
# ffmpeg.flv
content.transformer.ffmpeg.flv.priority=50
content.transformer.ffmpeg.flv.extensions.3g2.flv.supported=true
content.transformer.ffmpeg.flv.extensions.3gp.flv.supported=true
content.transformer.ffmpeg.flv.extensions.asf.flv.supported=true
content.transformer.ffmpeg.flv.extensions.avi.flv.supported=true
content.transformer.ffmpeg.flv.extensions.avx.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mov.flv.supported=true
content.transformer.ffmpeg.flv.extensions.movie.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mp4.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mpeg2.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mpg.flv.supported=true
content.transformer.ffmpeg.flv.extensions.ogv.flv.supported=true
content.transformer.ffmpeg.flv.extensions.wmv.flv.supported=true
# ffmpeg.mp4
content.transformer.ffmpeg.mp4.priority=50
content.transformer.ffmpeg.mp4.extensions.3g2.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.3gp.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.asf.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.avx.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mov.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.movie.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mpeg2.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mpg.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.wmv.mp4.supported=true
content.transformer.avi.mp4.extensions.mpg.mp4.supported=true
content.transformer.ogv.mp4.extensions.wmv.mp4.supported=true
# ffmpeg.mp3
content.transformer.ffmpeg.mp3.priority=50
content.transformer.ffmpeg.mp3.extensions.aiff.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.au.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.m4a.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.oga.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.wav.mp3.supported=true


In the above configuration, the parameter "ffmpeg.exe=/opt/alfresco-4.2.c/ffmpeg/ffmpeg" is the location of the binary file named ffmpeg. If you install Alfresco and ffmpeg on a Windows host, then this parameter looks like: "ffmpeg.exe=C:/Alfresco-4.2.c/ffmpeg/bin/ffmpeg.exe"


Once this is done, you need to enable the thumbnail & transformation extensions in Alfresco and this is done using the following steps:

[alfresco /classes]$ cd /opt/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension
[alfresco /extension]$ mv video-thumbnail-context.xml.sample video-thumb-context.xml
[alfresco /extension]$ mv video-transformation-context.xml.sample video-transf-context.xml


If you want, you can parse these two files to understand what Alfresco will do with ffmpeg but basically these files define the commands and options that will be used by Alfresco.


And that's it, Alfresco has been configured to use ffmpeg for thumbnails creation and transformations of your video. For Alfresco to take these changes into account, simply restart your Application Server using an Alfresco service:

[alfresco ~]$ service alfresco restart


Or using Alfresco default scripts:

[alfresco ~]$ /opt/alfresco-4.2.c/alfresco.sh stop
[alfresco ~]$ /opt/alfresco-4.2.c/alfresco.sh start



To check if ffmpeg is working properly, simply open Alfresco Share and browse your repository to find a video in avi or wmv format: a thumbnail should now be present (if not, just refresh the page...). You can also try to upload a video: if a thumbnail is created for this video, then ffmpeg is working!



Row Store vs Column Store in SAP HANA

Wed, 2015-05-20 00:00

The SAP HANA database allows you to create your tables in Row or Column Store mode. In this blog, I will demonstrate that each method has its advantages and disadvantages and should be used for specific cases.

Through two kinds of tests, I will show you that the Row Store mode should be used for simple SELECT queries, without aggregation, and that the Column Store mode should be used for complex SELECT queries containing aggregations.

If you want more information regarding the Column Store or the In-memory technologies, don't hesitate to attend the next dbi services event:

http://www.dbi-services.com/index.php/newsroom-e/events/event-l-in-memory-r-boost-your-it-performance
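Before looking at the tests, here is a minimal sketch of how the storage mode is chosen at creation time in SAP HANA. The column layout below is only an assumption for illustration; the real SALES tables are described later in this post.

-- Row Store table: data is stored row by row, good for single-record access.
CREATE ROW TABLE SALES_ROW (
  SALES_ID INTEGER PRIMARY KEY,
  CUST_ID  INTEGER,
  AMOUNT   DECIMAL(15,2)
);

-- Column Store table: data is stored column by column, good for aggregations.
-- The RANGE partitioning on CUST_ID mimics the setup described below.
CREATE COLUMN TABLE SALES_COLUMN (
  SALES_ID INTEGER,
  CUST_ID  INTEGER,
  AMOUNT   DECIMAL(15,2)
) PARTITION BY RANGE (CUST_ID) (PARTITION 0 <= VALUES < 1000000, PARTITION OTHERS);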

Test 1: Simple SELECT query

Goal of the test

This test will show you the difference in performance between a Row Store and a Column Store table for a simple SQL query.

Description of the test

A SELECT query will be sent to the database and we will check the server response time.

SQL Query

Using a Row Store table

The SQL is the following:

[Screenshot: simple SELECT query against the SALES_ROW table]

Using a Column Store table

The SQL is the following:

[Screenshot: simple SELECT query against the SALES_COLUMN table]
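The exact statements are only visible in the screenshots; a hypothetical example of such a simple query (no aggregation, selective predicate) would be:

-- Single-record lookup: the Row Store usually answers this faster.
SELECT * FROM SALES_ROW    WHERE CUST_ID = 12345;
SELECT * FROM SALES_COLUMN WHERE CUST_ID = 12345;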

Tables

Row Store Table

Here is the information regarding the Row Store table used in the test.

Name:                 SALES_ROW

Table type:           Row Store

Row count:            10 309 873

Index:                1

Partition:            0 (SAP HANA does not allow partitioning of Row Store tables)

[Screenshots: SALES_ROW table definition and runtime information]

Column Store Table

Here is the information regarding the Column Store table used in the test.

Name:                 SALES_COLUMN

Table type:           Column Store

Row count:            10 309 873

Index:                0 (SAP HANA automatically applies an index if needed)

Partition:            1 RANGE partition on CUST_ID

[Screenshot: SALES_COLUMN table definition]

Result of the test

Using the Row Store table

[Screenshot: server response time of the simple query on SALES_ROW]

Using the Column Store table

[Screenshot: server response time of the simple query on SALES_COLUMN]

Test 2: Complex SELECT query

Goal of the test

This test will show you the difference in performance between a Row Store and a Column Store table for a complex SQL query.

Description of the test

A SELECT query will be sent to the database and we will check the server response time.

SQL Query

Using a Row Store table

The SQL is the following:

[Screenshot: complex SELECT query against the SALES_ROW table]

Using a Column Store table

The SQL is the following:

[Screenshot: complex SELECT query against the SALES_COLUMN table]
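Here again the real statement is in the screenshots; a hypothetical example of a complex, aggregating query would be:

-- Analytic query: full scan plus aggregation, where the column store shines.
SELECT   CUST_ID, COUNT(*) AS NB_SALES, SUM(AMOUNT) AS TOTAL_AMOUNT
FROM     SALES_COLUMN
GROUP BY CUST_ID
ORDER BY TOTAL_AMOUNT DESC;
-- The same statement is run against SALES_ROW for the comparison.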

Tables

Row Store Fact Table

Here is the information regarding the Row Store table used in the test.

Name:                 SALES_ROW

Table type:           Row Store

Row count:            10 309 873

Index:                2

Partition:            0 (SAP HANA does not allow partitioning of Row Store tables)

Column Store Fact Table

Here is the information regarding the Column Store table used in the test.

Name:                 SALES_COLUMN

Table type:           Column Store

Row count:            10 309 873

Index:                0 (SAP HANA automatically applies an index if needed)

Partition:            1 RANGE partition on CUST_ID

Result of the test

Using the Row Store tables

[Screenshot: server response time of the complex query on SALES_ROW]

Using the Column Store tables

[Screenshot: server response time of the complex query on SALES_COLUMN]

Conclusion

Row and Column store modes in SAP HANA should be used in two different contexts:

 - Tables in Row Store mode should be used for simple SELECT queries WITHOUT any aggregation functions.

 - Tables in Column Store mode are powerful when they are used for analytical queries or views using aggregation functions (GROUP BY, ...).

The performance can be highly optimized if the tables selected in the queries have the right store mode.