
Feed aggregator

Blog : Solutions Cloud Oracle

Jean-Philippe Pinte - Wed, 2015-06-03 08:05
Find my new blog dedicated to Oracle Cloud solutions (Private, Public & Hybrid):
http://oraclecloudsolutions.blogspot.com

URL Encoding and other from Groovy

Angelo Santagata - Wed, 2015-06-03 07:44

There are times when you want to execute some code in Groovy which Oracle Sales Cloud's Groovy interpreter doesn't like. Very common examples are URL encoding and Base64 encoding, but there are many others.


Native Groovy supports both base64 encoding/decoding and URL Encoding/Decoding


e.g.



String encoded = s.bytes.encodeBase64().toString()



Alas, the Groovy interpreter within Sales Cloud supports neither the Base64 encoding/decoding classes nor the URL encoding classes. Thankfully there is an easy workaround: Sales Cloud does support the ability to call a SOAP service, and given that many Sales Cloud installations will have a Java Cloud SX instance available to them, it's quite easy to create a Java SOAP service, deploy it to JCSSX, and then call it from Sales Cloud to do the things that Sales Cloud's Groovy won't allow you to do.



https://4.bp.blogspot.com/-W_PVvNpvJ_Y/VW7a8Ow9DpI/AAAAAAAAMq8/FOvtRAKulGM/s400/base64image.jpg


Steps to recreate this

  1. Create a new project within your favourite IDE (I use JDeveloper 11g for Sales Cloud development, NetBeans for other things)

  2. Ensure your project has support for JAX-WS web services; within JDeveloper, create a JEE project.

  3. Within your project create a new Java class; I've called mine PTSEncoder

  4. Now cut and paste the following code into this class; obviously rename the class if you haven't used the same name as I have

package oracle.pts.encoder;

import java.io.IOException;
import java.io.UnsupportedEncodingException;

import java.net.URLDecoder;
import java.net.URLEncoder;

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.bind.DatatypeConverter;

@WebService
public class PTSEncoder {

    public PTSEncoder() {
        super();
    }

    /**
     * @param s - String to be translated
     * @return
     */
    @WebMethod(operationName = "encode")
    public String utf8encode(String s) {
        String result = "";
        try {
            result = URLEncoder.encode(s, "UTF-8");
            System.out.println("Encoded URL " + result);
        } catch (UnsupportedEncodingException e) {
            System.err.println(e);
        }
        return result;
    }

    /**
     * @param s - String to be translated
     * @param enc - The name of a supported character encoding
     * @return
     */
    @WebMethod(operationName = "encodeWithEncType")
    public String ptsEncodeWithEncType(String s, String enc) {
        String result = "";
        try {
            if (enc == null || enc.length() <= 0) {
                enc = "UTF-8";
            }
            result = URLEncoder.encode(s, enc);
            System.out.println("Encoded URL " + result);
        } catch (UnsupportedEncodingException e) {
            System.err.println(e);
        }
        return result;
    }

    /**
     * @param s - String to be translated
     * @return
     */
    @WebMethod(operationName = "decode")
    public String ptsDecode(String s) {
        String result = "";
        try {
            result = URLDecoder.decode(s, "UTF-8");
            System.out.println("Decoded URL " + result);
        } catch (UnsupportedEncodingException e) {
            System.err.println(e);
        }
        return result;
    }

    /**
     * @param s - String to be translated
     * @param enc - The name of a supported character encoding
     * @return
     */
    @WebMethod(operationName = "decodeWithEncType")
    public String ptsDecodeWithEncType(String s, String enc) {
        String result = "";
        try {
            if (enc == null || enc.length() <= 0) {
                enc = "UTF-8";
            }
            result = URLDecoder.decode(s, enc);
            System.out.println("Decoded URL " + result);
        } catch (UnsupportedEncodingException e) {
            System.err.println(e);
        }
        return result;
    }

    /**
     * @param s
     * @return
     * @throws IOException
     */
    @WebMethod(operationName = "encodebase64")
    public String ptsEncodeBase64(String s) throws IOException {
        return DatatypeConverter.printBase64Binary(s.getBytes());
    }

    /**
     * @param s
     * @return
     * @throws IOException
     */
    @WebMethod(operationName = "decodebase64")
    public String ptsDecodeBase64(String s) throws IOException {
        String result = new String(DatatypeConverter.parseBase64Binary(s));
        return result;
    }

    // Simple tester
    @WebMethod(exclude = true)
    public static void main(String[] args) {
        PTSEncoder pTSEncode = new PTSEncoder();
        pTSEncode.utf8encode("Angelo Woz here");
        pTSEncode.ptsEncodeWithEncType("Angelo Woz Here", "UTF-8");
        pTSEncode.utf8encode("------------");
        pTSEncode.ptsDecode("Jo was here");
        pTSEncode.ptsDecodeWithEncType("Jo Was here", "UTF-8");

        try {
            System.out.println("Encode Angelo = " + pTSEncode.ptsEncodeBase64("Encode Angelo"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}


For interest, I created this class by first writing the methods and then using JDeveloper's wizard to convert the class and its methods into a SOAP web service. The class uses Java annotations which tell the JEE server that most (not all) of these methods are web service operations. This is done using server-side injection at deployment time.

  1. If you created your project within JDeveloper as a web/JEE project, you can simply deploy it as is to your JCSSX or a local WLS application server

    1. Right Mouse Click on the Project, deploy to your deployment profile

    2. Deploy to Application Server

    3. Choose your application server and deploy

    4. Check the deployment

You can now test the SOAP service using a SOAP testing tool like HTTP Analyzer or SoapUI. The WSDL of the service will be at the context root plus the web service name. In JDeveloper this can be found by right-clicking on the web service class, choosing Java Web Services Editor, and looking at the generation options



So in my instance the WSDL will be available at


https://<JCSSXServer>.java.us2.oraclecloudapps.com/PTSEncoder/PTSEncoderService?wsdl



  1. You can put this into SoapUI or HTTP Analyzer and test away

  2. Now, lastly, you can register it in Sales Cloud as a web service and use it from Groovy

    1. Activate a sandbox, that way you can undo changes if you want

    2. Navigate to Application Composer

    3. Navigate to the application you will be using the SOAP web service from (Common, Sales, etc.)

    4. Select Web Services

    5. Enter a name for the web service; this name becomes the Groovy package name

    6. Security: None (for testing only)

    7. Then finally use the SOAP service from any Groovy script you desire; remember the Palette helps you find the different services registered on your system



Sample Groovy Code

def base64result = adf.webServices.PTSBase64.encodebase64("Angelo Woz Here")


Final footnote

This example shows how to execute Base64 encoding externally using Java Cloud Service SaaS eXtensions (JCSSX); the example could easily have used Java Cloud Service, or some other cloud service. More importantly, you can code extensions using Java Cloud Service and call them from Sales Cloud. Given that most JCSSX instances are going to be co-located within the same datacentre, this makes the operation quick, efficient and very flexible!

Lastly, the service I deployed didn't contain any security because it's a stateless service and it's OK for anyone to call it; that said, in a production environment I would still add a modicum of security to the service just to make sure someone doesn't try to abuse it.

Angelo
Enjoy!


Docker on Oracle Linux

Marco Gralike - Wed, 2015-06-03 05:54
As a reminder for myself and for those who might have missed the info. Here…

SQLDeveloper and Userdefined datatypes in tables

Darwin IT - Wed, 2015-06-03 05:49
You might have tables that contain columns with a user-defined datatype. For instance, from 11g onwards SOA Suite contains Integration B2B, with its own data model.

In that data model, B2B works with Advanced Queuing: the queue table ip_qtab is based on the IP_MESSAGE_TYPE Oracle type, which is defined like:
create or replace type IP_MESSAGE_TYPE as OBJECT (
MSG_ID VARCHAR2(128),
INREPLYTO_MSG_ID VARCHAR2(128),
FROM_PARTY VARCHAR2(512),
TO_PARTY VARCHAR2(512),
ACTION_NAME VARCHAR2(512),
DOCTYPE_NAME VARCHAR2(512),
DOCTYPE_REVISION VARCHAR2(512),
MSG_TYPE INT,
PAYLOAD CLOB,
ATTACHMENT BLOB
);
In the queue table you then have a payload column based on this type. When you do a select on such a table, the payload column actually has several attributes. Tools like PL/SQL Developer from Allround Automations or TOAD apparently detect that the column is based on the Oracle type, so they show the separate attributes in the grid.

SQL Developer (currently 4.3) apparently does not do so. But it is quite easy to add this information to your select. For a select on the queue table (actually, with AQ you shouldn't query the queue table itself, but the accompanying AQ$<queuetable> view) it will look like:

SELECT QTB.QUEUE,
QTB.MSG_ID,
QTB.CORR_ID,
QTB.MSG_PRIORITY,
QTB.MSG_STATE,
QTB.RETRY_COUNT,
QTB.USER_DATA.MSG_ID MSG,
QTB.USER_DATA.INREPLYTO_MSG_ID INREPLYTO_MSG_ID,
QTB.USER_DATA.FROM_PARTY FROM_PARTY,
QTB.USER_DATA.TO_PARTY TO_PARTY,
QTB.USER_DATA.ACTION_NAME ACTION_NAME,
QTB.USER_DATA.DOCTYPE_NAME DOCTYPE_NAME,
QTB.USER_DATA.DOCTYPE_REVISION DOCTYPE_REVISION,
QTB.USER_DATA.MSG_TYPE MSG_TYPE,
QTB.USER_DATA.PAYLOAD PAYLOAD,
QTB.CONSUMER_NAME,
QTB.PROTOCOL
FROM AQ$IP_QTAB QTB;

You see that the trick is just to add the attribute as a separate identifier to the USER_DATA column, using the dot-notation.

If you're certain that the selected rows contain a valid XML document in the Payload attribute you could provide that attribute to the xmltype() constructor:
SELECT QTB.QUEUE,
QTB.MSG_ID,
QTB.CORR_ID,
QTB.MSG_PRIORITY,
QTB.MSG_STATE,
QTB.RETRY_COUNT,
QTB.USER_DATA.MSG_ID MSG,
QTB.USER_DATA.INREPLYTO_MSG_ID INREPLYTO_MSG_ID,
QTB.USER_DATA.FROM_PARTY FROM_PARTY,
QTB.USER_DATA.TO_PARTY TO_PARTY,
QTB.USER_DATA.ACTION_NAME ACTION_NAME,
QTB.USER_DATA.DOCTYPE_NAME DOCTYPE_NAME,
QTB.USER_DATA.DOCTYPE_REVISION DOCTYPE_REVISION,
QTB.USER_DATA.MSG_TYPE MSG_TYPE,
xmltype(QTB.USER_DATA.PAYLOAD) PAYLOAD,
QTB.CONSUMER_NAME,
QTB.PROTOCOL
FROM AQ$IP_QTAB QTB;

And of course this works for other tables as well; this is just a quick example for a table based on an object type. Unfortunately I don't have any example data in the queue at the moment.
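As a minimal sketch with hypothetical names (an ADDRESS_TYPE object and a CUSTOMERS table that you would replace with your own), the same dot-notation applies to any table with an object-type column, as long as you give the table an alias:

CREATE OR REPLACE TYPE ADDRESS_TYPE AS OBJECT (
  STREET VARCHAR2(100),
  CITY   VARCHAR2(50)
);
/

CREATE TABLE CUSTOMERS (
  ID      NUMBER,
  ADDRESS ADDRESS_TYPE
);

-- The table alias (CST) is required to reference the attributes with the dot-notation
SELECT CST.ID,
       CST.ADDRESS.STREET STREET,
       CST.ADDRESS.CITY   CITY
FROM CUSTOMERS CST;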

Oracle Documentation URLs : What I would like to see!

Tim Hall - Wed, 2015-06-03 04:42

After my recent rant about broken URLs, I thought it would be sensible to say something a little more constructive, so this is what I would do if I were asked to structure the documentation. Other opinions are valid. :)

Base URL: I'm assuming the base URL for the database documentation will never change again from its current value.

Version: Next comes a version. Personally I would have a separate version for every patchset, so you can easily flick between them to see the variations in the documentation, but I would also have a concept of the "latest" for each major release and the "overall latest" version of the page, so you can always link to the most up to date version of the document if you want. That means, whatever happens with new releases, you will always have the link pointing to the latest page for that feature, unless of course the feature has been removed. All previous versions of the docs will remain and the URLs will still be valid. Believe it or not, sometimes people really do need to read the old documentation!

Book: Some indicator of the book the page belongs to. Oracle are already doing this with things like “DBSEG” for Database Security Guide. This must never change!

Page: A slug representing the page. It would be nice if these were human readable, like “audit_admin”, but if they want to use those crappy GUIDs, that’s fine, provided that they are cast in stone as the ID for that page, regardless of version forever. The GUID must not be unique for each version of the page, or it makes it impossible to easily switch between the same page for different database versions.

Internal anchors: Some of the internal anchors in pages have some odd behaviour now. You click a link, which takes you to the correct part of the page, but the URL bar still shows the top-level page URL. As a result, if you grab the URL for a link in your blog, you are not really pointing to the correct place on the page. So you have to find the original link you clicked and copy that, so you are really getting to the link you want. Very annoying! Internal anchors should be consistent, visible and live forever. If you want to change the anchor, you can add a new one in addition to the old one. Nothing wrong with that! Once again, the ugly GUIDs are acceptable here, but only if the GUID for an anchor never changes, so to read the same section of text in another DB version, you only have to change the version part of the URL.

As an example of all this, let’s think about the “Administering the Audit Trail” page from the 12c documentation and show how this could be handled going forward.

  • “/12.1.0.1/DBSEG/audit_admin.htm”
  • “/12.1.0.2/DBSEG/audit_admin.htm”
  • “/12.1-latest/DBSEG/audit_admin.htm” : Points to 12.1.0.2 unless a newer patchset is released for the 12.1 release.
  • “/12.2.0.1/DBSEG/audit_admin.htm”
  • “/12.2-latest/DBSEG/audit_admin.htm” : Points to 12.2.0.1 until a newer patchset is released for the 12.2 release.
  • “/latest/DBSEG/audit_admin.htm” : Points to the very latest version of the page. The latest patchset for the latest release (12.2.0.1, 13.1.0.1 etc.).

This would allow all versions of the docs to coexist. You could switch between them easily, as in most cases the only thing you ever need to change is the version number. A perfect example of this can be seen in the MySQL documentation, which is organised beautifully. It's so simple: the pages include version links, so you can switch between versions with a single click.

I appreciate there are situations where things would not run to plan, like when features are removed, or expanded to the point where pages are split into several new pages etc. These could still be catered for if a sensible approach were taken, like the original page becoming a “link page” to all the expanded content.

I would not expect Oracle to retro-fit all the old documentation, as that would be a massive task and break even more links, but something more sensible and future-proof needs to happen compared to what we have seen in recent years, which to be brutally frank has been a clusterfuck on a monumental scale!

I know Oracle are taking steps to address this issue. I just hope their solution is not more smoke and mirrors and actually starts to resemble a basic filing system!

Cheers

Tim…

Any DBAs out there thinking of Optimal Flexible Architecture (OFA)? :)

Update: Apart from being ugly, I have no real problem with the GUID. My only worry is Oracle will assign a new GUID for a page for each version (typo correction, release, DB version etc.) of the same page, thus making the whole switching between DB versions by altering one part of the URL impossible. If they do this constant change of the GUIDs, it will also result in one of two things.

  1. If the old version of the page is not kept forever, you will have yet more broken links.
  2. If pages are kept forever, that’s better, but if a new GUID is created for every small revision of the same page (within a database release/version), you will continue to point to the old uncorrected page, which will lower the quality of your links.

So the GUIDs themselves are not the problem. It's how they "could" be used that "could" be the problem. Think about the possible scenarios during the lifespan of a single section of the documentation and I think you will see how disastrous this could be.

 


Drop table if exists

Laurent Schneider - Wed, 2015-06-03 03:16

The syntax that you are looking for is
docs.oracle.com/…/drop-table.html

 
DROP [TEMPORARY] TABLE [IF EXISTS]
    tbl_name [, tbl_name] ...
    [RESTRICT | CASCADE]

Wait, this does not work !


SQL> drop table if exists t;
drop table if exists t
              *
ERROR at line 1:
ORA-00933: SQL command not properly ended

Okay. It was the Oracle MySQL book 😉

In the Oracle database, I have created my own droptableifexists script.

I went for a SQL*Plus no-plsql approach. PL/SQL is also possible but it generated different error messages (ORA-06512: at line 1) and different feedback (PL/SQL procedure successfully completed.)

So I check the dictionary, put a command to drop in the sqlplus buffer if a table exists, then run that command first.

droptableifexists.sql


set feed off ver off pages 0 newp none
def cmd="select 'OK: Table does not exist' from dual"
col cmd new_v cmd nopri
select 'drop table "'||table_name||'"' cmd 
from user_tables 
where table_name='&1';
1 select
del *
1 &cmd
set feedb 6 head off
/
set head on
del *
undef cmd
col cmd clear

Ok, let’s try


SQL> create table t(x number);

Table created.

SQL> @droptableifexists T

Table dropped.

SQL> @droptableifexists T
OK: Table does not exist

A PL/SQL approach could be


for f in (
  select 'drop table "'||table_name||'"' cmd 
  from user_tables where table_name='T')
loop 
  execute immediate f.cmd;
end loop;

Try it :


SQL> create table t(x number);

Table created.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

A bit easier to read. The same has to be done for users, views and so on; a sketch for views is shown below.
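For instance, a possible sketch for views, following exactly the same pattern (untested here, but using user_views instead of user_tables), could be:


for f in (
  select 'drop view "'||view_name||'"' cmd
  from user_views where view_name='V')
loop
  execute immediate f.cmd;
end loop;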

PS: there are also other versions around that catch ORA-00942, but ORA-00942 may become ORA-00943 in the next release; try drop table t;; in 11g and 12c to see that such things really happen!

Ways to improve your SharePoint Intranet

Yann Neuhaus - Wed, 2015-06-03 02:23
Companies today are taking different paths to communicate better with their staff, boost productivity, and improve the effectiveness of their strategy and business direction.
SharePoint has become a reference tool in this area; however, many organizations still struggle to get the most out of the features it offers and find it difficult to increase adoption. In this article, we will look at the key points for improving your SharePoint environment.
USE "FOLLOW" AND "SHARE"

SharePoint 2013 has some great features that are often overlooked; for example, the "follow" and "share" features are simple ways to track which documents are interesting and share information with others.
FOLLOW: users are notified when a change is made to anything they are following whether conversations or documents.
SHARE: allows individuals to share relevant sites and documents with colleagues they feel will benefit from the content. Both these functionalities are ways for people in the organisation to take control of getting up to date information and distributing what they have learnt to others.

followShare

 








It could be like a gossip.. ah ah .. Follow and Share!
MY SITE

SharePoint "My Site" can be compared to a user’s Facebook page for the workplace.
MY SITE: serves as a user’s personal page for sharing information about their skills, expertise and interests with others in the company. It also gives a user a central location to manage and store contacts, documents and anything they want.
A "My Site" can be personalized to be unique to each user.

MySite

Share carefully, "My Site" is a place to share professional information!

SHAREPOINT MOBILITY

In this mobile age it's fair to say the biggest improvement any organisation can make to increase usage of an intranet is to make it mobile compatible. Companies may want to consider rolling these features out slowly, or even start with a test project in order to get feedback from business people.
The key is to involve the users themselves and improve adoption step by step.

phonemobile

Access the information and documents needed from everywhere!

STEPS IMPORTANCE

Companies that get their SharePoint implementation right often start simply, with many of the features disabled.
Taking the time to do things the right way, following best practices, step after step, will drive the SharePoint implementation to success; leave the "bells and whistles" until last.

Example:   

  • Event updates
  • Slowly replacing the existing intranet
  • Create document management
  • Create forms management
  • Use business process and workflow management
  • Start sharing B.I. dashboards and reports
  • ...

the_fall

"The shortest way to do many things is to do only one thing at once"

PROVIDE TRAINING AND SUPPORT

To get the most out of SharePoint, provide training and support to staff; this is the key to sustainable adoption.
Sessions must be adapted to the different user profiles: business user, power user, site owner, site collection admin. The minimum knowledge level should be the business user level.
Provide knowledge resources such as FAQs, checklists, etc.

Here are 2 points from Microsoft regarding Training and Support:

 

Support users by creating a training plan

  • Establish short, just-in-time training options for users
  • Ensure that your site owners are properly trained before giving them site ownership
  • Provide training to content contributors to ensure effective content management
  • Create a site owner community to enable users to help each other

Ensure ongoing success by creating a user support plan

  • Establish a contact person for every page
  • Establish a SharePoint Center of Excellence within your organization to provide high-end support for users
  • Survey users on a regular basis to gather feedback and establish metrics
  • Ensure content gets moved from legacy platforms to SharePoint in a planned manner

more details: SharePoint Adoption Guide - Microsoft

 

CONCLUSION

End user adoption is the key of your SharePoint Intranet success!






WebCenter Content - A new Delivery Channel in 11.1.1.9.0 !!

Tim Dexter - Wed, 2015-06-03 02:13

Hi Everyone

One of the new features introduced in 11.1.1.9.0 is WebCenter Content (WCC) Server as a delivery channel. Prior to this release, we could manage delivery to the WCC (formerly UCM) server using WebDAV, as explained by Tim in his blog "BI Publisher and WebDAV... done!". However, there were a few restrictions:

  • No way to include standard or custom metadata. Therefore, there was no description for the documents submitted and searching these documents in WCC was not convenient.
  • WebDAV uses folders to store the documents, and access to these folders has to be pre-configured by the WCC administrator. Users cannot select a security group or account at the time of scheduling.

Moreover, with WCC as a delivery channel you have the ability to use idc(s), http(s) and JAX-WS protocols. Refer to the documentation on RIDC protocols for more details on these protocols.

We now have videos on the BI Publisher YouTube channel that demonstrate how the BI Publisher integration with WCC works. The videos are split in two parts: Part 1 (Title: BIPublisherWCC Part1) covers an overview and explains some of the WCC concepts, while Part 2 (Title: BIPublisherWCC Part2) walks you through all the steps necessary to make the integration work.

You can find additional details in the documentation guide. Navigate to the Books link and check the Administrators Guide for setting up delivery destinations and the Data Modeling Guide for Custom Metadata & Bursting related information. 

I am sure you will find this new feature very easy to configure and very useful for maintaining documents in WCC. Have a nice day !! 

Categories: BI & Warehousing

SQL Server 2016: Live query statistics

Yann Neuhaus - Tue, 2015-06-02 16:44

During my tests of the SQL Server 2016 CTP2, I noticed an additional icon in SQL Server Management Studio, close to the Actual Execution Plan button, as shown below:

 

blog_48_-_1_-_query_live_statistics_icon

 

I decided to take a look at the SQL Server 2016 BOL and found a topic about this feature. It seems to be a fun feature. So after that, I decided to test it with one of my financial queries (developed for one of my customers) which usually takes a long time to run.

Let's go ahead: after running my long query, this first test was in fact inconclusive, because after opening a new tab I got stuck with the following message even after stopping my query execution:

 

blog_48_-_2_-_message_waiting_for_a_query_plan

 

Maybe my query is too complex and I will have to investigate in the near future. Update 04.06.2015: it seems that this is a bug in the current release of SQL Server 2016 (CTP2), so I will try again later, perhaps with the next CTP.

Let's continue by cutting my big query into smaller pieces of code; the test ran successfully this time. Indeed, I was able to see a "progressive" query plan showing how long operators are taking as well as how far along they are in their operations, with some awesome animation.

 

blog_48_-_5_-_live_query_statistics

 

Basically, we get an "enhanced" execution plan tree. The dotted lines point out the operations in progress, with some additional information such as the current duration for each operator. Likewise, we may retrieve all the other usual information that concerns a query execution plan.

However, I noticed during my tests that the percentage calculation doesn't work on the operators if the row estimate is not correct (it will get stuck at 100% while the time keeps ticking away). At this point, I remembered that I faced the same problem with the DMV sys.dm_exec_query_profiles introduced by SQL Server 2014 (please take a look at my blog post here). Let's perform the same test by using the sys.dm_exec_query_profiles DMV; as expected, I noticed the same duration issue when cardinality estimation was wrong.
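For reference, a minimal sketch of such a DMV query (the session_id value is just an example, and the session running the monitored query must be collecting its profile, for instance because the actual execution plan was requested) could look like this:

SELECT session_id,
       node_id,
       physical_operator_name,
       row_count,
       estimate_row_count,
       CASE WHEN estimate_row_count > 0
            THEN CAST(100.0 * row_count / estimate_row_count AS DECIMAL(18,1))
       END AS percent_complete
FROM sys.dm_exec_query_profiles
WHERE session_id = 53   -- example value: the session running the monitored query
ORDER BY node_id;

When estimates are wrong, row_count can exceed estimate_row_count, which is exactly when the percentage stops being meaningful.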

 

 

blog_48_-_4_-_sys_dm_exec_query_stats

 

I guess we will learn more about this interesting feature in the future. We're certainly on the right track for detecting expensive operations just by looking at this pretty cool animation!

 

 

Nominations Due June 30 for Leadership in Finance Awards

Linda Fishman Hoyle - Tue, 2015-06-02 16:20

A Guest Post by Oracle Senior Director Anne Ozzimo (pictured left), Oracle ERP Applications Product Marketing

Now in its fourth year, Oracle’s Leadership in Finance Award―part of the Oracle Excellence Awards―has become the award of choice for visionary finance executives who have demonstrated outstanding leadership in leveraging Oracle solutions to transform their businesses.

Last year, Oracle Executive Vice Chairman of the Board Jeff Henley honored these winners at the 2014 CFO Summit @ Leaders Circle:

  • Former Hyatt Hotels CFO Gebhard Rainer (North America)
  • SITA CFO Colm O’Higgins (EMEA)
  • University of Melbourne CFO Allan Tait (JAPAC)
  • Nufarm Financial Controller Fabio De Felippis (LAD)

Customers, partners, and Oracle employees are urged to submit their nominations now for the 2015 Leadership in Finance Awards. CFOs, controllers, chief accounting officers, and VPs of finance from North America, EMEA, LAD, and JAPAC are all eligible for consideration, as long as they were directly involved in the success of the Oracle solution deployed. Full details and submission guidelines are available on the Oracle Excellence Awards website.

Award winners will be announced at a new event―the Modern Finance Experience―hosted by Oracle CEO Safra Catz in San Francisco. The dates are September 21-23, 2015. Catz and Henley will honor the award winners in an exclusive ceremony. Nominate your favorite finance executive now before nominations close on June 30, 2015.

Please email comments about this story to me at anne.ozzimo@oracle.com.

Embrace Diversity and Congruence in the Workplace

Linda Fishman Hoyle - Tue, 2015-06-02 16:16

A Guest Post by Andy Campbell, Oracle HCM Cloud Sales Evangelist (pictured left)

What makes a healthy workforce? Ask a roomful of recruiters and someone will very quickly mention diversity. After all, businesses need employees from a wide variety of backgrounds who can inspire and challenge their colleagues and create a workforce that better resembles the outside world.

But if diversity is important, so too is inclusion. If recruiters can find people from diverse backgrounds, with diverse skills, but who share many of a company’s perspectives and values it’s more likely they’ll fit in and contribute towards a cohesive, productive team.

And let’s not forget just how important it is to get that right fit: the most expensive part of the hiring process is when the wrong hire is made and the whole thing has to start again.

For HR departments and recruiters looking for congruence, it seems to me that social sourcing has a whole lot to offer. This is because social media doesn’t just allow recruiters to advertise on a broader scale, but it can even enable them to channel campaigns through employees’ networks.

Why is this important? Because most people’s networks are networks of like-minded people. Recruiting through your number one sales rep’s LinkedIn network is likely to increase your chances of hiring a new employee of similar calibre. Moreover, it will save money on recruitment fees. There is of course a caveat here: social media is only part of the answer. Let’s face it, most people’s social networks are made up of people of a similar demographic to them and this can be limiting. Yes, companies want like-minded people, but they do not want clones.

For this reason, transparency is hugely important. A potential employee must be able to easily find out about a company and determine whether the company shares his or her values and is a good fit.

Social media not only makes it easier for a prospect to find out this information, it makes it absolutely inevitable. All that remains is for employers to ensure that when the right recruit looks at their business through the lens of social media they like what they see.

The real power of social media in the recruitment space is that it works both ways. Not only can employers find out more about potential recruits and access talent that may otherwise have been unavailable to them, but prospects can learn more about the inner workings of a business than ever before. The end result should be more right fits more of the time.

Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June

Online Apps DBA - Tue, 2015-06-02 16:15

oracle_fmw_admin_450

I am starting a new batch of Oracle Fusion Middleware training for architects/administrators on 6th June 2015. To maintain quality, we've restricted the number of trainees in this batch, and only the last 2 seats are now left.

Course/Training Name : Oracle Fusion Middleware Administration (WLS, SOA, OBIEE, WCS, OHS, SSL, etc )

Course/Training Dates : 6, 7, 13, 14, 20, 21, 27, 28 June (9-11 AM PST) followed by 4 hours of lab exercises (we provide a dedicated machine on the cloud and step-by-step lab exercises).

fmw_450

100% Money Back Guarantee: We're ready to prove everything we claim. Buy the course right now and see it for yourself within the first 2 sessions. If the course is not right for you, just let us know after 2 sessions and we'll refund your money in full within 72 hours, no questions asked.

Still worried about your money?
Call Us: US +1 408 627 8021 |  INDIA +91 85 880 97686 (9 AM-5PM GMT only)

Mail Us: support [at] k21technologies.com

Claim your seat by registering for Oracle Fusion Middleware Course from last 2 available seats.

Related Posts for Fusion Middleware
  1. Oracle Fusion Middleware Part II
  2. Oracle Fusion Middleware Overview
  3. Oracle Fusion Middleware : BEA WebLogic or Oracle Application Server
  4. Oracle Fusion Middleware 11g is coming … 1 July 2009
  5. Oracle Fusion Middleware 11g launched today
  6. Oracle Fusion Middleware 11g concepts for Apps DBA’s
  7. Fusion Middleware 11g – How to register Oracle Instance with Webogic Server (opmnctl) ?
  8. Reader’s Question : How to change hostname, domainname, IP of Fusion Middleware 11g (SOA, WebCenter, WebLogic) ?
  9. Oracle Fusion Middleware 11g R1 patchset 2 (11.1.1.3.0) – SOA, WebCenter, RCU, WebLogic (10.3.3)
  10. Oracle Fusion Middleware Challenge : Can you answer Why different domain home for Admin & Managed Server ?
  11. Beware !!! Oracle Fusion Middleware 11g R1 patchset 2 (11.1.1.3.0) is patch set only
  12. Oracle Fusion Middleware 11g R1 PS 3 (11.1.1.4) available now
  13. Oracle Fusion Middleware 11g R1 PS4 (11.1.1.5) is available now
  14. Cloning Oracle Fusion Middleware 11g (SOA, WebCenter, UCM) – copyBinary, copyConfig, extractMovePlan, pasteBinary, pasteConfig, cloningclient.jar
  15. Fusion Middleware 11g startup issue : OPMN RCV No such file or directory
  16. Oracle Fusion Middleware Start-up Issue : jps-config.xml No such file or directory : startScriptEnabled
  17. #OFMW 11.1.1.6 (SOA, WebCenter, IdM, OBIEE, OHS, ADF, CEP…) is now available
  18. ODL-52049 DMS-Startup oracle.core. ojdl.logging. LogUtil log cannot create instance of class ‘oracle. dfw. incident.Incident DetectionLog Filter’
  19. Oracle Fusion Middleware (FMW) 12.1.2 is available now : WebLogic, Coherence, OWSM, OHS, ADF etc
  20. Oracle Fusion Middleware Installation : java.lang.UnsatisfiedLinkError libmawt.so libXtst.so.6: cannot open shared object file
  21. Oracle Fusion Middleware Training – Win FREE Lesson : Suggest topic to our client
  22. YouTube Sunday : Troubleshoot Fusion Middleware Pre-Requisite Failure : Kernel Setting
  23. Oracle Fusion Middleware (FMW) 11.1.1.9 now available : Documentation & Download
  24. Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June
  25. Oracle Fusion Middleware (FMW) Training is now closed

The post Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

How to disable a SQL Plan Directive permanently

Yann Neuhaus - Tue, 2015-06-02 15:10

In 12c you will see a lot of SQL Plan Directives. Some are useful to get better execution plans, but some will trigger too much Adaptive Dynamic Sampling and that can become a big overhead, especially in Standard Edition. Then you don't want to drop them - or they will reappear. You can disable them, but what will happen after the retention weeks? Let's test it.

Disabled directive

A directive has been created which triggers overly expensive dynamic sampling. You don't want that, so you disabled it one year ago with:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
and everything is good. You're happy with that. Here is the directive:
SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 28-APR-14 YES       NO      {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
The directive has not been used since April 2014 thanks to ENABLED being set to NO.

If I run a query with a filter on those columns:

SQL> select count(*) Q1 from DEMO_TABLE where a+b=c+d;

                  Q1
--------------------
               10000

23:10:32 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(*) Q1 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |
-----------------------------------------

there is no dynamic sampling, and this is exactly what I want.

Retention weeks

My retention is the default: 53 weeks. Let's see what happens after 53 weeks. I can call the 'auto drop' job with dbms_spd.drop_sql_plan_directive passing a null instead of a directive_id:

SQL> exec dbms_spd.drop_sql_plan_directive(null);

PL/SQL procedure successfully completed.

Run a few queries

Then let's have a few queries on those table columns:

SQL> select count(*) Q2 from DEMO_TABLE where a+b=c+d;

                  Q2
--------------------
               10000

SQL> select count(*) Q3 from DEMO_TABLE where a+b=c+d;

                  Q3
--------------------
               10000

and check the execution plan:

SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select count(*) Q3 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

-----------------------------------------
| Id  | Operation          | Name       |
-----------------------------------------
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |
-----------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

A directive has been used:
 SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 15-MAY-15 YES       YES     {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
Oh! The directive is back and enabled !

Auto Drop

Here are the criteria for auto-drop. SPDs are considered for dropping when AUTO_DROP is YES and at least one of the following is true:

  • SPD is flagged as redundant
  • One of the tables has been dropped (in recycle_bin means dropped)
  • LAST_USAGE is from before the retention window
  • State is NEW (LAST_USED is null) and CREATED is before retention window
Do you see? There is nothing about ENABLED YES/NO there...

Conclusion

If you want to disable an SPD and be sure that it will never reappear, then you have to do both of the following:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'AUTO_DROP','NO');
Then, because AUTO_DROP is disabled, the directive will never be deleted automatically.
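To double-check, a quick query on the directive (a simple sketch reusing the directive_id from above) should now show both ENABLED and AUTO_DROP set to NO:

SQL> select directive_id, state, enabled, auto_drop from dba_sql_plan_directives where directive_id=14130932452940503769;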

Oracle and Adaptive Case Management: Part 1

Jan Kettenis - Tue, 2015-06-02 14:14
In this blog posting I address four key concepts that are used in Oracle Adaptive Case Management, or ACM for short. This article is the first in a series on ACM.

Recently I was involved in an Oracle Adaptive Case Management (ACM) project. Although some people involved knew about case management in general, it turned out that not everyone immediately understood how case management works with Oracle ACM. As you may be one of them, I will walk you through some of the concepts, using a format that differs from what I have seen so far, and seemed to work well for my audience.

I will not discuss the more general concept of case management. There are sufficient other references that probably do a better job than I could (for example Case Management Model and Notation, or CMMN for short, as defined by the Object Management Group ). For this article I will restrict myself to explaining that, unlike a "normal" BPMN process, case management supports a much more flexible "flow" of a process, for example supporting paths (flows) that were not thought of before, activity types that were not identified before, as well as stakeholders that were not known yet during the initial design. The "A" of Adaptive in ACM refers to the fact that some of this behavior can be configured run-time (after the system is out of development).

A typical example used in the context of case management is a complaints management process. Depending on the nature of the complaint, such a process can go back and forth a couple of times when more information about the complaint becomes available, coming from the filer or after consultations of experts that were not yet recognized.

Case Life Cycle
The first concept to discuss is that of a Case Life Cycle. A case can have the state open, suspended and closed. Unlike a BPMN process, after being closed a case can be reopened again (if required).


Milestones
The second concept is that of Milestones. In real life, you are probably used to defining one single milestone for every phase or "stage" (as it is called in CMMN) of a case, marking a significant result reached at the end of it. With case management there may be milestones that are not always applicable, and therefore one should be able to skip them, or even close them halfway. It may also be necessary to revoke a previously reached milestone.

For Oracle ACM it therefore typically works best to identify the beginning and end of a stage as a milestone. So instead of defining one milestone "Complaint Evaluated" you define two milestones, "Complaint Evaluation Started" and "Complaint Evaluation Ended". With Oracle ACM one can flag a milestone as being reached at any point during the stage.

Especially unpredictable behavior concerning the way milestones are reached is something that is very hard to model in BPMN.


Activities
As a third concept there are the Activities to discuss. From the perspective of the case, an activity is an atomic step that happens within a stage (in CMMN this is called a "task"). Normally, during its execution the case manager is not interested in the way the activity is executed, only in the result. Think for example about the consultation of an external expert: the only thing the complaints manager should be interested in, is the expert's report.


Within a stage an activity can be mandatory (Activity 1), for example the Initial Complaints Assessment, or optional (Activity 5), for example Request Info from Filer. Some activities may happen more than once within a stage (Activity 4), for example Request Expert Input. An activity may also apply to more than one stage (Activity 3), for example Update Complaints Manager. Activities may be sequential (Activity 1 and 2) or parallel (Activity 3 is parallel with 1 and 2). There can also be ad-hoc activities, like for example notifying some external party that turns out to be a stakeholder of the complaint.

As I will explain in the next article, an activity can be anything from a simple human task to perform, to a complete and even complex business process of its own.

Stakeholders
The fourth and last concept that I would like to discuss in this article, is that of Stakeholders. In real life a stakeholder is anyone that has some interest in the case, but may not always be involved as a case worker (someone doing something for the case) or case reviewer.

In Oracle ACM, however, you are only interested in stakeholders that are case workers or case reviewers (very similar to the CMMN notion of case workers, except that CMMN does not recognize case reviewers). As I will explain later, with Oracle ACM there can still be people that play a part in a case while not being defined as a stakeholder.

Handling the Leap Second – Linux

Pythian Group - Tue, 2015-06-02 13:31

Last week I published a blog post titled “Are You Ready For the Leap Second?“, and by looking at the blog statistics I could tell that many of you read it, and that’s good, because you became aware of the risks that the leap second on June 30th, 2015 introduces. On the other hand, I must admit I didn’t provide clear instructions that you could use to avoid all possible scenarios. I’ve been looking into this for a good while and I think the official RedHat announcements and My Oracle Support notes are confusing. This blog post is my attempt to explain how to avoid the possible issues.

Update (June 9th, 2015): Made it clear in the text below that ntp's slewing mode (ntp -x) is mandatory for Oracle Grid Infrastructure, and therefore for RAC too.

The complexity of solving these problems comes from the fact that there are multiple contributing factors. The behavior of the system will depend on a combination of these factors.
In the coming sections I’ll try to explain what exactly you should pay attention to and what you should do to avoid problems. The content of this post is fully theoretical and based on the documentation I’ve read. I have NOT tested it, so it may behave differently. Please, if you notice any nonsense in what I’m writing, let me know by leaving a comment!

1. Collect the data

The following information will be required for you to understand what you’re dealing with:

  1. OS version and kernel version:
    $ cat /etc/issue
    Oracle Linux Server release 6.4
    Kernel \r on an \m
    
    $ uname -r
    2.6.39-400.17.1.el6uek.x86_64
    
  2. Is NTP used and which version of NTP is installed:
    $ ps -ef | grep ntp
    oracle    1627  1598  0 02:06 pts/0    00:00:00 grep ntp
    ntp       7419     1  0 May17 ?        00:00:17 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
    
    $ rpm -qa | grep ntp-
    ntp-4.2.4p8-3.el6.x86_64
    
  3. Version of tzdata and the configuration of /etc/localtime:
    $ rpm -qa | grep tzdata-
    tzdata-2012j-1.el6.noarc
    
    $ file /etc/localtime
    /etc/localtime: timezone data, version 2, 5 gmt time flags, 5 std time flags, no leap seconds, 235 transition times, 5 abbreviation chars
    
2. Check the kernel

Here’s a number of bugs that are related to leap second handling on Linux:

  1. System hangs on printing the leap second insertion message – This bug will hang your server at the time when the NTP notifies kernel about the leap second, and that can happen anytime on the day before the leap second (in our case anytime on June 30th, 2015). It’s fixed in kernel-2.6.9-89.EL (RHEL4) and kernel-2.6.18-164.el5 (RHEL5).
  2. Systems hang due to leap-second livelock – Because of this bug systems repeatedly crash due to NMI Watchdog detecting a hang. This becomes effective when the leap second is added. The note doesn’t exactly specify which versions fix the bug.
  3. Why is there high CPU usage after inserting the leap second? – This bug causes futex-active applications (e.g. Java) to start consuming 100% CPU. Based on what's discussed in this email in the Linux Kernel Mailing List Archive, it's triggered by a mismatch between the timekeeping and hrtimer structures, which the leap second introduces. The document again does not clearly specify which versions fix the problem; however, this "Kernel Bug Fix Update" mentions these symptoms as fixed in 2.6.32-279.5.2.el6.

MOS Note: "How Leap Second Affects the OS Clock on Linux and Oracle VM (Doc ID 1453523.1)" mentions that kernels 2.4 to 2.6.39 are affected, but I'd like to know the exact versions. I've searched a lot, but I haven't found much, so here are the ones that I did find:

I’m quite sure by reading this you’re thinking: “What a mess!”. And that’s true. I believe, the safest approach is to be on kernel 2.6.39-200.29.3 or higher.

3. NTP is used

You’re using NTP if the ntpd process is running. In the outputs displayed above it’s running and has the following arguments: ntpd -u ntp:ntp -p /var/run/ntpd.pid -g. The behavior of the system during the leap second depends on which version of NTP you use and what’s the environment.

  • ntp-4.2.2p1-9 or higher (but not ntp-4.2.6p5-19.el7, ntp-4.2.6p5-1.el6 and ntp-4.2.6p5-2.el6_6) configured in slew mode (with option "-x") – The leap second is not added by the kernel; instead, the extra time is added by increasing the length of each second over a ~2000-second period, based on the difference between the server's time and the time from NTP after the leap second. The clock is never turned backward. This is the configuration you want because:
    • Time never goes back, so there will be no impact to the application logic.
    • Strange time values like 23:59:60 are not used, so you won’t hit any DATE and TIMESTAMP datatype limitation issues.
    • As the leap second is not actually added, it should be possible to avoid all 3 kernel bugs that I mentioned by using this configuration. In many cases updating NTP is much simpler than a kernel upgrade, so if you're still on an affected kernel use this option to bypass the bugs.

    The drawbacks of this configuration are related to the fact that the leap second is smeared out over a longer period of time:

    • This probably is not usable for applications requiring very accurate time.
    • This may not be usable for some clusters where all nodes must have exactly the same clock time, because NTP updates are usually received every 1 to 18 minutes; add to that the ~2000 seconds of time adjustment in slew mode, and the clocks could be off for as long as ~50 minutes. Please note that slewing mode (ntp -x) is mandatory for Oracle Grid Infrastructure, and therefore for RAC, as documented in the Oracle® Grid Infrastructure Installation Guides 11g Release 2 and 12c Release 1.
  • ntp-4.2.2p1-9 or higher configured without slew mode (no “-x” option) – The NTP will notify the kernel about the upcoming leap second some time during June 30th, and the leap second will be added as an extra “23:59:59″ second (time goes backward by one second). You will want to be on kernel with all fixes present.
  • below ntp-4.2.2p1-9 – The NTP will notify the kernel about the upcoming leap second some time during June 30th, and depending on the environment, the leap second will be added as an extra “23:59:59″ second (time goes backward by one second), or the time will freeze for one second at midnight.

Extra precaution: if you’re running NTP make sure your /etc/localtime does not include leap seconds by running “file /etc/localtime” and confirming it lists message “no leap seconds”.

4. NTP is NOT used

If NTP is not used the time is managed locally by the server. The time is most likely off already, so I really do recommend enabling NTP in slew mode as described above, this is the right moment to do so.

If you have tzdata-2015a or higher installed, the information about the leap second on June 30th, 2015 is also available locally on the server, but that doesn't yet mean it's going to be added. Also, if NTP is not used and the leap second is added locally, it will appear as "23:59:60″, which is an unsupported value for DATE and TIMESTAMP columns, so this is the configuration you don't want to use. Here are the different conditions:

  • You’re below tzdata-2015a – the leap second will not be added.
  • You're on tzdata-2015a or higher and "file /etc/localtime" includes message "X leap seconds", where X is a number – the leap second will be added as "23:59:60″ and will cause problems for your DATE/TIMESTAMP datatypes. You don't want this configuration. Disable leap second by copying the appropriate timezone file from /usr/share/zoneinfo over /etc/localtime. It's a dynamic change, no reboots needed. (Timezone files including the leap seconds are located in /usr/share/zoneinfo/right)
  • “file /etc/localtime” includes message “no leap seconds” – the leap second will not be added.
The recommendations

Again I must say this is a theoretical summary on how to avoid leap second issues on Linux, based on what’s written above. Make sure you think about it before implementing as you’re the one who knows your own systems:

  • Single node servers, or clusters where time between nodes can differ – Upgrade to ntp-4.2.2p1-9 or higher and configure it in slew mode (option “-x”). This should avoid the kernel bugs too, but due to lack of accurate documentation it’s still safer to be on kernel 2.6.39-200.29.3 or higher.
  • Clusters or applications with very accurate time requirements – NTP with slew mode is not suitable as it’s unpredictable when it will start adjusting the time on each server. You want to be on kernel 2.6.39-200.29.3 or higher. NTP should be enabled. Leap second will be added as an extra “23:59:59″ second (the time will go backward by one second). Oracle Database/Clusterware should detect time drifting and should deal with it. Check MOS for any bugs related to time drifting for the versions you’re running.
  • I don’t care about the time accuracy, I can’t update any packages, but need my systems up at any cost – The simplest solution to this is stopping the NTP on June 29th and starting it up on July 1st, so that the server was left unaware of the leap second. Also, you need to make sure the /etc/localtime does not contain the leap second for June 30th, 2015 as explained above.
    -- on June 29th (UTC)
    # /etc/init.d/ntpd stop
    # date -s "`date`"    (reset the system clock)
    -- on July 1st (UTC)
    # /etc/init.d/ntpd start
  • Very accurate time requirements + time reduction is not allowed – I don’t know. I can’t see how this can be implemented. Does anyone have any ideas?
Post Scriptum

Initially I couldn’t understand why this extra second caused so much trouble. Don’t we change the time by a round hour twice a year without any issues? I found the answers during the research, and it’s obvious. Servers work in UTC time, which does not have daylight saving time changes. The timezone information is added just for representation purposes later on. UTC Time is continuous and predictable, but the leap second is something which breaks this normal continuity and that’s why it is so difficult to handle it. It’s also a known fact that Oracle Databases rely heavily on gettimeofday() system calls and these work in UTC too.

 

Discover more about Pythian’s Oracle Ace Maris Elsins.

Categories: DBA Blogs

Predicate Order

Jonathan Lewis - Tue, 2015-06-02 12:10

A recent OTN post demonstrated a very important point about looking at execution plans – especially when you don’t use the right data types. The question was:

We’ve this query which throws invalid number

SELECT * FROM table A
WHERE A.corporate_id IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';

However it works fine if we use not in instead of in

SELECT * FROM table A  
WHERE A.corporate_id  NOT IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';

Please assist.

A follow-up post told us that corporate_id was a varchar() type – so the root cause of the ORA-01722: invalid number error is simply that you shouldn’t be mixing data types. Either the corporate_id should have been defined as numeric or the in-list should have been a list of varchar2() values. (And, of course, the character strings that look like dates should have been converted explicitly to date data types using either the to_date() function with a 4-digit year or the date ‘yyyy-mm-dd’ syntax; and using “created_dt >=  19th Dec and created_dt < 26th Dec” would have given the optimizer a chance to get a better cardinality estimate)
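
For illustration, a version of the query that avoids both the implicit number conversion and the ambiguous date strings might look like the sketch below, keeping corporate_id as a character column and using a hypothetical table name (the real one wasn't given in the OTN post):

select  *
from    table_a a
where   a.corporate_id in ('59375','54387')
and     a.created_dt >= date '2014-12-19'
and     a.created_dt <  date '2014-12-26'
;
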

The answer to the slightly more specific problem – why does changing NOT IN to IN allow the query to run rather than crashing – is (probably) one that I first addressed in an article in Oracle Magazine just over eleven years ago: with CPU costing enabled Oracle can change the order in which it applies filter predicates to a table. It’s also a question that can easily be answered by my commonest response to many of the optimizer questions that appear on OTN – look at the execution plan.

In this example it's a fairly safe bet that there's a reasonably small volume of data (according to the optimizer's estimate) where to_number(corporate_id) is one of the required values, and a much larger volume of data where it is not, with some intermediate volume of data where the created_dt falls in the required date range. With CPU costing enabled (optional in 9i, enabled by default in 10g) the optimizer would then do some arithmetic to calculate the most cost-effective order of applying the filter predicates, based on things like:

  • the number of CPU cycles it takes to walk along a row to find a particular column;
  • the number of CPU cycles it takes to convert a character column to a number and compare it with a number;
  • the number of CPU cycles it takes to truncate a date column and compare it with a string;
  • the number of rows that would pass the numeric test, hence requiring the first-applied date test, compared with the number of rows that would survive the first-applied date test, hence requiring either the second date test or the numeric test to take place.

Here's some code to demonstrate the point. It may require the system stats to be set to particular values to ensure that it is reasonably repeatable, but there's probably some flexibility in the range, which is why I've called dbms_stats.set_system_stats() in the first few lines:

drop table t1 purge;

create table t1 (
        v1      varchar2(10),
        d1      date
)
;

insert into t1 values(1,'01-Jan-2015');

insert into t1 values('x','02-Jan-2015');

insert into t1 values(3,'03-Jan-2015');
insert into t1 values(4,'04-Jan-2015');
insert into t1 values(5,'05-Jan-2015');
insert into t1 values(6,'06-Jan-2015');
insert into t1 values(7,'07-Jan-2015');
insert into t1 values(8,'08-Jan-2015');
insert into t1 values(9,'09-Jan-2015');
insert into t1 values(10,'10-Jan-2015');

execute dbms_stats.gather_table_stats(user,'t1');

First we create a table, load some data, and gather stats. You’ll notice that I’ve got a varchar2(10) column into which I’ve inserted numbers for all rows except one where it holds the value ‘x’. Now we just run some code to check the execution plans for a couple of queries.
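
(The dbms_stats.set_system_stats() calls referred to above are not reproduced in the listing; purely as an illustration of the mechanism, and not the values used for this test, such calls take the following form.)

begin
        dbms_stats.set_system_stats('CPUSPEED', 1000);   -- placeholder figures only
        dbms_stats.set_system_stats('SREADTIM', 5);
        dbms_stats.set_system_stats('MREADTIM', 10);
        dbms_stats.set_system_stats('MBRC', 8);
end;
/
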


explain plan for
select
        *
from    t1
where   v1 in (4,6)
and     d1 between '03-Jan-2015' and '09-Jan-2015'
;

select * from table(dbms_xplan.display);

explain plan for
select
        *
from    t1
where   v1 not in (4,6)
and     d1 between '03-Jan-2015' and '&1-Jan-2015'
;

select * from table(dbms_xplan.display);

As with the original question I've taken a query with an IN operator and changed it to NOT IN. The in-list is numeric even though the relevant column is varchar2(10). The first query crashes with ORA-01722: invalid number, the second one runs and returns the correct result. You'll notice, of course, that the "bad" value for v1 is not in the set of rows where d1 is between 3rd and 9th Jan 2015. You'll also notice that in my code I've used &1 for the end day in the query with the NOT IN clause so that I can re-run the query a few times to show the effects of changing the date range. Here are the execution plans – first with the IN clause:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    20 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    20 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter((TO_NUMBER("V1")=4 OR TO_NUMBER("V1")=6) AND
              "D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The optimizer arithmetic predicts 2 rows returned using a full tablescan – it doesn't know the query is going to crash. Notice the predicate information, though. The first predicate says Oracle will attempt to convert v1 to a number and compare it with 4 and then (if the first test fails) with 6. The query will crash as soon as it hits a row with a non-numeric value for v1. In outline, the optimizer has decided that the numeric conversion and test is very cheap (on CPU) and only a few rows will survive it to take the more expensive date comparison; whereas either of the (expensive) date comparisons would leave a lot of rows that would still have to be checked with the numeric test. It makes sense to do the numeric comparison first.

Here’s the plan for the query with the NOT IN clause when I set the date range to be 3rd Jan to 7th Jan.


Execution plan for NOT IN:  7th Jan
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     5 |    50 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     5 |    50 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1"<=TO_DATE(' 2015-01-07 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

The plan is still a full tablescan – there are no indexes available – and the estimated number of rows has gone up to 5. The important thing, though, is the predicate section. In this case the optimizer has decided that the first thing it will apply is one of the (relatively expensive) date predicates – "d1 <= 7th Jan", as the filter line shows – before worrying about the "NOT IN" numeric predicate. The optimizer has worked out that almost all the data will survive the NOT IN predicate, so it's not efficient to apply it before using other predicates that eliminate more data.

By a stroke of luck my simple example happened to be a very good example. Here’s what happened when I set the end date to 8th Jan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     6 |    60 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     6 |    60 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "D1"<=TO_DATE(' 2015-01-08 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

The estimated rows has gone up to 6 – but the interesting thing, as before, is the predicate section: in the previous example Oracle did the tests in the order “upper bound”, “lower bound”, “numeric”; in this test it has done “lower bound”, “upper bound”, “numeric”.

And this is what I got when I ran the test with 9th Jan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     7 |    70 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     7 |    70 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6 AND
              "D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Again the estimated rows has gone up by one, but the ever-interesting predicate section now shows the evaluation order as: “lower bound”, “numeric”, “upper bound”.

There are only 6 possible orders for the predicate evaluation for the query with the NOT IN clause, and we've seen three of them. Three will fail, three will succeed – and I got all three of the successful orders. It wouldn't take much fiddling around with the data (careful choice of duplicate values, variation in ranges of low and high values, and so on) and I could find a data set where small changes in the requested date range would allow me to reproduce all six variations. In fact when I changed my nls_date_format to "dd-mon-yy" and used a 2 digit year for testing I got two of the three possible failing predicate evaluation orders – "numeric", "lower bound", "higher bound" and "higher bound", "numeric", "lower bound" – without changing the data set. (To be able to get all six orders with a single data set I'd probably need a data set where the "bad" v1 value corresponded to a d1 value somewhere near the middle of the d1 range.)

The bottom line – use the correct data types; make sure your date literals are dates with a 4-digit year; check the predicate section to see if the optimizer did any implicit conversions with your predicates and what order it used them in. If you don’t do this you may find that a query can work perfectly for months, then crash because you finally got unlucky with the arithmetic.


Partner Corner: TeamInformatics on What Oracle Documents Cloud Service Means for You?

WebCenter Team - Tue, 2015-06-02 08:58

Thanks to all who attended last month's executive webcast on Introducing Documents Cloud Service. We appreciate your feedback and all the follow up. For those who missed it or would like a replay of the webcast, you can watch it here.

To follow up on the discussion of Oracle Documents Cloud Service and what that means for your organization, one of our valued partners, TeamInformatics recently published an educational post on what the solution offers and how it can be leveraged to solve your day to day content needs. Given the informational value of the post, we are re-posting their original piece here in the hope that you find it useful. Looking forward to hearing your comments and questions.

Originally published on TeamInformatics blog

What Oracle’s Documents Cloud Service Means for You

By: Jon Chartrand - Solution Architect, TeamInformatics

The sphere of influence that is Enterprise Content Management has been steadily expanding to encompass areas such as records, digital assets, web content, and others. This has meant your ECM solution suite has had to grow and mature to support and maintain these activities. The newest is cloud-based document management, sharing, and collaboration. Now, I bet you’re thinking, “We don’t need that nor do we support that in our enterprise.” Here’s the trick though: Your users are already doing it and they’re very likely making it happen with software that’s not a part of your enterprise ecosystem. That means it’s probably unsupported, potentially insecure, and generally out of your control – not a good combination.

The rapid growth of this field has led to many solutions which attempt to enhance the consumer-level products and businessify them by offering a few more features at a wildly increased price. While these options can seem appealing, they still represent a gap in enterprise coverage as they aren’t themselves enterprise applications. Oracle, however, has expanded their Public Cloud offering – already the largest in the world – to not only fill the gap of Enterprise File Sync & Share, but also to expand cloud content management to your on-premises solutions, as well as mesh seamlessly with other applications. Now it’s possible to keep your users happy and productive while maintaining control and even expanding the capabilities of your enterprise. Introducing Oracle’s Documents Cloud Service, also known as DOCS.

DOCS for File Sync & Share

DOCS represents a trident of capability, the first tine of which is as an enterprise-grade file sync and share replacement for the consumer-grade applications your users may already be utilizing. Before you can sync or share content, however, you have to manage it and Oracle provides a modern, intuitive web interface, for access across every device, to do just that. From here users can upload, preview, revision, delete, and share content and folders with ease making this the front line in our EFSS battle.

On the syncing front, native desktop applications for both Windows and MacOS allow users to seamlessly sync folders of their choosing with the local file system. This means files are available for viewing and editing when and where users demand them, without the need for an Internet connection. When connectivity is restored, the sync application automatically updates the cloud with any changes, removing a step for the user.

On the sharing front, sharing content internally and externally has been rendered both simple and secure. Internally, named users can be shared to folders in one of four roles: Manager, Contributor, Downloader, or Reader. This means you have control over who has access and what kind of permissions they receive. When sharing with an external, non-DOCS user, Oracle has provided several capabilities to make the process simple and safe. First, public link accesses are carefully tracked and an audit trail is provided. Each public link can also be assigned an expiration date so you don't have to worry about forever managing every link that's been distributed. Even more, each public link can be created with a required passcode so that even if the link is improperly distributed, the materials remain secure. Finally, each public link can be assigned a role which is granted to those who use it. All these features combine to allow incredibly granular control over who can access what content, when, and with what privileges.

The last point is for those on-the-go. For mobile users Oracle provides native applications for both Android and iOS which enable feature-parity between the mobile and web platforms. This means users can access their content from virtually any device, at any time, and maintain the full suite of capabilities no matter what method they’re using. This represents an unprecedented level of access to and control over enterprise content for your users.

DOCS for Hybrid Content Management

File Sync & Share is a great step forward in content management, however we’re still potentially left with a cache of content that stands apart from your Enterprise Content repository. DOCS addresses this through a process whereby your ECM repository is “tethered” to your DOCS repository through a 3rd party solution and content is shuttled between the two applications when edits are made, ensuring both repositories have the appropriate version available. This process allows your current ECM solution to remain the single point of truth in your enterprise for all content but enables users to access that content from beyond the firewall in a safe and secure manner.

The use cases for this method are almost endless but imagine a contract package being worked on by a CMO, a salesperson in the field, and a client with contributor access via a shared link. The CMO, working from within the company, can make edits to the documents and upload them to the ECM system. The salesperson in the field accesses the documents via DOCS and can also make changes and suggestions. As revisions are made, the CMO is kept in the loop as the document updates back to the ECM system as well. Finally, when complete, the client can access the documents, digitally sign them, and upload new versions to DOCS. Within moments of uploading the CMO has access and can move them to the appropriate next step.

Hybrid Content Management takes the premise of EFSS and keeps it a truly enterprise endeavor by ensuring that content is reflective of only one repository. This ensures that all users are working with the same materials without fear of unknown changes or missing versions. It also guarantees that content owned by the enterprise is continually merged into the enterprise so there’s reduced anxiety over content ownership and location.

DOCS for PaaS Application Integration

Finally, DOCS takes an even longer and wider view of its role in the enterprise by enabling you to integrate other Software as a Service (SaaS) applications. The idea here is that any application to which users are uploading content represents another repository in the enterprise. Why should contracts uploaded to SalesForce live in that application? It’s not a content management application and it doesn’t have the metadata, workflows, and processes that your ECM system has. Documents Cloud Service works to solve this issue by providing a rich API foundation and an accessible embedded interface to allow you to merge applications with it and utilize its capabilities as a content platform. This Platform as a Service (PaaS) functionality allows you to keep your enterprises’ content in a single location – especially if you’re utilizing the Hybrid CM capabilities and merging your DOCS repository with your ECM platform.

With the embedded interface method you can add a simple iframe to any updateable UI to create an almost seamless merging of the two applications. While it looks like a user is uploading documents to the primary application, in reality they’re uploading to DOCS. With the API method, much more elaborate services can be written to customize the functionality of virtually any application, creating a background integration with Documents Cloud Service that is completely transparent to users. In either case, you’re removing another disparate cache of content and centralizing management into a single location. Ultimately this means less storage overhead for your SaaS applications and more complete control over your enterprise content.

Bringing It All Together

Consider a purchase order document uploaded to a contact entity in SalesForce. Through an integration with Documents Cloud Service, the content item is actually seamlessly uploaded to DOCS. With the DOCS repository linked to your on-premises platform, the content is replicated to the appropriate folder in the ECM system and an automatic workflow is started, alerting the Director of Sales to the new purchase order and requesting approval. The Director makes a small edit and approves the content. This sends a notification to the sales agent and ends the workflow. The content, now being newer in the ECM system than on DOCS, then flows outward to the cloud, updating the version there. The sales agent happens to also use the desktop client to sync DOCS content with their laptop and so the version there is updated automatically. On receiving the notification, the agent goes to their Oracle Documents folder on the desktop and opens the purchase order to review the Director's changes. Satisfied, the agent closes the document and then right-clicks on it to access DOCS' sharing. The agent creates a public link with downloader privileges and sends this link to the purchaser.

In this scenario, the content is available through the SalesForce site, the DOCS site, the DOCS mobile apps, synced to the desktop, and through the on-premises ECM platform. Instead of having two, three, even four different copies of the content across various systems and on various workstations, all versions are centrally managed and maintained in the system of record. This degree of centralized control is precisely what Enterprise Content Management seeks to achieve, and Documents Cloud Service brings us all one step closer to that goal.

Cassandra Update – Leap Second & JMX Security

Pythian Group - Tue, 2015-06-02 08:45

This is a short post about two things that should be on the to-do list for all Cassandra administrators: the leap second issue and the new JMX default.

The Leap Second

Before we move on, it's worth reading up on how the leap second affects Cassandra in more detail.

In short, you must update your JVM to version 7u60 or above. If you are on Cassandra 2.0.14+ or 2.1.x, then all JVMs on version 8 are safe and tested. One issue that the JVM doesn't solve is that time-series data might become interleaved. If this is critical for your deployment (it isn't for most cases), be aware of it.
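
A quick sanity check on each node, assuming the java binary on the PATH is the one Cassandra is started with:

    $ java -version     (expect 1.7.0_60 or later, or any 1.8 build on 2.0.14+/2.1.x)
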

JMX Security

Since Cassandra 2.0.14 and 2.1.4 the cassandra-env.sh file configures JMX to listen on localhost only. So unless you are fine with this, you should enable remote access, while making sure you activate security!

Short version:

    1. Edit $CASSANDRA_CONF/cassandra-env.sh and set LOCAL_JMX=no
    2. Create /etc/cassandra/jmxremote.password and add the username and password:
monitorRole QED
controlRole R&D
USERNAME PASSWORD
    3. Change ownership to the user you run Cassandra with and restrict permissions to read only:
chown cassandra:cassandra /etc/cassandra/jmxremote.password
chmod 400 /etc/cassandra/jmxremote.password
    4. Add the username defined before with readwrite permission to $JAVA_HOME/lib/management/jmxremote.access:
monitorRole readonly
USERNAME readwrite
controlRole readwrite \
create javax.management.monitor.,javax.management.timer. \
unregister
    5. Restart Cassandra
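
For reference, this is roughly what the stock cassandra-env.sh in these versions adds to the JVM options once LOCAL_JMX=no; check your own copy rather than treating this sketch as authoritative:

if [ "$LOCAL_JMX" = "yes" ]; then
  JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT"
else
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
fi

Once Cassandra is back up, JMX clients need to supply the credentials, for example: nodetool -u USERNAME -pw PASSWORD status.
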

 

Want to learn more about our Cassandra services? Contact us now!

Categories: DBA Blogs

Old Folks Boogie

Floyd Teter - Tue, 2015-06-02 08:42
And you know
That you're over the hill
When your mind makes a promise
That your body can't fill...
            - From Little Feat's "Old Folks Boogie"
I think I must be over the hill...a grumpy old man.  There was a time when, faced with an app that failed to work as promised, I would fuss and fight with that app to make it work.  No more.  Now, in the event that an app doesn't work as promised, I delete it and move on to try something else.  No patience anymore.  I continually tell myself that I'll put the "fixer" hat back on my head, but I just never get around to actually doing so.
Mobile apps are the best example of my impatience.  There are many mobile apps for any outcome I care to achieve:  mobile meetings, tracking my heart rate, listening to music, taking notes... Plenty of alternatives.  So, when I run into an app that fails to work (or even fails to meet my expectations), I immediately junk it and move on to the next choice.  No effort, no feedback to the app developer, no nothing.  Just junked.  As my newest daughter-in-law would say: "ain't nobody got time for that".
In today's market, there is an expectation that apps just work.  Buggy apps die quick deaths in the market.  Reliability is not something special now...it's simply a requirement to get a seat at the table.
Classic example.  Last week, I was in Kansas visiting my new granddaughter.  Having taken our pet dogs on the trip, I wanted to find the local dog park.  Google Maps failed to find the dog park recommended by my son...the town in Kansas is just too small for Google Maps to fuss with.  Waze took me right to it.  Any guess as to which app is still on my smartphone and which one got junked on the spot?
Of course, there is a downside.  If everyone took my approach, the developers would never get feedback on their app from the field.  So their app would never improve.  But I have a "grumpy old man" response for that too.  So what?  Why should I be the beta tester?  Build something that works in the first place.
So yeah, I have that grumpy old man attitude when it comes to apps...especially mobile apps.  It either meets my expectations or gets kicked to the curb without further thought.  If I don't immediately get the outcome I'm expecting, I move on.
What about you?  Are you another member of the figurative "grumpy old man" or "grumpy old woman" club (no gender bias here - we accept everyone)?  Or are you willing to provide feedback and work with an app to help make it better?  Respond in the comments.

Recap SharePoint Event Paris 2015

Yann Neuhaus - Tue, 2015-06-02 03:12

 






Guillaume Meunier and I went to SharePoint Event 2015 in Paris on Saturday, May 30.

This event was well organized and on time! We learned about the new features regarding SharePoint, related to BI, SQL, governance, collaboration & communication, and workflow and process systems.




The sessions we followed are:

How OneDrive Company revolutionized storing files in my business
Almost all companies use SMB file shares, with network drives deployed on workstations, to store and centralize the documents produced by the company. SharePoint 2013, whether On-Premises or Online, can cover this need while adding mobility, an offline mode and a full web mode. In a document management implementation, the first step is to store and organize documents; in addition, quality audits become easier for the auditor. This session was presented by Kevin Trelohan and Hans Brender.

 

Making SharePoint Governance work for Businesses, IT and Users
With every SharePoint implementation comes the issue of governance. We all need it, but no one wants to do it. This session showed us - in a practical way - how to implement a good governance practice that will engage users, IT and business users throughout the entire lifetime of your SharePoint platform. We had an overview of the practical tools and methods to overcome most issues we have to deal with, and a complete framework for SharePoint governance was shared. This session was the latest revision of the SharePoint Governance session delivered by Anders Skjoenaa.

Making-SharePoint-Governance-work-for-business-IT-and-users_20150602-093333_1.pdf

Integrating SharePoint into everyday working methods for a successful user adoption
Regarding SharePoint implementation, for any organization it means a significant investment, both human and financial.
Once the platform deployment and content migration are achieved, many companies face the same issue: low usage by their business users.
AvePoint showed us their solutions to enhance business users' daily methods: better management of internal meetings and projects directly in SharePoint, sharing content from SharePoint with external stakeholders, using Outlook/Office to save documents directly into SharePoint, and a portal giving access to individualized service catalogs depending on business needs.
A session presented by Emmanuel Deletang from AvePoint.

 

High Availability & Disaster Recovery in SharePoint 2013 with SQL Server Always On Availability Groups!
SQL Server is really the brain of SharePoint; in this session, Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP) gave us an overview of what any SharePoint consultant and DBA need to know regarding business continuity in SharePoint 2013. Of course SQL Server plays a major role in this story.
Topics covered:

  • Concepts of business continuity
  • SharePoint and Business continuity
  • Patterns and anti-patterns
  • SharePoint and SQL Server Always on Availability groups: what works, what doesn’t work (demos) (HA and DR)

HA-et-DR-en-SharePoint-2013-avec-SQL-server-AOAVG.pdf

If you need more information regarding SQL Server, please feel free to contact our experts: Stephane Haby, David Barbarin & Stephane Savorgano.

Power BI 365
Power BI is THE BI brick of Office 365.
Resolutely oriented towards self-service BI, it is intended for users who handle data as part of their job. But how do you do that when you are not a Business Intelligence specialist?
This session took a complete tour of the Power BI functionalities: Power Query, Power Pivot, Power View, Power Map, Power Q&A, the Power BI site and the Power BI app. It helped us discover and understand the added value of self-service BI for users.

A quick reminder: dbi services' BI specialist is Matthieu Munch; do not hesitate to contact him if you need more information regarding BI tools and consulting.

Power-BI-365.pdf

Automate processes with SharePoint?
When talking about business process automation or BPM solutions, SharePoint's limits are often reached. There are then several possibilities: develop custom solutions, rework the customer's requirements, or use third-party solutions (such as Nintex). There is never a single right way to go, but I would say the most successful approach is the one where best practices drive the business owner to satisfaction. We had a presentation of the various methods and their impact on costs, along with the capabilities and constraints they induce.

 

The last session was delivered by the whole MVP team; it was a summary of the MS Ignite sessions covering the new features of SharePoint 2016.

Please have a look at this blog regarding this subject: SharePoint 2016: What’s new? And What to expect?