Feed aggregator

Nominations Due June 30 for Leadership in Finance Awards

Linda Fishman Hoyle - Tue, 2015-06-02 16:20

A Guest Post by Oracle Senior Director Anne Ozzimo, Oracle ERP Applications Product Marketing

Now in its fourth year, Oracle’s Leadership in Finance Award―part of the Oracle Excellence Awards―has become the award of choice for visionary finance executives who have demonstrated outstanding leadership in leveraging Oracle solutions to transform their businesses.

Last year, Oracle Executive Vice Chairman of the Board Jeff Henley honored these winners at the 2014 CFO Summit @ Leaders Circle:

  • Former Hyatt Hotels CFO Gebhard Rainer (North America)
  • SITA CFO Colm O’Higgins (EMEA)
  • University of Melbourne CFO Allan Tait (JAPAC)
  • Nufarm Financial Controller Fabio De Felippis (LAD)

Customers, partners, and Oracle employees are urged to submit their nominations now for the 2015 Leadership in Finance Awards. CFOs, controllers, chief accounting officers, and VPs of finance from North America, EMEA, LAD, and JAPAC are all eligible for consideration, as long as they were directly involved in the success of the Oracle solution deployed. Full details and submission guidelines are available on the Oracle Excellence Awards website.

Award winners will be announced at a new event―the Modern Finance Experience―hosted by Oracle CEO Safra Catz in San Francisco. The dates are September 21-23, 2015. Catz and Henley will honor the award winners in an exclusive ceremony. Nominate your favorite finance executive now before nominations close on June 30, 2015.

Please email comments about this story to me at

Embrace Diversity and Congruence in the Workplace

Linda Fishman Hoyle - Tue, 2015-06-02 16:16

A Guest Post by Andy Campbell, Oracle HCM Cloud Sales Evangelist

What makes a healthy workforce? Ask a roomful of recruiters and someone will very quickly mention diversity. After all, businesses need employees from a wide variety of backgrounds who can inspire and challenge their colleagues and create a workforce that better resembles the outside world.

But if diversity is important, so too is inclusion. If recruiters can find people from diverse backgrounds, with diverse skills, but who share many of a company’s perspectives and values, it’s more likely they’ll fit in and contribute towards a cohesive, productive team.

And let’s not forget just how important it is to get that right fit: the most expensive part of the hiring process is when the wrong hire is made and the whole thing has to start again.

For HR departments and recruiters looking for congruence, it seems to me that social sourcing has a whole lot to offer. This is because social media doesn’t just allow recruiters to advertise on a broader scale, but it can even enable them to channel campaigns through employees’ networks.

Why is this important? Because most people’s networks are networks of like-minded people. Recruiting through your number one sales rep’s LinkedIn network is likely to increase your chances of hiring a new employee of similar calibre. Moreover, it will save money on recruitment fees. There is of course a caveat here: social media is only part of the answer. Let’s face it, most people’s social networks are made up of people of a similar demographic to them and this can be limiting. Yes, companies want like-minded people, but they do not want clones.

For this reason, transparency is hugely important. A potential employee must be able to easily find out about a company and determine whether the company shares his or her values and is a good fit.

Social media not only makes it easier for a prospect to find out this information, it makes it absolutely inevitable. All that remains is for employers to ensure that when the right recruit looks at their business through the lens of social media they like what they see.

The real power of social media in the recruitment space is that it works both ways. Not only can employers find out more about potential recruits and access talent that may otherwise have been unavailable to them, but prospects can learn more about the inner workings of a business than ever before. The end result should be more right fits more of the time.

Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June

Online Apps DBA - Tue, 2015-06-02 16:15


I am starting a new batch of Oracle Fusion Middleware training for Architects/Administrators on 6th June 2015. To maintain quality, we’ve restricted the number of trainees in this batch, and only the last 2 seats are left.

Course/Training Name : Oracle Fusion Middleware Administration (WLS, SOA, OBIEE, WCS, OHS, SSL, etc )

Course/Training Dates : 6, 7, 13, 14, 20, 21, 27, 28 June (9 -11 AM PST) followed by 4 hours of lab exercise (We provide dedicated machine on cloud and step by step lab exercises).


100% Money Back Guarantee: We’re ready to prove everything we claim. Buy the course right now and see it for yourself within the first 2 sessions. If the course is not right for you, just let us know after 2 sessions and we’ll refund your money in full within the next 72 hours, no questions asked.

Still worried about your money?
Call Us: US +1 408 627 8021 |  INDIA +91 85 880 97686 (9 AM-5PM GMT only)

Mail Us: support [at]

Claim your seat by registering for Oracle Fusion Middleware Course from last 2 available seats.

The post Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

How to disable a SQL Plan Directive permanently

Yann Neuhaus - Tue, 2015-06-02 15:10

In 12c you will see a lot of SQL Plan Directives. Some are useful to get better execution plans, but some will trigger too much Adaptive Dynamic Sampling, which can become a big overhead, especially in Standard Edition. You don't want to drop them - they would only reappear. You can disable them instead, but what will happen after the retention weeks? Let's test it.

Disabled directive

A directive has been created which triggers too expensive dynamic sampling. You didn't want that, so you disabled it a year ago with:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
and everything is good. You're happy with that. Here is the directive:
SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 28-APR-14 YES       NO      {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
The directive has not been used since April 2014, thanks to ENABLED being set to NO.

If I run a query with a filter on those columns:

SQL> select count(*) Q1 from DEMO_TABLE where a+b=c+d;


23:10:32 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

select count(*) Q1 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

| Id  | Operation          | Name       |
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |

there is no dynamic sampling, and this is exactly what I want.

Retention weeks

My retention is the default: 53 weeks. Let's see what happens after 53 weeks. I can call the 'auto drop' job with dbms_spd.drop_sql_plan_directive passing a null instead of a directive_id:

SQL> exec dbms_spd.drop_sql_plan_directive(null);

PL/SQL procedure successfully completed.
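As a side note, the retention window itself can be inspected and changed through the dbms_spd preferences. Here is a minimal sketch (SPD_RETENTION_WEEKS is the relevant parameter; suitable privileges are assumed):

```sql
-- Show the current SQL Plan Directive retention (default: 53 weeks)
select dbms_spd.get_prefs('SPD_RETENTION_WEEKS') retention from dual;

-- Shorten it, for example to 10 weeks
exec dbms_spd.set_prefs('SPD_RETENTION_WEEKS','10');
```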

Run a few queries

Then let's have a few queries on those table columns:

SQL> select count(*) Q2 from DEMO_TABLE where a+b=c+d;


SQL> select count(*) Q3 from DEMO_TABLE where a+b=c+d;


and check the execution plan:

SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +note'));

select count(*) Q3 from DEMO_TABLE where a+b=c+d

Plan hash value: 1839825454

| Id  | Operation          | Name       |
|   0 | SELECT STATEMENT   |            |
|   1 |  SORT AGGREGATE    |            |

   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

A directive has been used:
 SQL> select directive_id,state,last_used,auto_drop,enabled,extract(notes,'/spd_note/spd_text/text()') spd_text,extract(notes,'/spd_note/internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID STATE      LAST_USED AUTO_DROP ENABLED SPD_TEXT                         INTERNAL_S
-------------------- ---------- --------- --------- ------- -------------------------------- ----------
14130932452940503769 SUPERSEDED 15-MAY-15 YES       YES     {E(DEMO.DEMO_TABLE)[A, B, C, D]} HAS_STATS
Oh! The directive is back and enabled!

Auto Drop

Here are the criteria for auto-drop. SPDs are considered for dropping when AUTO_DROP is YES and either:

  • SPD is flagged as redundant
  • One of the tables has been dropped (in recycle_bin means dropped)
  • LAST_USAGE is from before the retention window
  • State is NEW (LAST_USED is null) and CREATED is before retention window
Do you see? Nothing about the ENABLED YES/NO there...


If you want to disable an SPD and be sure that it will never reappear, then you have to do both of the following:

SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'ENABLED','NO');
SQL> exec dbms_spd.alter_sql_plan_directive(14130932452940503769,'AUTO_DROP','NO');
Then, because AUTO_DROP is disabled, the directive will never be dropped automatically.
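To verify that both flags are now in the state you want, the dictionary view used earlier in this post can be queried again (a sketch, reusing the same directive_id):

```sql
select directive_id, state, enabled, auto_drop
from   dba_sql_plan_directives
where  directive_id = 14130932452940503769;
```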

Oracle and Adaptive Case Management: Part 1

Jan Kettenis - Tue, 2015-06-02 14:14
In this blog posting I address four key concepts that are used in Oracle Adaptive Case Management, or ACM for short. This article is the first in a series on ACM.

Recently I was involved in an Oracle Adaptive Case Management (ACM) project. Although some people involved knew about case management in general, it turned out that not everyone immediately understood how case management works with Oracle ACM. As you may be one of them, I will walk you through some of the concepts, using a format that differs from what I have seen so far and that seemed to work well for my audience.

I will not discuss the more general concept of case management. There are sufficient other references that probably do a better job than I could (for example Case Management Model and Notation, or CMMN for short, as defined by the Object Management Group). For this article I will restrict myself to explaining that, unlike a "normal" BPMN process, case management supports a much more flexible "flow" of a process, for example supporting paths (flows) that were not thought of before, activity types that were not identified before, as well as stakeholders that were not yet known during the initial design. The "A" of Adaptive in ACM refers to the fact that some of this behavior can be configured at run time (after the system is out of development).

A typical example used in the context of case management is a complaints management process. Depending on the nature of the complaint, such a process can go back and forth a couple of times when more information about the complaint becomes available, coming from the filer or after consultations of experts that were not yet recognized.

Case Life Cycle
The first concept to discuss is that of a Case Life Cycle. A case can have the states open, suspended, and closed. Unlike a BPMN process, a case can be reopened again after being closed (if required).

Milestones
The second concept is that of Milestones. In real life, you are probably used to defining a single milestone for every phase or "stage" (as it is called in CMMN) of a case, marking a significant result reached at the end of it. With case management there may be milestones that are not always applicable, and therefore one should be able to skip them, or even close them halfway. It may also be necessary to revoke a previously reached milestone.

For Oracle ACM it therefore typically works best to identify the beginning and end of a stage as milestones. So instead of defining 1 milestone "Complaint Evaluated" you define 2 milestones, "Complaint Evaluation Started" and "Complaint Evaluation Ended". With Oracle ACM one can flag a milestone as being reached at any point during the stage.

Especially unpredictable behavior concerning the way milestones are reached is something that is very hard to model in BPMN.

The third concept to discuss is that of Activities. From the perspective of the case, an activity is an atomic step that happens within a stage (in CMMN this is called a "task"). Normally, during its execution the case manager is not interested in the way the activity is executed, only in the result. Think for example about the consultation of an external expert: the only thing the complaints manager should be interested in is the expert's report.

Within a stage an activity can be mandatory (Activity 1), for example the Initial Complaints Assessment, or optional (Activity 5), for example Request Info from Filer. Some activities may happen more than once within a stage (Activity 4), for example Request Expert Input. An activity may also apply to more than one stage (Activity 3), for example Update Complaints Manager. Activities may be sequential (Activity 1 and 2) or parallel (Activity 3 is parallel with 1 and 2). There can also be ad-hoc activities, like for example notifying some external party that turns out to be a stakeholder of the complaint.

As I will explain in the next article, an activity can be anything from a simple human task to perform, to a complete and even complex business process of its own.

The fourth and last concept that I would like to discuss in this article, is that of Stakeholders. In real life a stakeholder is anyone that has some interest in the case, but may not always be involved as a case worker (someone doing something for the case) or case reviewer.

In Oracle ACM however, you are only interested in stakeholders that are case workers or case reviewers (very similar to the CMMN notion of case workers, except that CMMN does not recognize case reviewers). As I will explain later, with Oracle ACM there can still be people that play a part in a case while not being defined as a stakeholder.

Handling the Leap Second – Linux

Pythian Group - Tue, 2015-06-02 13:31

Last week I published a blog post titled “Are You Ready For the Leap Second?“, and by looking at the blog statistics I could tell that many of you read it, and that’s good, because you became aware of the risks that the leap second on June 30th, 2015 introduces. On the other hand, I must admit I didn’t provide clear instructions that you could use to avoid all possible scenarios. I’ve been looking into this for a good while and I think the official RedHat announcements and My Oracle Support notes are confusing. This blog post is my attempt to explain how to avoid the possible issues.

Update (June 9th, 2015): Made it clear in the text below that ntp’s slewing mode (ntp -x) is mandatory for Oracle Grid Infrastructure and therefore for RAC too.

The complexity of solving these problems comes from the fact that there are multiple contributing factors. The behavior of the system will depend on a combination of these factors.
In the coming sections I’ll try to explain what exactly you should pay attention to and what you should do to avoid problems. The content of this post is fully theoretical and based on the documentation I’ve read. I have NOT tested it, so it may behave differently. Please, if you notice any nonsense in what I’m writing, let me know by leaving a comment!

1. Collect the data

The following information will be required for you to understand what you’re dealing with:

  1. OS version and kernel version:
    $ cat /etc/issue
    Oracle Linux Server release 6.4
    Kernel \r on an \m
    $ uname -r
  2. Is NTP used and which version of NTP is installed:
    $ ps -ef | grep ntp
    oracle    1627  1598  0 02:06 pts/0    00:00:00 grep ntp
    ntp       7419     1  0 May17 ?        00:00:17 ntpd -u ntp:ntp -p /var/run/ -g
    $ rpm -qa | grep ntp-
  3. Version of tzdata and the configuration of /etc/localtime:
    $ rpm -qa | grep tzdata-
    $ file /etc/localtime
    /etc/localtime: timezone data, version 2, 5 gmt time flags, 5 std time flags, no leap seconds, 235 transition times, 5 abbreviation chars
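If you need to collect this from many servers, the checks above can be wrapped in one small script. A rough sketch (the report helper and its "(not available)" fallback are my own, not standard tooling; anything absent or returning nothing useful simply shows the fallback):

```shell
#!/bin/sh
# Gather the leap-second-relevant facts from section 1 in one go.
report() {
    echo "== $1 =="
    shift
    # Run the command only if it exists; fall back to a placeholder otherwise
    command -v "$1" >/dev/null 2>&1 && "$@" 2>/dev/null || echo "(not available)"
}
report "Kernel"        uname -r
report "NTP processes" pgrep -l ntpd
report "NTP package"   rpm -qa "ntp*"
report "tzdata"        rpm -qa "tzdata*"
report "localtime"     file /etc/localtime
```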
2. Check the kernel

Here’s a number of bugs that are related to leap second handling on Linux:

  1. System hangs on printing the leap second insertion message – This bug will hang your server at the time when the NTP notifies kernel about the leap second, and that can happen anytime on the day before the leap second (in our case anytime on June 30th, 2015). It’s fixed in kernel-2.6.9-89.EL (RHEL4) and kernel-2.6.18-164.el5 (RHEL5).
  2. Systems hang due to leap-second livelock – Because of this bug systems repeatedly crash due to NMI Watchdog detecting a hang. This becomes effective when the leap second is added. The note doesn’t exactly specify which versions fix the bug.
  3. Why is there high CPU usage after inserting the leap second? – This bug causes futex-active applications (e.g. Java) to start consuming 100% CPU. Based on what’s discussed in this email in the Linux Kernel Mailing List Archive, it’s triggered by a mismatch between the timekeeping and hrtimer structures, which the leap second introduces. The document again does not clearly specify which versions fix the problem; however, this “Kernel Bug Fix Update” mentions these symptoms to be fixed in 2.6.32-279.5.2.el6.

MOS Note: “How Leap Second Affects the OS Clock on Linux and Oracle VM (Doc ID 1453523.1)” mentions that kernels 2.4 to 2.6.39 are affected, but I’d like to know the exact versions. I’ve searched a lot, but I haven’t found much, so here are the ones that I did find:

I’m quite sure by reading this you’re thinking: “What a mess!”. And that’s true. I believe, the safest approach is to be on kernel 2.6.39-200.29.3 or higher.

3. NTP is used

You’re using NTP if the ntpd process is running. In the outputs displayed above it’s running and has the following arguments: ntpd -u ntp:ntp -p /var/run/ -g. The behavior of the system during the leap second depends on which version of NTP you use and what’s the environment.

  • ntp-4.2.2p1-9 or higher (but not ntp-4.2.6p5-19.el7, ntp-4.2.6p5-1.el6 and ntp-4.2.6p5-2.el6_6) configured in slew mode (with option “-x”) – The leap second is not added by the kernel; instead, the extra time is added by increasing the length of each second over a ~2000-second period, based on the difference between the server’s time and the time from NTP after the leap second. The clock is never turned backward. This is the configuration you want because:
    • Time never goes back, so there will be no impact to the application logic.
    • Strange time values like 23:59:60 are not used, so you won’t hit any DATE and TIMESTAMP datatype limitation issues.
    • As the leap second is not actually added, it should be possible to avoid all 3 kernel bugs that I mentioned by using this configuration. In many cases updating NTP is much simpler than a kernel upgrade, so if you’re still on an affected kernel use this option to bypass the bugs.

    The drawbacks of this configuration are related to the fact that the leap second is smeared out over a longer period of time:

    • This probably is not usable for applications requiring very accurate time.
    • This may not be usable for some clusters where all nodes must have exactly the same clock time, because NTP updates are usually received every 1 to 18 minutes; adding the ~2000 seconds of time adjustment in slew mode, the clocks could be off for as long as ~50 minutes. Please note, the slewing mode (ntp -x) is mandatory for Oracle Grid Infrastructure, as documented in the Oracle Grid Infrastructure Installation Guides 11g Release 2 and 12c Release 1.
  • ntp-4.2.2p1-9 or higher configured without slew mode (no “-x” option) – The NTP will notify the kernel about the upcoming leap second some time during June 30th, and the leap second will be added as an extra “23:59:59” second (time goes backward by one second). You will want to be on a kernel with all fixes present.
  • below ntp-4.2.2p1-9 – The NTP will notify the kernel about the upcoming leap second some time during June 30th, and depending on the environment, the leap second will be added as an extra “23:59:59” second (time goes backward by one second), or the time will freeze for one second at midnight.

Extra precaution: if you’re running NTP make sure your /etc/localtime does not include leap seconds by running “file /etc/localtime” and confirming it lists message “no leap seconds”.

4. NTP is NOT used

If NTP is not used, the time is managed locally by the server. The time is most likely off already, so I really do recommend enabling NTP in slew mode as described above; this is the right moment to do so.

If you have tzdata-2015a or higher installed, the information about the leap second on June 30th, 2015 is also available locally on the server, but that doesn’t yet mean it’s going to be added. Also, if NTP is not used and the leap second is added locally, it will appear as “23:59:60”, which is an unsupported value for DATE and TIMESTAMP columns, so this is the configuration you don’t want to use. Here are the different conditions:

  • You’re below tzdata-2015a – the leap second will not be added.
  • You’re on tzdata-2015a or higher and “file /etc/localtime” includes message “X leap seconds”, where X is a number – the leap second will be added as “23:59:60″ and will cause problems for your DATE/TIMESTAMP datatypes. You don’t want this configuration. Disable leap second by copying the appropriate timezone file from /usr/share/zoneinfo over /etc/localtime. It’s a dynamic change, no reboots needed. (Timezone files including the leap seconds are located in /usr/share/zoneinfo<strong>/right</strong>)
  • “file /etc/localtime” includes message “no leap seconds” – the leap second will not be added.
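The /etc/localtime check can be scripted too. A minimal sketch, where the parsing relies only on the “leap seconds” / “no leap seconds” messages quoted in this section:

```shell
#!/bin/sh
# Decide from the `file /etc/localtime` output whether leap seconds are
# embedded (a configuration you normally do NOT want when NTP is in use).
check_leap() {
    case "$1" in
        *"no leap seconds"*) echo "OK: no leap seconds embedded" ;;
        *"leap seconds"*)    echo "WARNING: leap seconds embedded - copy the matching file from /usr/share/zoneinfo over /etc/localtime" ;;
        *)                   echo "UNKNOWN: could not parse" ;;
    esac
}
check_leap "$(file /etc/localtime 2>/dev/null || echo unknown)"
```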
The recommendations

Again I must say this is a theoretical summary on how to avoid leap second issues on Linux, based on what’s written above. Make sure you think about it before implementing as you’re the one who knows your own systems:

  • Single node servers, or clusters where time between nodes can differ – Upgrade to ntp-4.2.2p1-9 or higher and configure it in slew mode (option “-x”). This should avoid the kernel bugs too, but due to lack of accurate documentation it’s still safer to be on kernel 2.6.39-200.29.3 or higher.
  • Clusters or applications with very accurate time requirements – NTP with slew mode is not suitable, as it’s unpredictable when it will start adjusting the time on each server. You want to be on kernel 2.6.39-200.29.3 or higher. NTP should be enabled. The leap second will be added as an extra “23:59:59” second (the time will go backward by one second). Oracle Database/Clusterware should detect the time drifting and deal with it. Check MOS for any bugs related to time drifting for the versions you’re running.
  • I don’t care about the time accuracy, I can’t update any packages, but need my systems up at any cost – The simplest solution to this is stopping the NTP on June 29th and starting it up on July 1st, so that the server was left unaware of the leap second. Also, you need to make sure the /etc/localtime does not contain the leap second for June 30th, 2015 as explained above.
    -- on June 29th (UTC)
    # /etc/init.d/ntpd stop
    # date -s "`date`"    (reset the system clock)
    -- on July 1st (UTC)
    # /etc/init.d/ntpd start
  • Very accurate time requirements + time reduction is not allowed – I don’t know. I can’t see how this can be implemented. Does anyone have any ideas?
Post Scriptum

Initially I couldn’t understand why this extra second caused so much trouble. Don’t we change the time by a round hour twice a year without any issues? I found the answers during the research, and it’s obvious. Servers work in UTC time, which does not have daylight saving time changes. The timezone information is added just for representation purposes later on. UTC Time is continuous and predictable, but the leap second is something which breaks this normal continuity and that’s why it is so difficult to handle it. It’s also a known fact that Oracle Databases rely heavily on gettimeofday() system calls and these work in UTC too.


Discover more about Pythian’s Oracle Ace Maris Elsins.

Categories: DBA Blogs

Predicate Order

Jonathan Lewis - Tue, 2015-06-02 12:10

A recent OTN post demonstrated a very important point about looking at execution plans – especially when you don’t use the right data types. The question was:

We’ve this query which throws invalid number

WHERE A.corporate_id IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';

However it works fine if we use not in instead of in

SELECT * FROM table A  
WHERE A.corporate_id  NOT IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';

Please assist.

A follow-up post told us that corporate_id was a varchar() type – so the root cause of the ORA-01722: invalid number error is simply that you shouldn’t be mixing data types. Either the corporate_id should have been defined as numeric or the in-list should have been a list of varchar2() values. (And, of course, the character strings that look like dates should have been converted explicitly to date data types using either the to_date() function with a 4-digit year or the date ‘yyyy-mm-dd’ syntax; and using “created_dt >=  19th Dec and created_dt < 26th Dec” would have given the optimizer a chance to get a better cardinality estimate)

The answer to the slightly more specific problem – why does changing NOT IN to IN allow the query to run rather than crashing – is (probably) one that I first addressed in an article in Oracle Magazine just over eleven years ago: with CPU costing enabled Oracle can change the order in which it applies filter predicates to a table. It’s also a question that can easily be answered by my commonest response to many of the optimizer questions that appear on OTN – look at the execution plan.

In this example it’s a fairly safe bet that there’s a reasonable small volume of data (according to the optimizer’s estimate) where to_number(corporate_id) is one of the required values, and a much larger volume of data where it is not; with some intermediate volume of data where the created_dt falls in the required date range. With CPU costing enabled (optional in 9i, enabled by default in 10g) the optimizer would then do some arithmetic to calculate the most cost-effective order of applying the filter predicates based on things like: the number of CPU cycles it takes to walk along a row to find a particular column. the number of CPU cycles it takes to convert a character column to a number and compare it with a number; the number of CPU cycles it takes truncate a date column and compare it with a string, the number of rows that would pass the numeric test hence requiring the first-applied date test, compared with the number of rows that would survive the first-applied date test hence requiring either the second date test or the numeric test to take place.

Here’s some code to demonstrate the point. It may require the system stats to be set to a particular values to ensure that it is probably repeatable, but there’s probably some flexibility in the range, which is why I’ve called dbms_stats.set_system_stats() in the first few lines:

drop table t1 purge;

create table t1 (
        v1      varchar2(10),
        d1      date
);

insert into t1 values(1,'01-Jan-2015');

insert into t1 values('x','02-Jan-2015');

insert into t1 values(3,'03-Jan-2015');
insert into t1 values(4,'04-Jan-2015');
insert into t1 values(5,'05-Jan-2015');
insert into t1 values(6,'06-Jan-2015');
insert into t1 values(7,'07-Jan-2015');
insert into t1 values(8,'08-Jan-2015');
insert into t1 values(9,'09-Jan-2015');
insert into t1 values(10,'10-Jan-2015');

execute dbms_stats.gather_table_stats(user,'t1');

First we create a table, load some data, and gather stats. You’ll notice that I’ve got a varchar2(10) column into which I’ve inserted numbers for all rows except one where it holds the value ‘x’. Now we just run some code to check the execution plans for a couple of queries.

explain plan for
select  *
from    t1
where   v1 in (4,6)
and     d1 between '03-Jan-2015' and '09-Jan-2015'
;

select * from table(dbms_xplan.display);

explain plan for
select  *
from    t1
where   v1 not in (4,6)
and     d1 between '03-Jan-2015' and '&1-Jan-2015'
;

select * from table(dbms_xplan.display);

As with the original question I’ve taken a query with an IN operator and changed it to NOT IN. The in-list is numeric even though the relevant column is varchar2(10). The first query crashes with ORA-01722: invalid number, the second one runs and returns the correct result. You’ll notice, of course, that the “bad” value for v1 is not in the set of rows where d1 is between 3rd and 9th Jan 2015. You’ll also notice that in my code I’ve used &1 for the end day in the query with the NOT IN clause so that I can re-run the query a few times to show the effects of changing the date range. Here are the execution plans – first with the IN clause:

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     2 |    20 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    20 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter((TO_NUMBER("V1")=4 OR TO_NUMBER("V1")=6) AND
              "D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The optimizer arithmetic predicts 2 rows returned using a full tablescan – it doesn’t know the query is going to crash. Notice the predicate information, though. The first predicate says Oracle will attempt to convert v1 to a number and compare it with 4 and then (if the first test fails) with 6. The query will crash as soon as it hits a row with a non-numeric value for v1. In outline, the optimizer has decided that the numeric conversion and test is very cheap (on CPU) and only a few rows will survive to take the more expensive date comparison; whereas either of the (expensive) date comparisons would leave a lot of rows that would still have to be checked with the numeric test. It makes sense to do the numeric comparison first.
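One way to see (and sidestep) the implicit conversion is to make the literals match the column type. This is a hypothetical rewrite of the first query, not part of the original test, with the in-list values quoted so Oracle compares varchar2 to varchar2 and with explicit TO_DATE() calls:

```sql
-- Sketch: quoting the in-list values keeps the comparison in varchar2,
-- so no TO_NUMBER() is applied to v1 and the 'x' row cannot raise
-- ORA-01722.  The TO_DATE() calls use an explicit 4-digit-year format.
explain plan for
select  *
from    t1
where   v1 in ('4','6')
and     d1 between to_date('03-Jan-2015','dd-Mon-yyyy')
            and    to_date('09-Jan-2015','dd-Mon-yyyy');

select * from table(dbms_xplan.display);
```

With the string literals the predicate section should show comparisons like "V1"='4' rather than TO_NUMBER("V1")=4 – the TO_NUMBER() wrapper in the plan is the tell-tale sign of the conversion.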

Here’s the plan for the query with the NOT IN clause when I set the date range to be 3rd Jan to 7th Jan.

Execution plan for NOT IN:  7th Jan
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     5 |    50 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     5 |    50 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("D1"<=TO_DATE(' 2015-01-07 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

The plan is still a full tablescan – there are no indexes available – and the estimated number of rows has gone up to 5. The important thing, though, is the predicate section. In this case the optimizer has decided that the first thing it will apply is the (relatively expensive) predicate “d1 >= 3rd Jan” before worrying about the “NOT IN” numeric predicate. The optimizer has worked out that almost all the data will survive the NOT IN predicate, so it’s not efficient to apply it before using other predicates that eliminate more data.

By a stroke of luck my simple example happened to be a very good example. Here’s what happened when I set the end date to 8th Jan:

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     6 |    60 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     6 |    60 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "D1"<=TO_DATE(' 2015-01-08 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

The estimated rows has gone up to 6 – but the interesting thing, as before, is the predicate section: in the previous example Oracle did the tests in the order “upper bound”, “lower bound”, “numeric”; in this test it has done “lower bound”, “upper bound”, “numeric”.

And this is what I got when I ran the test with 9th Jan:

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     7 |    70 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     7 |    70 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6 AND
              "D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Again the estimated rows has gone up by one, but the ever-interesting predicate section now shows the evaluation order as: “lower bound”, “numeric”, “upper bound”.

There are only 6 possible orders for the predicate evaluation for the query with the NOT IN clause, and we’ve seen three of them. Three will fail, three will succeed – and I got all three of the successful orders. It wouldn’t take much fiddling around with the data (careful choice of duplicate values, variation in ranges of low and high values, and so on) for me to find a data set where small changes in the requested date range would reproduce all six variations. In fact, when I changed my nls_date_format to “dd-mon-yy” and used a 2-digit year for testing, I got two of the three possible failing predicate evaluation orders – “numeric”, “lower bound”, “higher bound” and “higher bound”, “numeric”, “lower bound” – without changing the data set. (To be able to get all six orders with a single data set I’d probably need a data set where the “bad” v1 value corresponded to a d1 value somewhere near the middle of the d1 range.)

The bottom line – use the correct data types; make sure your date literals are dates with a 4-digit year; and check the predicate section to see whether the optimizer did any implicit conversions with your predicates and what order it used them in. If you don’t do this you may find that a query works perfectly for months, then crashes because you finally got unlucky with the arithmetic.
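As a sketch of that advice in practice (hypothetical table and column names), declaring the columns with the types the data really has removes the whole class of problem:

```sql
-- Sketch: store numbers as NUMBER and dates as DATE, and use ANSI
-- date literals (which always take a 4-digit year), so no implicit
-- conversion – in any evaluation order – can make the query fail.
create table t2 (n1 number, d1 date);

select  *
from    t2
where   n1 not in (4, 6)
and     d1 between date '2015-01-03' and date '2015-01-09';
```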

Partner Corner: TeamInformatics on What Oracle Documents Cloud Service Means for You

WebCenter Team - Tue, 2015-06-02 08:58

Thanks to all who attended last month's executive webcast, Introducing Documents Cloud Service. We appreciate your feedback and all the follow up. For those who missed it or would like a replay of the webcast, you can watch it here.

To follow up on the discussion of Oracle Documents Cloud Service and what that means for your organization, one of our valued partners, TeamInformatics recently published an educational post on what the solution offers and how it can be leveraged to solve your day to day content needs. Given the informational value of the post, we are re-posting their original piece here in the hope that you find it useful. Looking forward to hearing your comments and questions.

Originally published on TeamInformatics blog

What Oracle’s Documents Cloud Service Means for You

By: Jon Chartrand - Solution Architect, TeamInformatics

The sphere of influence that is Enterprise Content Management has been steadily expanding to encompass areas such as records, digital assets, web content, and others. This has meant your ECM solution suite has had to grow and mature to support and maintain these activities. The newest is cloud-based document management, sharing, and collaboration. Now, I bet you’re thinking, “We don’t need that nor do we support that in our enterprise.” Here’s the trick though: Your users are already doing it and they’re very likely making it happen with software that’s not a part of your enterprise ecosystem. That means it’s probably unsupported, potentially insecure, and generally out of your control – not a good combination.

The rapid growth of this field has led to many solutions which attempt to enhance the consumer-level products and businessify them by offering a few more features at a wildly increased price. While these options can seem appealing, they still represent a gap in enterprise coverage as they aren’t themselves enterprise applications. Oracle, however, has expanded their Public Cloud offering – already the largest in the world – to not only fill the gap of Enterprise File Sync & Share, but also to expand cloud content management to your on-premises solutions, as well as mesh seamlessly with other applications. Now it’s possible to keep your users happy and productive while maintaining control and even expanding the capabilities of your enterprise. Introducing Oracle’s Documents Cloud Service, also known as DOCS.

DOCS for File Sync & Share

DOCS represents a trident of capability, the first tine of which is as an enterprise-grade file sync and share replacement for the consumer-grade applications your users may already be utilizing. Before you can sync or share content, however, you have to manage it, and Oracle provides a modern, intuitive web interface, accessible across every device, to do just that. From here users can upload, preview, revise, delete, and share content and folders with ease, making this the front line in our EFSS battle.

On the syncing front, native desktop applications for both Windows and MacOS allow users to seamlessly sync folders of their choosing with the local file system. This means files are available for viewing and editing when and where users demand them and without the need for an Internet connection. When connectivity is restored the sync application automatically updates the cloud with any changes, removing a step for the user.

On the sharing front, sharing content internally and externally has been rendered both simple and secure. Internally, named users can be shared to folders in one of four roles: Manager, Contributor, Downloader, or Reader. This means you have control over who has access and what kind of permissions they receive. When sharing to an external, non-DOCS user, Oracle has provided several capabilities to make the process simple and safe. First, public link accesses are carefully tracked and an audit trail is provided. Each public link can also be assigned an expiration date so you don’t have to worry about forever managing every link that’s been distributed. Even more, each public link can be created with a required passcode so that even if the link is improperly distributed, the materials remain secure. Finally, each public link can be assigned a role which is granted to those who use it. All these features combine to allow incredibly granular control over who can access what content, when, and with what privileges.

The last point is for those on-the-go. For mobile users Oracle provides native applications for both Android and iOS which enable feature-parity between the mobile and web platforms. This means users can access their content from virtually any device, at any time, and maintain the full suite of capabilities no matter what method they’re using. This represents an unprecedented level of access to and control over enterprise content for your users.

DOCS for Hybrid Content Management

File Sync & Share is a great step forward in content management, however we’re still potentially left with a cache of content that stands apart from your Enterprise Content repository. DOCS addresses this through a process whereby your ECM repository is “tethered” to your DOCS repository through a 3rd party solution and content is shuttled between the two applications when edits are made, ensuring both repositories have the appropriate version available. This process allows your current ECM solution to remain the single point of truth in your enterprise for all content but enables users to access that content from beyond the firewall in a safe and secure manner.

The use cases for this method are almost endless but imagine a contract package being worked on by a CMO, a salesperson in the field, and a client with contributor access via a shared link. The CMO, working from within the company, can make edits to the documents and upload them to the ECM system. The salesperson in the field accesses the documents via DOCS and can also make changes and suggestions. As revisions are made, the CMO is kept in the loop as the document updates back to the ECM system as well. Finally, when complete, the client can access the documents, digitally sign them, and upload new versions to DOCS. Within moments of uploading the CMO has access and can move them to the appropriate next step.

Hybrid Content Management takes the premise of EFSS and keeps it a truly enterprise endeavor by ensuring that content is reflective of only one repository. This ensures that all users are working with the same materials without fear of unknown changes or missing versions. It also guarantees that content owned by the enterprise is continually merged into the enterprise so there’s reduced anxiety over content ownership and location.

DOCS for PaaS Application Integration

Finally, DOCS takes an even longer and wider view of its role in the enterprise by enabling you to integrate other Software as a Service (SaaS) applications. The idea here is that any application to which users are uploading content represents another repository in the enterprise. Why should contracts uploaded to SalesForce live in that application? It’s not a content management application and it doesn’t have the metadata, workflows, and processes that your ECM system has. Documents Cloud Service works to solve this issue by providing a rich API foundation and an accessible embedded interface to allow you to merge applications with it and utilize its capabilities as a content platform. This Platform as a Service (PaaS) functionality allows you to keep your enterprise’s content in a single location – especially if you’re utilizing the Hybrid CM capabilities and merging your DOCS repository with your ECM platform.

With the embedded interface method you can add a simple iframe to any updateable UI to create an almost seamless merging of the two applications. While it looks like a user is uploading documents to the primary application, in reality they’re uploading to DOCS. With the API method, much more elaborate services can be written to customize the functionality of virtually any application, creating a background integration with Documents Cloud Service that is completely transparent to users. In either case, you’re removing another disparate cache of content and centralizing management into a single location. Ultimately this means less storage overhead for your SaaS applications and more complete control over your enterprise content.
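In its simplest form, the embedded interface method is just an iframe pointing at a DOCS folder view. The hostname and folder identifier below are purely illustrative placeholders, not real DOCS URLs:

```html
<!-- Sketch: embed a DOCS folder view inside a host application page.
     The service hostname and folder id are hypothetical placeholders. -->
<iframe src="https://example.documents.cloud/documents/embed/folder/F00000001"
        width="100%" height="600" frameborder="0">
</iframe>
```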

Bringing It All Together

Consider a purchase order document uploaded to a contact entity in SalesForce. Through an integration with Documents Cloud Service, the content item is actually seamlessly uploaded to DOCS. With the DOCS repository linked to your on-premises platform, the content is replicated to the appropriate folder in the ECM system and an automatic workflow is started, alerting the Director of Sales to the new purchase order and requesting approval. The Director makes a small edit and approves the content. This sends a notification to the sales agent and ends the workflow. The content, now being newer in the ECM system than on DOCS, then flows outward to the cloud, updating the version there. The sales agent happens to also use the desktop client to sync DOCS content with their laptop and so the version there is updated automatically. On receiving the notification, the agent goes to their Oracle Documents folder on the desktop and opens the purchase order to review the Director’s changes. Satisfied, the agent closes the document and then right-clicks on it to access DOCS’ sharing. The agent creates a public link with downloader privileges and sends this link to the purchaser.

In this scenario, the content is available through the SalesForce site, the DOCS site, the DOCS mobile apps, synced to the desktop, and through the on-premises ECM platform. Instead of having two, three, even four different copies of the content across various systems and on various workstations, all versions are centrally managed and maintained in the system of record. This degree of centralized control is precisely what Enterprise Content Management seeks to achieve, and Documents Cloud Service brings us all one step closer to that goal.

Cassandra Update – Leap Second & JMX Security

Pythian Group - Tue, 2015-06-02 08:45

This is a short post about two things that should be on the to-do list for all Cassandra Administrators. The leap second issue and the new JMX default.

The Leap Second

Before we move on, you should learn more about how the leap second affects Cassandra.

In short, you must update your JVM to version 7u60 or above. If you are on Cassandra 2.0.14+ or 2.1.x then all JVMs on version 8 are safe and tested. One issue that the JVM doesn’t solve is that time-series data might become interleaved. If this is critical for your deployment (it isn’t for most cases), be aware of it.

JMX Security

Since Cassandra 2.0.14 and 2.1.4, the file configures JMX to listen on localhost only. So unless you are fine with this you should enable remote access, while making sure you activate security!

Short version:

    1. Edit $CASSANDRA_CONF/ and set LOCAL_JMX=no
    2. Create /etc/cassandra/jmxremote.password and add the usernames and passwords:
monitorRole QED
controlRole R&D
    3. Change ownership to the user you run Cassandra with and permission to read only:
chown cassandra:cassandra /etc/cassandra/jmxremote.password
chmod 400 /etc/cassandra/jmxremote.password
    4. Add the username defined before with readwrite permission to $JAVA_HOME/lib/management/jmxremote.access:
monitorRole readonly
USERNAME readwrite
controlRole readwrite \
  create javax.management.monitor.*,javax.management.timer.* \
  unregister
    5. Restart Cassandra
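As a minimal sketch of steps 2 and 3 (using a temporary directory as a stand-in for /etc/cassandra, since the real path and the cassandra user may not exist on the machine where you try this):

```shell
# Create the JMX password file and lock its permissions down to
# owner-read-only -- the JVM refuses to start JMX authentication if the
# password file is readable by anyone else.
CONF_DIR=$(mktemp -d)                      # stand-in for /etc/cassandra
PASS_FILE="$CONF_DIR/jmxremote.password"
printf 'monitorRole QED\ncontrolRole R&D\n' > "$PASS_FILE"
chmod 400 "$PASS_FILE"
# On a real node you would also run (requires root and a cassandra user):
#   chown cassandra:cassandra /etc/cassandra/jmxremote.password
stat -c '%a' "$PASS_FILE"
```

On a GNU/Linux system the final `stat` call reports the octal mode, which should be 400.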


Want to learn more about our Cassandra services? Contact us now!

Categories: DBA Blogs

Old Folks Boogie

Floyd Teter - Tue, 2015-06-02 08:42
And you knowThat you're over the hillWhen your mind makes a promiseThat your body can't fill...             - From Little Feat's "Old Folks Boogie"
I think I must be over the hill...a grumpy old man.  There was a time when, faced with an app that failed to work as promised, I would fuss and fight with that app to make it work.  No more.  Now, when an app doesn't work as promised, I delete it and move on to try something else.  No patience anymore.  I continually tell myself that I'll put the "fixer" hat back on my head, but I just never get around to actually doing so.
Mobile apps are the best example of my impatience.  There are many mobile apps for any outcome I care to achieve:  mobile meetings, tracking my heart rate, listening to music, taking notes... Plenty of alternatives.  So, when I run into an app that fails to work (or even fails to meet my expectations), I immediately junk it and move on to the next choice.  No effort, no feedback to the app developer, no nothing.  Just junked.  As my newest daughter-in-law would say: "ain't nobody got time for that".
In today's market, there is an expectation that apps just work.  Buggy apps die quick deaths in the market.  Reliability is not something's simply a requirement to get a seat at the table.
Classic example.  Last week, I was in Kansas visiting my new granddaughter.  Having taken our pet dogs on the trip, I wanted to find the local dog park.  Google Maps failed to find the dog park recommended by my son...the town in Kansas is just too small for Google Maps to fuss with.  Waze took me right to it.  Any guess as to which app is still on my smartphone and which one got junked on the spot?
Of course, there is a downside.  If everyone took my approach, the developers would never get feedback on their app from the field.  So their app would never improve.  But I have a "grumpy old man" response for that too.  So what?  Why should I be the beta tester?  Build something that works in the first place.
So yeah, I have that grumpy old man attitude when it comes to apps...especially mobile apps.  It either meets my expectations or gets kicked to the curb without further thought.  If I don't immediately get the outcome I'm expecting, I move on.
What about you?  Are you another member of the figurative "grumpy old man" or "grumpy old woman" club (no gender bias here - we accept everyone)?  Or are you willing to provide feedback and work with an app to help make it better?  Respond in the comments.

Recap SharePoint Event Paris 2015

Yann Neuhaus - Tue, 2015-06-02 03:12



Guillaume Meunier and I went to the SharePoint Event 2015 in Paris on Saturday, May 30.

This event was well organized and on time! We learned about the new features in SharePoint related to BI, SQL, governance, collaboration & communication, and workflow and process systems.

The sessions we followed are:

How OneDrive Company revolutionized storing files in my business
Almost all companies use SMB file shares and network drives deployed to workstations to store and centralize the documents they produce. With SharePoint 2013, on-premises or online, you can deliver the same capability while adding mobility, an offline mode, and a full web mode. In a document management implementation, the first step is to store and organize documents. In addition, during a quality audit, the auditor's work becomes that much easier. This session was presented by Kevin Trelohan and Hans Brender.


Making SharePoint Governance work for Businesses, IT and Users
With every SharePoint implementation comes the issue of governance. We all need it, but no one wants to do it. This session showed us - in a practical way - how to implement a good governance practice that will engage users, IT, and business users throughout the entire lifetime of your SharePoint platform. We had an overview of practical tools and methods to overcome most issues we have to deal with, and a complete framework for SharePoint governance was shared. This session was the latest revision of the SharePoint governance session delivered by Anders Skjoenaa.


Integrating SharePoint into everyday working methods for a successful user adoption
For any organization, a SharePoint implementation means a significant investment, both human and financial.
Once the platform is deployed and the content migration achieved, many companies face the same issue: low usage by their business users.
AvePoint showed us their solutions to enhance business users' daily methods: better management of internal meetings and projects directly in SharePoint, sharing content from SharePoint with external stakeholders, using Outlook/Office to save documents directly into SharePoint, and a portal offering individualized service catalogs depending on business needs.
A session presented by Emmanuel Deletang from AvePoint.


High Availability & Disaster Recovery in SharePoint 2013 with SQL Server Always On Availability Groups!
SQL Server is really the brain of SharePoint; in this session, Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP) gave us an overview of what any SharePoint consultant and DBA need to know regarding business continuity in SharePoint 2013. Of course SQL Server plays a major role in this story.
Topics covered:

  • Concepts of business continuity
  • SharePoint and Business continuity
  • Patterns and anti-patterns
  • SharePoint and SQL Server Always on Availability groups: what works, what doesn’t work (demos) (HA and DR)


If you need more information regarding SQL Server, please feel free to contact our experts: Stephane Haby, David Barbarin & Stephane Savorgano.

Power BI 365
Power BI is THE BI brick of Office 365.
Resolutely oriented toward self-service BI, it is intended for users who handle data in their line of business. But how, when you are not a Business Intelligence specialist?
This session made a complete tour of the Power BI functionalities: Power Query, Power Pivot, Power View, Power Map, Power Q&A, Power BI Sites, and the Power BI app. It let us discover and understand the added value of self-service BI for users.

A quick reminder: dbi services BI specialist is Matthieu Munch, do not hesitate to contact him if you need more information regarding BI tools and consulting.


Automate processes with SharePoint?
When talking about business process automation or BPM solutions, the limits of SharePoint are often reached. There are then several possibilities: develop custom solutions, reshape the customer's needs, or use third-party solutions (such as Nintex). There is never a single right way to go, but the most successful processes follow best practices that drive satisfaction for the business owner. We had a presentation of various methods and their impact on costs, and the capabilities and constraints they induce.


The last session was driven by the whole MVP team; it was a summary of the MS Ignite sessions covering the new features of SharePoint 2016.

Please have a look at this blog regarding this subject: SharePoint 2016: What’s new? And What to expect?

SAP HANA SQL scripting optimization: the CE Functions

Yann Neuhaus - Tue, 2015-06-02 02:15

In SAP HANA, you have two possibilities to create the Calculated Views:

  • Using the graphical method
  • Using the scripting method with CE functions

In this blog, I will demonstrate that CE functions can improve the performance of a Calculated View.

First, I will give you some general information regarding the CE functions. After that, I will show you the two different ways to create a Calculated View. And at the end, I will compare their performance using a SQL SELECT.

If you want more information regarding the SAP HANA technology, don't hesitate to attend the next dbi services event:

CE Functions

The CE functions encapsulate data-transformation functionalities. They constitute an alternative to using SQL statements, as their logic is directly implemented in the SAP HANA CALC engine. Direct use of the CALC engine allows implementers to influence the execution of a procedure or a query which, in some cases, is more efficient.
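As an illustration of what such a script body looks like (a minimal sketch with hypothetical schema, table, and column names; the exact operators available vary between SAP HANA revisions), CE functions are chained through table variables:

```sql
-- Sketch of a scripted Calculation View body using CE functions.
-- "MYSCHEMA"."SALES" and its columns are hypothetical placeholders.
tab     = CE_COLUMN_TABLE("MYSCHEMA"."SALES", ["CUSTOMER_ID", "AMOUNT"]);
-- Keep only positive amounts (filter expression passed as a string)
proj    = CE_PROJECTION(:tab, ["CUSTOMER_ID", "AMOUNT"], '"AMOUNT" > 0');
-- Aggregate the amount per customer into the view's output variable
var_out = CE_AGGREGATION(:proj, [SUM("AMOUNT") AS "AMOUNT"], ["CUSTOMER_ID"]);
```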

In the table below, you can find the list of the CE Functions and their use cases:


How to create a Calculation View

As I said at the beginning, you have two methods to create a Calculation View in SAP HANA:

  • Using the graphical method
  • Using the Scripting method
Using the graphical Method

Right click on a package and select “New / Calculation View”


Select the “Graphical” type


Create your view following the steps below

        1. Select type of operation you want to perform

        2. Select your sources

        3. Join your sources

        4. Select the object you want to use in your Calculation view

        5. Join your “Aggregation operation” to the “Aggregation” box


Create your Calculation View Layout

          1. Click on the “Aggregation” box

          2. Select the object you want to have in your layout

          3. Check if the object are selected and add, if needed, calculated columns


 Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view


Using the “SQL scripting” Method

Right click on a package and select “New / Calculation View”


Select the “SQL Script” type


Create your view following the steps below

          1. Click on the “Script View” box

          2. Type our SQL script

          3. Introduce the CE functions


Check the objects

          1. Click on “Semantics” box

          2. Select the type of the objects

          3. Validate and activate the view


SQL Performance comparison

Goal of the test

In this part, I will compare the SQL performance from two calculated views that have been built with the two different methods:

  • “Graphical” method
  • “SQL scripting” method
Description of the test

The same SELECT query will be sent to the database and we will check the server response time. The two SELECT statements will use a different calculated view as their data source:

  • CAL_AMOUNT (graphical method)
  • CAL_AMOUNT_CE (SQL Scripting method)
Test with the “Graphical” calculated view

SQL Query




Test with the “SQL Scripting” calculated view

SQL Query




Performance decoding

Why does the use of these CE functions boost the performance of your queries? The explanation lies in the query execution plan and especially in the use of the CALC engine of the SAP HANA database.

When you send a “normal” SQL query to the SAP HANA database, the CALC engine is not used. The SQL parser sends the query directly to the database optimizer, which optimizes the execution of the query (1).


When your SELECT query uses a calculated view with CE functions, the CALC engine optimizes the calculation model.


In our case, when we analyze the “normal” SQL query, the “calculation search” task has been split into 3 different sub-queries that cannot start at the same time.

But with the optimized calculated view, we can see that there are only 2 sub-queries.



The use of CE functions in the creation of calculated views can significantly accelerate the execution of your SQL queries. The CALC engine of SAP HANA is optimized to use these functions.

There is only one restriction when using this kind of function: performance will drop dramatically if you try to create a SQL query mixing “normal” and “optimized” calculated views.












APEX 5.0 - Run Applications in New Tabs Part 2

Denes Kubicek - Tue, 2015-06-02 01:44
In this blog posting I showed how to get the Firefox browser to work with tabs in APEX 5.0. It looks like APEX 5.0 was "optimized" to work in Google Chrome, and other browsers like IE and Firefox were not fully considered. I was able to make it work using the settings I described in the first post. However, the old style popup windows would also open in a new tab. Yesterday I discovered a feature I wasn't aware of. Changing the settings using that feature helped me to get everything working as it did before. Here is what I did:
  • 1. I uninstalled all Tab-Addons in Firefox.
  • 2. The standard settings in Tools / Options / Tabs are shown in the following screenshot:
  • 3. I installed Tab Mix Plus Add-On and changed the settings as in the screenshot:
  • 4. Finally, I changed the preferences in my workspace:
Now, clicking on the Run Application button automatically opens the application in a new tab:

and the classic popups like image popup would open in a new window:

The only thing is that you need to set the options for each workspace you use.

Categories: Development

Contributions by Angela Golla,

Oracle Infogram - Mon, 2015-06-01 11:01
Contributions by Angela Golla, Infogram Deputy Editor

Oracle OpenWorld 2015 / San Francisco October 25th - 29th
99.5% of Attendees Said Oracle OpenWorld Delivers on Their Objectives
Critical objectives like:
  • Discovering new product and technology solutions
  • Improving technical proficiency and knowledge
  • Learning tips and tricks for currently installed products
  • Understanding where the industry is headed
You probably already know that your investment of time and money in Oracle OpenWorld will pay for itself and then some. Register before August 30th to receive hundreds of dollars in early bird discounts.

Sumner Technologies: Take Two

Scott Spendolini - Mon, 2015-06-01 10:44

About a month ago, I left my position at Accenture Enkitec Group. I had a couple of ideas as to what I wanted to do next, but nothing was 100% solid.  After considering a couple of different options, I'm happy to announce that together with Doug Gault & Tim St. Hilaire, we're re-launching Sumner Technologies.

Much like last time, the focus will be on Oracle APEX; but we’re going to refine that focus a little bit.  In addition to traditional consulting, we’re going to focus more on higher-level services, such as security reviews and APEX health checks, as well as produce a library of on-demand training content.  APEX has matured tremendously over the past few years, and we feel that these services will complement the needs of the marketplace.

It’s exciting to be starting things over, so to speak.  Lots will be the same, but even more will be different.  There’s a lot of work to be done (yes, I know the site is not in APEX - yet), but we’re excited at the potential of what we’re going to offer APEX customers, as the APEX marketplace is not only more mature, but it’s also grown and will continue to do so.

Feel free to check out what we’re up to on Facebook, Twitter, LinkedIn and our website.  Or find us at KScope in a couple of weeks!

User Engagement – Why Does it Matter?

Rittman Mead Consulting - Mon, 2015-06-01 09:25

The value of any system can be measured by how often you interact with it. The perfect example is Slack, which is now seen as the most successful collaboration app ever and is valued at $1.2bn. The key metric for this is DAU/MAU (the ratio of daily users over monthly users), which scales from 0 (no visits) to 1 (every user visits daily) based on how active your users are.
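The ratio is straightforward to compute from any usage log. This is a hedged sketch assuming a hypothetical USAGE_LOG(user_id, visit_date) table, written in Oracle-style SQL to match the systems discussed here:

```sql
-- Sketch: DAU/MAU over the last 30 days from a hypothetical USAGE_LOG
-- table.  Average daily distinct users divided by monthly distinct
-- users; 1.0 means every monthly user visits every day.
select  d.avg_dau / m.mau as dau_mau_ratio
from   (select avg(dau) as avg_dau
        from  (select visit_date, count(distinct user_id) as dau
               from   usage_log
               where  visit_date > sysdate - 30
               group  by visit_date)) d,
       (select count(distinct user_id) as mau
        from   usage_log
        where  visit_date > sysdate - 30) m;
```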

How does this relate to your BI or analytics system? We believe that the same concept applies. The more you interact with your BI/analytics system, the more valuable it is to your organisation.

Return on investment (ROI) of BI/analytics systems has always been hard to measure. There is a widespread belief that the battle is won or lost during the development stage of the lifecycle, so agile project approaches, stakeholder involvement, and effective change management programmes are employed to align the system more closely with users’ true requirements. However, even that may not drive users to use the system.

What can you do if your system is already live?

Focus on user engagement. The BI Scorecard describes typical BI/analytics systems as having only 22% user adoption. Gartner, in its The Consumerization of BI Drives Greater Adoption paper, puts the typical user adoption rate at 30%. This leaves a lot of room for improvement.

We believe that there are 5 drivers for user engagement:

  • User interface and user experience
  • Quality, relevance, and confidence in data
  • Performance
  • Ability to use the system
  • Accessibility – is the system available in the right way, at the right time?

Addressing these can drive low cost and low risk improvements to the user engagement of your system and, in turn, increase its value.

Over the next few months, we are focusing on user engagement and will be launching a set of articles and initiatives to assist organisations. We will also launch a User Engagement Service that baselines your current user engagement levels, makes a series of recommendations for improvements based on the drivers above, implements them, and then measures their impact on the system. The result is a demonstrably increased ROI on your BI/analytics system.

Alongside this, we are developing a set of tools to extend OBIEE with technologies like D3. If you are interested in hearing more, please sign up to our mailing list below.


Categories: BI & Warehousing

Untrusted X11 forwarding

Laurent Schneider - Mon, 2015-06-01 08:08

I wrote a while ago about my security concerns regarding

xhost +
xterm -display mypc:0

Way back then, I suggested an SSH tunnel. SSH is pretty easy to set up: just enable the X11Forwarding option.

The OpenSSH 3.8 release notes (2004) announced a new default:

ssh(1) now uses untrusted cookies for X11-Forwarding

The ssh_config man page still documents it as the default:

ForwardX11Trusted The default is ‘no’

But it actually isn’t on most *ix derivatives, e.g. Red Hat’s /etc/ssh/ssh_config:

# If this option is set to yes then
# remote X11 clients will have full access
# to the original X11 display. As virtually
# no X11 client supports the untrusted
# mode correctly we set this to yes.
ForwardX11Trusted yes

Who is we?

Okay, let’s go back.

If you use the least safe method, xhost + and xterm -display pc:0, then you grant everybody the right to manipulate X.

If you use trusted ssh, which is the _undocumented_ default on Linux, then you grant this right only to anyone with access to your X authority, most probably stored in the file $HOME/.Xauthority; that means at least root and yourself.

If you trust neither yourself nor root, you could restrict access to your display, preventing an attacker from swapping your mouse buttons or taking a screenshot. But this will probably stop most of your applications from working, and it likely won’t work at all with PuTTY, Reflection, and virtually any other client tool.

If you want to force Trusted mode, use -Y or -o ForwardX11Trusted=yes.

If you want to force Untrusted mode, use -X and -o ForwardX11Trusted=no.

If you use only -X, it may transparently default to the more convenient but less secure -Y behaviour. Sometimes. At least with OpenSSH on Linux. On other Unix / SSH flavours, -X may instead fail with an error message like connection to “localhost:10.0” refused by server. In that case, simply use -Y. In fact, always use -Y if you want Trusted mode.
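To pin the behaviour down regardless of your distribution’s patched default, the option can also be set per-host in the client config. A minimal sketch, using only the options discussed above (the host alias devbox is hypothetical):

```
# ~/.ssh/config -- force untrusted X11 forwarding for one host
# ("devbox" is an illustrative alias, not a real host)
Host devbox
    ForwardX11 yes
    ForwardX11Trusted no
```

With this in place, a plain ssh devbox behaves like ssh -X -o ForwardX11Trusted=no.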

Exclusive: The Future of PeopleSoft with Marc Weintraub

Duncan Davies - Mon, 2015-06-01 08:00

Marc Weintraub (Senior Director of Product Strategy for PeopleSoft) appears in this exclusive Cedar Consulting video and discusses the Future of PeopleSoft with Cedar’s star HR Optimisation guru, Jo Randles.

They cover items such as the size of the PeopleSoft team, competing with cloud-only solutions, avoiding the ‘upgrade for compliance’ trap, self-funding quick wins, co-existence options and putting PeopleSoft in the Cloud.

If you’re curious about the direction that PeopleSoft is taking, or interested in the options available to you now, this is a must watch.

Databases Running in the Cloud

Tim Hall - Mon, 2015-06-01 07:39

I’ve been playing around with running databases in the cloud recently. It’s quite simplistic stuff, just to get a feel for it and investigate the possibilities of using it for some projects at work. Here’s what I’ve got so far.

  • Overview
  • Oracle
  • MySQL
  • SQL Server

It’s hard to differentiate between the cloud providers if you are just using them to provide a VM and self-managing a system on it. It’s just another box provider.

In contrast the DBaaS offerings are much more interesting. I really like what Amazon are doing with RDS for Oracle/MySQL/SQL Server. I think these would work very well for *our* MySQL and SQL Server installations, which tend to be quite simple. I’m not sure I can live with some of the restrictions for RDS for Oracle, but that’s probably because I’m a snobby DBA type, who thinks he knows best. :) The DBaaS for SQL Server on Azure is also really nice. You get less control than the RDS version, but maybe that’s a good thing.

You might have noticed I’ve not written much about Oracle Cloud yet. I should be getting a trial of the platform this month, so I will be able to fill in those gaps then.



Databases Running in the Cloud was first posted on June 1, 2015 at 2:39 pm.

Release of ASU Case Study on e-Literate TV

Michael Feldstein - Mon, 2015-06-01 06:55

By Phil Hill

Today we are thrilled to release the third case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding three episodes from Arizona State University (ASU), a school that is frequently in the news. Rather than just talking about ASU’s problems, we are talking with the ASU people involved. What problems are they trying to solve? How do students view some of the changes? Are faculty being replaced by technology, or are they changing roles? For that matter, how are faculty members involved in designing some of these changes?

You can see all the case studies (each with two or three episodes) at the series link, and you can access individual episodes below.

ASU Case Study: Ambitious Approach to Change in R1 University

ASU Case Study: Rethinking General Education Science for Non-Majors

ASU Case Study: The Changing Role of Faculty and Teaching Assistants

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In the Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release two more case studies over the next month, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.


The post Release of ASU Case Study on e-Literate TV appeared first on e-Literate.