Feed aggregator

Single-Sign-On to Oracle ERP Cloud

Amis Blog - Tue, 2017-03-21 10:04

More and more enterprises are using Single-Sign-On (SSO) for their on-premise applications today, but what if they want to use SSO for their cloud applications as well?

This blog post addresses Single-Sign-On to Oracle ERP Cloud in a hybrid environment.

First of all, let's focus on on-premise SSO and introduce some terminology.

A user (aka principal) wants to have access to a particular service. This service can be found at the Service Provider (SP). The provided services are secured, so the user needs to authenticate first. The Identity Provider (IdP) is able to validate (assert) the authenticity of the user by asking for, for instance, a username and password (or by using other methods).

So for authentication we always have three different roles: User, Service Provider (SP) and Identity Provider (IdP), as shown below.

For Single-Sign-On we should have a centralized IdP and we should have a standardized way to assert the authentication information.

In an on-premise landscape there is plenty of choice for an IdP. Some commonly used ones are: Microsoft Active Directory (AD) (closed source), Oracle Identity & Access Management (closed source) and Shibboleth (open source). For now we assume we are using AD.

Kerberos

The most widely used standard for on-premise SSO is Kerberos. In that case a Kerberos ticket is asserted by the IdP, and that ticket is presented to all the Service Providers to log in.

This Kerberos method is suited for an on-premise landscape, and also suited if the connection to a private cloud is via a VPN (the two are effectively part of the internal network and everything should work fine for the cloud as well). But if we want to integrate a public cloud such as Oracle Fusion Cloud, things get messy.

Arguably, the reason Kerberos isn’t used over the public Internet has less to do with the security of the protocol itself than with the fact that it is an authentication model that doesn’t fit the needs of most “public Internet” applications. Kerberos is a heavy protocol and cannot be used in scenarios where users want to connect to services from unknown clients, as in a typical Internet or public cloud scenario, where the authentication provider typically has no knowledge of the user’s client system.

The main standards to be able to facilitate SSO for the internet are:

  • OpenID
  • OAuth
  • Security Assertion Markup Language (SAML)

SAML 2.0

Oracle Fusion Cloud is based on SAML 2.0, so let’s go on with this standard for now.

Conceptually the SAML handshake looks like Kerberos; you can also see the different roles for User, SP and IdP and the assertion of an SSO ticket.

Identity Federation

But how can we integrate Kerberos with SAML?

Now a different concept comes in: Identity Federation. This means linking and using the identity of a user across several security domains (on-premise and public cloud). In simpler terms, an SP does not necessarily need to obtain and store the user credentials in order to authenticate users. Instead, the SP can use an IdP that already stores them. In our case the IdP is on-premise (Active Directory for example) and the SP is the Oracle Fusion Cloud application.

Now there are two things to be done:

  • The on-premise Kerberos ticket should be translated to SAML, because we want SSO.
  • Trust is needed between the IdP and the SP. Only trusted security domains can access the SP. Trust configuration should be done on both sides (on-premise and cloud).

Translation of the Kerberos ticket is performed by a Security Token Service (STS). This is the broker that sits between an SP and the user. An STS is an issuer of security tokens; “issuer” is often a synonym for an STS. STSs can have different roles: as IdP when they authenticate users, or as Federation Provider (FP) when they sit in the middle of a trust chain and act as “relying party” for other IdPs.

In our case the STS translates Kerberos to SAML; Microsoft Active Directory Federation Services (ADFS) and Oracle Identity Federation (part of the Oracle Identity Governance Suite) are example products that do this.

So the picture looks like this now:

Trust

But how is the trust achieved?

Trust is just metadata about the SP and the IdP. So the metadata from the IdP should be uploaded into Oracle ERP Cloud and vice versa. When you create metadata for the IdP, the IdP entity is added to a circle of trust. A circle of trust is used to group SPs and IdPs in a secure, trusted environment. Other remote provider entities can be added to the circle of trust.

Metadata is defined in XML. An SP uses the metadata to know how to communicate with the IdP and vice versa. Metadata defines things like which services are available, addresses and certificates:

  • Location of its SSO service.
  • An ID identifying the provider.
  • Signature of the metadata and public keys for verifying and encrypting further communication.
  • Information about whether the IdP wants the communication signed or encrypted.

There is no protocol that defines how the exchange is done, but there is no secret information in the metadata, so the XML can be freely distributed by mail or published in clear text on the Internet.

It is, however, highly recommended to protect the metadata from unauthorized modification, as tampering with it could be a good start for a Man-In-The-Middle attack.

The integrity of the metadata can be protected using, for example, digital signatures, or by transporting the metadata over a secure channel.

Metadata can contain lots of other information. For a full description, have a look at the SAML specifications: http://saml.xml.org/saml-specifications

Oracle ERP Cloud Security

Application security in Oracle ERP Cloud consists of two parts:

  1. Oracle Identity Management (OIM) running in the cloud (Oracle Identity Federation is part of this).
  2. Authorization Policy Manager (APM).

Oracle Identity Management is responsible for user account management. Authorization Policy Manager is responsible for the fine-grained SaaS role mapping (aka Entitlements).

See this blog post from Oracle on how this works: http://www.ateam-oracle.com/introduction-to-fusion-applications-roles-concepts/

Remark: the application security in Oracle ERP Cloud will change with R12 and will benefit from the following new capabilities:

  • The separation between OIM and APM disappears. A new simplified Security Console will contain both.
  • Configuration of SSO integration (with the IdP) is simplified and can be performed from a single screen.
  • REST APIs based on SCIM (System for Cross-Domain Identity Management) 2.0 are available for identity synchronization with the IdP.

Another remark: Oracle Identity Cloud Service was released in Q1 2017. It cannot be integrated with Oracle ERP Cloud yet, because its Identity Federation functionality has not been implemented yet. The release date for that functionality isn't clear, so for now we have to deal with the functionality presented above.

Configuring SSO for Oracle ERP Cloud

For SSO the following aspects should be taken into account:

  • Users and Entitlements
  • Initial upload of identities and users
  • Identity and user synchronization
  • Exchange of SP and IdP metadata

Users and Entitlements

Before going into this I must explain the difference between users and employees.

  • When talking about users we mean the user login account. As explained before these accounts are the domain of IAM.
  • Users have access rights based on Role Based Access Controls (RBAC). Also IAM is handling this.
  • Users have entitlements to use particular ERP functionality. This is handled in APM.
  • When talking about employees we mean the business employee with his or her associated business job. This is the domain of Oracle HCM Cloud (even when you don't have a full-use HCM license). An employee can access Oracle ERP Cloud when he or she has a user account in IAM and the proper entitlements in APM.
Initial user upload

To establish SSO between the customer’s on-premises environment and the Oracle ERP Cloud environment, the customer must specify which identity attribute (aka GUID) (user name or email address) will be unique across all users in the customer’s organization. The SAML token should pass this attribute so the SP can determine which user is asserted (remember the first picture in this blog post).

But before this can work, the SP must have all users loaded. This is an initial step in the Oracle ERP Cloud on-boarding process.
Currently (Oracle ERP Cloud R11) the following options are available:
  • If running Oracle HCM Public Cloud, you may need to use HR2HR Integration
  • If running Non-HCM Public Cloud, use Spreadsheet Upload [Document Note 1454722.1], or if you are running CRM Public Cloud, use the CRM upload utility for HCM employees. You could also enter the employees manually.

Do the following tasks to load the initial user data into Oracle ERP Cloud:

  1. Extract user data from your local LDAP directory service to a local file by using the tools provided by your LDAP directory service vendor.
  2. Convert the data in the file into a format that is delivered and supported by Oracle ERP Cloud.
  3. Load the user data into Oracle ERP Cloud by using one of the supported data loading methods.

Data loaders in Oracle ERP Cloud import data in the CSV format. Therefore, you must convert user data extracted from your local LDAP directory into the CSV format. Ensure that the mandatory attributes are non-empty and present.

From Oracle ERP Cloud R12 the initial load can also be performed by using the SCIM 2.0 REST API’s. For details about this see: https://docs.oracle.com/cd/E52734_01/oim/OMDEV/scim.htm#OMDEV5526

Identity and user Synchronization

The IdP should always have the truth about the users and business roles. So there should be something in place to push them to the Oracle ERP Cloud.

For R12 the SCIM REST API’s are the best way to do that. For R11 it’s a lot more complicated as explained below.

Now the concept of employee and job comes in again. As explained earlier in this blog post this is the domain of Oracle HCM Cloud (which is also part of Oracle ERP Cloud).

Oracle HCM Cloud has REST APIs for reading and pushing Employee and Job data:

  • GET /hcmCoreApi/resources/11.12.1.0/emps
  • POST /hcmCoreApi/resources/11.12.1.0/emps
  • PATCH /hcmCoreApi/resources/11.12.1.0/emps/{empsUniqID}
  • GET /hcmCoreSetupApi/resources/11.12.1.0/jobs
  • GET /hcmCoreSetupApi/resources/11.12.1.0/jobs/{jobsUniqID}

For more details about these APIs (which are also available in R11), see: https://docs.oracle.com/cloud/latest/globalcs_gs/FARWS/Global_HR_REST_APIs_and_Atom_Feeds_R12.html

But how can we provision IAM/APM? For that, Oracle HCM Cloud has standard provisioning jobs:

  • Send Pending LDAP Requests: Sends bulk requests and future-dated requests that are now active to OIM. The response to each request from OIM to Oracle Fusion HCM indicates transaction status (for example, Completed).
  • Retrieve Latest LDAP Changes: Requests updates from OIM that may not have arrived automatically because of a failure or error, for example.

For details see: http://docs.oracle.com/cloud/farel8/common/OCHUS/F1210304AN1EB1F.htm

Now the problem could arise that an administrator has changed user permissions in ERP Cloud (HCM or IAM/APM) that are not reflected in the IdP (which should always reflect the truth), so the two are out-of-sync.

To solve this, the IdP should first read all employee and job data from Oracle HCM Cloud and, based on that, create the delta against its own administration. This delta is pushed to Oracle HCM Cloud so all manual changes are removed. This synchronization job should be performed at least daily.

The whole solution for Identity and user synchronization for R11 could look like this:


Exchange metadata for SSO

In R11 of Oracle ERP Cloud the exchange of SAML metadata for SSO is a manual process. In R12 there is a screen to do this, so for R12 you can skip the rest of this blog.

For R11, generation of the SP metadata.xml (to set up the federation on the IdP) and upload of your IdP metadata.xml into the SP are performed by the Oracle Cloud Operations team. To start the integration process you should create a Service Request and provide the following information:

  • Which type of Federation Server is used on-premise.
  • Which SaaS application you want to integrate.
  • How many users will be enabled.
  • URLs for IdP production and IdP non-production.
  • Technical contacts.

The following should also be taken into account (on both sides):

  • The Assertion Consumer Service URL of the SP, where the user will be redirected from the IdP with SAML Assertion.
  • The Signing Certificate corresponding to the private key used by the SP to sign the SAML Messages.
  • The Encryption Certificate corresponding to the private key used by the SP to decrypt the SAML Assertion, if SAML encryption is to be used.
  • The Logout service endpoint.

The Oracle Cloud Operations team delivers a document describing how to configure the on-premises IdP (Microsoft Active Directory Federation Services (ADFS) 2.0 or Oracle Identity Federation 11g).

Be aware that the Oracle Cloud Operations team needs at least two weeks to do the configuration in Oracle SSO Cloud.

For detailed information about this see Oracle Support Document: Co-Existence and SSO: The SSO Enablement Process for Public Cloud Customers (Doc ID 1477245.1).

The post Single-Sign-On to Oracle ERP Cloud appeared first on AMIS Oracle and Java Blog.

Deception

Jonathan Lewis - Tue, 2017-03-21 09:41

One of the difficulties with trouble-shooting is that it’s very easy to overlook, or forget to go hunting for, the little details that turn a puzzle into a simple problem. Here’s an example showing how you can read a bit of an AWR report and think you’ve found an unpleasant anomaly. I’ve created a little model and taken a couple of AWR snapshots a few seconds apart so the numbers involved are going to be very small, but all I’m trying to demonstrate is a principle. So here’s a few lines of one of the more popular sections of an AWR report:

SQL ordered by Gets                       DB/Inst: OR32/or32  Snaps: 1754-1755
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets   as a percentage of Total Buffer Gets
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets:         351,545
-> Captured SQL account for   65.0% of Total

     Buffer                 Gets              Elapsed
      Gets   Executions   per Exec   %Total   Time (s)  %CPU   %IO    SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
      8,094          20        404.7    2.3        0.0 114.1   2.3 017r1rur8atzv
Module: SQL*Plus
UPDATE /*+ by_pk */ T1 SET N1 = 0 WHERE ID = :B1

We have a simple update statement which, according to the hint/comment (that’s not a real hint, by the way) and guessing from column names, is doing an update by primary key; but it’s taking 400 buffer gets per execution!

It’s possible, but unlikely, that there are about 60 indexes on the table that all contain the n1 column; perhaps there’s a massive read-consistency effect going on thanks to some concurrent long-running DML on the table; or maybe there are a couple of very hot hotspots in the table that are being constantly modified by multiple sessions; or maybe the table is a FIFO (first-in, first-out) queueing table and something funny is happening with a massively sparse index.

Let’s just check, first of all, that the access path is the “update by PK” that the hint/comment suggests (cut-n-paste):


SQL> select * from table(dbms_xplan.display_cursor('017r1rur8atzv',null));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
SQL_ID  017r1rur8atzv, child number 0
-------------------------------------
UPDATE /*+ by_pk */ T1 SET N1 = 0 WHERE ID = :B1

Plan hash value: 1764744892

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |       |       |       |     3 (100)|          |
|   1 |  UPDATE            | T1    |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| T1_PK |     1 |    14 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=:B1)

The plan is exactly as expected – so where do we look next to find out what’s going on? I’m a great believer in trying to make sure I have as much relevant information as possible; but there’s always the compromise when collecting information that balances the benefit of the new information against the difficulty of gathering it – sometimes the information that would be really helpful is just too difficult, or time-consuming, to collect.

Fortunately, in this case, there’s a very quick and easy way to enhance the information we’ve got so far. The rest of the AWR report – why not search for that SQL_ID in the rest of the report to see if that gives us a clue? Unfortunately the value doesn’t appear anywhere else in the report. On the other hand there’s the AWR SQL report (?/rdbms/admin/awrsqrpt.sql – or the equivalent drill-down on the OEM screen), and here’s a key part of what it tells us for this statement:


Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                                36            1.8     0.0
CPU Time (ms)                                    41            2.0     0.1
Executions                                       20            N/A     N/A
Buffer Gets                                   8,094          404.7     2.3
Disk Reads                                        1            0.1     0.0
Parse Calls                                      20            1.0     0.4
Rows                                          2,000          100.0     N/A
User I/O Wait Time (ms)                           1            N/A     N/A
Cluster Wait Time (ms)                            0            N/A     N/A
Application Wait Time (ms)                        0            N/A     N/A
Concurrency Wait Time (ms)                        0            N/A     N/A
Invalidations                                     0            N/A     N/A
Version Count                                     1            N/A     N/A
Sharable Mem(KB)                                 19            N/A     N/A
          -------------------------------------------------------------

Spot the anomaly?

We updated by primary key 20 times – and updated 2,000 rows!

Take another look at the SQL – it’s all in upper case (apart from the hint/comment) with a bind variable named B1 – that means it’s (probably) an example of SQL embedded in PL/SQL. Does that give us any clues? Possibly, but even if it doesn’t we might be able to search dba_source for the PL/SQL code where that statement appears. And this is what it looks like in the source:

        forall i in 1..m_tab.count
                update  /*+ by_pk */ t1
                set     n1 = 0
                where   id = m_tab(i).id
        ;
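
If you want to run that search yourself, a minimal sketch of the dba_source query might look like this (the literal to search for is just whatever distinctive text appears in the statement):

select  owner, name, type, line, text
from    dba_source
where   upper(text) like '%BY_PK%'
order by
        owner, name, line
;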

It’s PL/SQL array processing – we register one execution of the SQL statement while processing the whole array, so if we can show that there are 100 rows in the array the figures we get from the AWR report now make sense: 2,000 rows across 20 executions is 100 rows per execution, and 8,094 buffer gets for 2,000 rows is only about 4 buffer gets per row – just what you’d expect for an update through a primary key index. One of the commonest oversights I (used to) see in places like the Oracle newsgroup or listserver was people reporting the amount of work done but forgetting to consider the equally important “work done per row processed”. To me it’s also one of the irritating little defects with the AWR report – I’d like to see “rows processed” in various of the “SQL ordered by” sections of the report (not just the “SQL ordered by Executions” section), rather than having to fall back on the AWR SQL report.

Footnote:

If you want to recreate the model and tests, here’s the code:


rem
rem     Script:         forall_pk_confusion.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017
rem
rem     Last tested
rem             12.1.0.2
rem

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        2 * trunc(dbms_random.value(1e10,1e12))         n1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format problem
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1', 
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(id);

declare

        cursor c1 is
        select  id
        from    t1
        where   mod(id,10000) = 1
        ;

        type c1_array is table of c1%rowtype index by binary_integer;
        m_tab c1_array;

begin

        open c1;

        fetch c1
        bulk collect
        into m_tab
        ;

        dbms_output.put_line('Fetched: ' || m_tab.count);

        close c1;

        forall i in 1..m_tab.count
                update  /*+ by_pk */ t1
                set     n1 = 0
                where   id = m_tab(i).id
        ;

        dbms_output.put_line('Updated: ' || sql%rowcount);

end;
/

select
        v.plan_table_output
from
        v$sql   sql,
        table(dbms_xplan.display_cursor(sql.sql_id, sql.child_number)) v
where
        sql_text like 'UPDATE%by_pk%'
;

select
        executions, rows_processed, disk_reads, buffer_gets
from    v$sql  
where   sql_id = '017r1rur8atzv'
;


Real World OBIEE: Demystification of Variables Pt. 2

Rittman Mead Consulting - Tue, 2017-03-21 09:00

In part one of this blog series, I went over using bins and presentation variables to dynamically create groups and switch between them in a report and on a dashboard. In part two, I am going to talk about making reports dynamic for periods of time using repository, system and presentation variables. Before I dive into an example, there are a couple of things I would like to cover first.

SYSDATE

The sysdate function returns the current datetime set by the system where the database resides. Sysdate is a really useful function for creating repository variables for use with date dimensions. If I go into SQL Developer, I can write a query to return the current sysdate:

select sysdate from dual;

CURRENT_DATE

The current_date function returns the current datetime set by the system where the BI Server resides. This datetime may differ from sysdate depending on the geographical location of the database vs. the system that OBIEE resides on. I can write a query using SQL Developer to return the datetime using the current_date function:

select current_date from dual;

Since my database and OBIEE instance are on the same system, sysdate and current_date are the same.

TRUNCATE

When using sysdate or current_date to create repository variables for dates (which I am going to show in an upcoming example), you have to keep something in mind. While the date may match, the time may not. To show an example of this, I am going to join one of my date columns with sysdate.

select sysdate, dim_date_key from dual, 
gcbc_pef.dim_date
where sysdate = dim_date_key;

If I run this query, I don't get an error but I get no results.

Why? To answer this, I need to write a query to inspect my date column.

select dim_date_key from gcbc_pef.dim_date;

As you can see by the results of my query, the DIM_DATE_KEY column does have the same format as sysdate but all the times are set to 00:00:00 (or midnight). To further demonstrate the difference between my date column and sysdate, I am going to write a new query and use the TRUNC (or TRUNCATE) function.

select sysdate, dim_date_key from dual, 
gcbc_pef.dim_date
where trunc(sysdate) = dim_date_key;

As you can see, the query runs successfully but notice how sysdate and DIM_DATE_KEY still have different times. How is the join possible? Because I used the truncate function in the where clause in my query for sysdate. Without going into too much detail, using truncate on a date function without any formatting (which I will cover later) will set (or truncate) the datetime to the start (or midnight) of the current day. For example, if I run another query that just selects the truncated sysdate from dual, I get this result.

select trunc(sysdate) from dual;

Now, let's dive into an example.

Note: For all of the examples in this blog series I am using OBIEE 12.2.1.2.0

The Scenario

In this example, I have been asked to create a report that is going to reside on a products dashboard. It needs to have the same product grouping as the report I used in part one of this series, needs to contain Gross Rev $, Net Rev $ and # of Orders, and needs a prompt that can select between the first and current day of the month and every day in-between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.

There are two foreseeable challenges with this report. The first, and probably the most obvious, is how to make the date prompt for the current month and have it change dynamically with each month. The second is how to pass the dates into the report.

There is one more challenge that I will have to tackle. There is a gap in the data loads for # of Orders. Data does not update until the 2nd or 3rd of each new month. This wouldn't be a big deal except that the person who requested the report wants a summary of the previous month's # of Orders to be shown until the data is updated for the current month.

Fortunately, by using Repository, System and Presentation Variables, I can accomplish all of the requirements of this report.

The Example

For this example, I am going to start by creating Repository Variables to use with my date column in order to make the dates dynamic. There are other ways to make dates dynamic using functions within Answers but they are a little bit trickier to use and are less common. I am going to go over some of those functions in part three of this blog series.

Repository Variables are created using the Admin Tool. By launching the Admin Tool and opening my RPD in online mode (can also be created offline), I can go to Manage > Variables to start creating my first Repository Variable.

From the Variable Manager window, I can create a Repository Variable by selecting Action > New > Repository > Variable.

I am going to start by creating the Repository Variable for the current date. Since this variable will be dynamic, I need to make sure I select the option 'Dynamic' and I am going to give it the name USCurDate.

Now I need to create a new init block. I can do this by clicking New...

Once in the Repository Variable Initialization Block screen, I need to give the init block a name, set the schedule for when the variable or variables will be refreshed, then click Edit Data Source to define the connection pool the init block will use, as well as the initialization string (query) the init block will use to populate the Repository Variable.

In the data source window, I am going to set my connection pool to one I have created just for my init blocks and then type in the following into the initialization string window:

select TRUNC(sysdate) from dual;

If I click Test, the query will execute and will return a result.

Notice how the result is the same as the query I ran using SQL Developer earlier.

Now I need to create a Repository Variable for the first day of every month. I am going to use the same method as before and name it USMoBeginDate. The query I am going to use is slightly different from the previous query. I still need to use the TRUNC function but I also need to apply formatting so that it truncates to the start of the month. I am going to enter the following into the initialization string window:

select TRUNC(sysdate, 'MM') from dual;

Some other useful queries I can use are:

First Day of the Current Year

select TRUNC(sysdate, 'YY') from dual;

Last Day of the Previous Year

select TRUNC(sysdate, 'YY') -1 from dual;

Previous Year Date

select TRUNC(ADD_MONTHS(sysdate, -12)) from dual;

Now I need to create a Repository Variable for the previous month to use with my # of Orders measure column. Upon inspection, I discover that the column I need to use is called Calendar Year Month and is a VARCHAR or character type. If I go into Answers and pull in the Calendar Year Month column, I can see the format is 'YYYYMM'

To create the Repository Variable, I am going to use the same method as with the current date and first day of the current month Repository Variables and issue a new query. Because the Calendar Year Month column is a VARCHAR, I need to use the to_char function to change sysdate from a date type to a character type, use some formatting syntax and use some basic arithmetic. The query is as follows:

select to_char(to_number(to_char(sysdate, 'YYYY')) * 100 + to_number(to_char(sysdate, 'MM') -1)) from dual;

To break down each part of this query, let's start with the year. In order to use the 'YYYY' format I must first cast sysdate to a character (to_char(sysdate, 'YYYY')). Then I need to cast that result back to an int so that I can multiply by 100. This will give me the result 201500.00. The reason for this is that, when I add the month number to my year x 100, there will always be a leading 0 for month numbers 1-9. To get the previous month number, I have to first cast sysdate to a character using the formatting 'MM'. I then have to cast it back to an int and subtract 1 to get the previous month number (to_number(to_char(sysdate, 'MM') -1)), then cast the entire statement back to a character type so that it matches the type of the Calendar Year Month column. When I run the query, I get this result.
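
As a side note, an arguably simpler query that should return the same 'YYYYMM' string for the previous month is:

select to_char(add_months(trunc(sysdate, 'MM'), -1), 'YYYYMM') from dual;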

Now that I have my three repository variables (USCurDate, USMoBeginDate and Prev_Month) I can start to create the report.

I'm going to fast forward a little bit to the part of the report creation process where I will use the Repository Variables I created using the Admin Tool. Since I am using virtually the same report as in part one of this blog series, please refer back to it for how to create custom groups using bins and presentation variables and custom value prompts.

Because of the delay in the data load for the # of Orders at the beginning of the month, I can not use a global report filter. Instead, I am going to have to use something called a Filter Expression within each measure column formula.

About Filter Expressions

Unlike global report filters, column formula level filter expressions are used when you need to specify a particular constraint within the column formula itself. Because the filter is at the column formula level, it is independent of any subsequent column filters.

Note: When using a column formula filter for a measure, you can not add a global filter of the same data subject on top of it. For example, if using a column level filter for a particular Year and Month, I can not add a global filter for a particular year. The two filters contradict each other and the result will be null.

To add a filter in the column formula, go to Edit formula, make sure the column syntax is highlighted and click Filter.

From here the Insert Filter window will pop up and I can select the attribute column to filter the measure by. Here, I want to use the column Day Date to filter Gross Rev $ by the day.

I can add a column by double clicking it in the Subject Areas pane. When a column is added, I will be prompted with a New Filter window and from here, everything is exactly the same process as adding a global report filter.

Here I need to define the operator as is between since we are dealing with date ranges. I could call my Repository Variables for current_date and first day of the month here but, because the request is for a prompt to select between date ranges, I am going to have to call Presentation Variables and use the prompt to populate the actual values.

Note: If you are unsure about the functionality of Presentation Variables, see part one of this blog series

To add Presentation Variables to the filter expression, click Add More Options and select Presentation Variable from the dropdown.

When a Presentation Variable is added to the filter, two new text boxes appear. The Variable Expr box is where you define the variable to be used and the (default) box is used to add a default value. The default value is optional but, when defining a Presentation Variable within a filter, you have to specify a default value in order to get any results. The reason for this is because, when the report is run, the query issued will use the Presentation Variable placeholder that is defined unless a default value is specified. In other words, the default value will always be used unless the Presentation Variable is populated with a value or a list of values.

Because I want the users to be able to specify a date range, I need to define two Presentation Variables: one for the start date and one for the end date. I can add another place for a Presentation Variable by simply clicking Add More Options again and selecting Presentation Variable.

Now I need to add both my start and end date Presentation Variables in the Variable Expr boxes. I’m going to call my start date presentation variable pv_start_dt and my end date presentation variable pv_end_dt. I am also going to specify a default date range from the beginning of the current month (10/01/2015) to yesterday's date (10/15/2015).

If I click OK, I will be taken back to the Insert Filter screen where I can see the filter expression previously defined.

Clicking OK again will return me to Edit Column Formula which shows the column formula with the filter expression defined in the previous steps.

Now I have to do the exact same thing for the Net Rev $ column. Since the filter expression is identical, I can simply copy and paste the column formula for Gross Rev $ and replace the column name in the expression.

Now I need to take care of the # of Orders column. This column is tricky because of the gap between the 1st and the 2nd or 3rd of every month. I could use a filter expression that defaults to the previous month by using the previous month repository variable I created in a previous step, but this alone wouldn’t switch over when the data became available.

So how can we fulfill the requirement of the report if we don’t know the exact date in which the data will be available? This can be accomplished by using a CASE statement as shown previously in part one of this series. We can break the Case statement down into two parts or two conditions:

1. When the day for the current month is less than or equal to 2 OR if # of Orders is null, then filter # of Orders by Calendar Year Month using the value of the Prev_Month Repository Variable.

2. When condition one is not true, then filter # of Orders by Day Date between the values of the pv_start_dt and the pv_end_dt Presentation Variables

Putting both conditions together and using the correct syntax for Column Formula results in the following formula:

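The original post shows the resulting formula as a screenshot. A sketch of what it might look like in OBIEE logical SQL – with the fact and dimension folder names ("Base Facts", "Periods") assumed for illustration – is:

CASE
  WHEN DAY(CURRENT_DATE) <= 2 OR "Base Facts"."# of Orders" IS NULL
  THEN FILTER("Base Facts"."# of Orders" USING "Periods"."Calendar Year Month" = VALUEOF("Prev_Month"))
  ELSE FILTER("Base Facts"."# of Orders" USING "Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'})
END
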
Note that I am using CURRENT_DATE in my column formula. In this case, I am extracting the day number from the current date by using the extract day function (DAY(CURRENT_DATE)). I will cover this in further detail when I discuss using built-in functions in Answers to make reports dynamic in part 3 of this series.

Now I need to create my dashboard prompt. I am going to start by clicking on New > Dashboard Prompt.

I need to create two prompts: One for the start date and one for the end date. Because I am using presentation variables as placeholders for the date between values, I have to use a Variable Prompt instead of a Column Prompt. Variable Prompts allow us to define a presentation variable and then define a list of values for the users to select from.

To create a Variable Prompt for Start Date, I can click on the new prompt icon and select Variable Prompt.

There are a few things I need to do in order to make this prompt function for the report. First, I have to define the same presentation variable name (pv_start_dt) that I used in the filter expressions for the Gross Rev $, Net Rev $ and # of Orders columns.

Because this is not a column prompt, I have to manually specify the values I want the user to be able to select from. Rather than typing in each value, I can use the SQL Results option from the Choice List Values dropdown and use a SQL statement to select the exact values that I want.

This may seem daunting at first but there is a very straightforward way to accomplish this. Rather than manually writing out a SQL query, we can make use of the Advanced Tab within a new report.

I’m going to start by clicking New > Analysis and selecting the column that I want values for: Day Date.

I need to add a filter to Day Date so that it returns only the values I want the user to select from.

Now I need to set the operator to is between and add the two Repository Variables that I have set up: one for the first date of the current month and one for the current date.

If I go to results, I can see the data returned with the filter I have specified.

As you can see, the Day Date column only contains the values from the first of the month to the current date (October 16th, 2015 in this example)

Now for the good stuff. I can navigate to the Advanced Tab and copy the SQL statement used to generate these values and paste them into the SQL Results text box in my prompt.

You will notice that within the SQL statement generated by OBI, there are numbers and s_# aliases between the SELECT and the Day Date column and after the Day Date column, and there is also an ORDER BY clause that uses the number “2”. Without going into too much detail, this is what OBI uses to make the query more efficient when retrieving results from the database. In order to allow the values to populate the prompt, these have to be removed in OBIEE 12c, and the “ORDER BY” clause has to be rewritten in order to make it work.

This

SELECT
   0 s_0,
   "Sales - Fact Sales"."Periods"."Day Date" s_1
FROM "Sales - Fact Sales"
WHERE
("Periods"."Day Date" BETWEEN VALUEOF("USMoBeginDate") AND  VALUEOF("USCurDate"))
ORDER BY 2 ASC NULLS LAST
FETCH FIRST 65001 ROWS ONLY

Changed to this

SELECT
   "Sales - Fact Sales"."Periods"."Day Date"
FROM "Sales - Fact Sales"
WHERE
("Periods"."Day Date" BETWEEN  VALUEOF("USMoBeginDate") AND  VALUEOF("USCurDate"))
ORDER BY "Periods"."Day Date" ASC
FETCH FIRST 65001 ROWS ONLY

This can be a bit confusing if you are not very familiar with SQL but just remember:

When populating a prompt using an SQL statement in OBIEE 12c, take out any number and anything that begins with “s” between the SELECT and first column and anything that begins with “s” after any subsequent columns and make sure the “ORDER BY” clause contains the actual column name of the column you want to order by.

Note: If you do not require any values to be in order, you can omit the “ORDER BY” clause all together.

If I expand Options in the Edit Prompt window, I can add a default selection or a default value that the prompt will start with. I can use the USMoBeginDate here as well so that the prompt always starts with the first date of every month as the start date.

Note: You will notice that under Options in the Edit Prompt window there is a Variable Data Type option with a dropdown selector. This can be used if the data type needs to be specified to something other than the default which is ‘text’ or character type. If you are getting an error when running the report that says “Selected value does not match datatype. Expected [this value] but got [this value]” you need to change the Variable Data Type to the datatype of the column you are prompting on. In this example, we are prompting a date datatype so therefore it needs to be set to date.

If I click OK, I can check the values in the display window by clicking the dropdown for the Start Date prompt I just created.

The blue checkmark indicates the value that is selected which, because the first date of every month was set by using the USMoBeginDate Repository Variable as the default value, defaults to the first date of the current month (October 1st, 2015 in this example).

Now I need to create another Variable Prompt for the End Date. The SQL statement used for Start Date can be reused for the values, as we want the exact same values to be available for selection. I am going to name the presentation variable pv_end_dt, and set the default value to the USCurDate Repository Variable so that the End Date prompt always defaults to the current date.

Now all that’s left to do is put the prompt and report on the Dashboard. Here is the result.

So that concludes part 2 of Demystification of Variables. Please feel free to ask questions or leave me a comment! In part 3, I am going to talk about using built in front end functions and presentation variables to make reports dynamic for any series of time. Until next time.

Categories: BI & Warehousing

Oracle BPM: Hiding Faults from BPM? Don't use Service Activity!

Jan Kettenis - Tue, 2017-03-21 08:18
In the following I explain how you can hide faults from BPM by not using (synchronous) Service activities, but (asynchronous) Send/Receive activities instead.

When calling services from a BPM process, you should think about where you want faults to show up and be handled. This is specifically of interest when you have some integration layer between your BPM processes and the external services that you call, to abstract the external services from the BPM process. Let's call this layer the Service Layer. I have seen such a layer in various formats, ranging from a Reusable Subprocess, to a BPEL process in the same composite as the BPM process, to a BPEL process in a separate composite, or a Mediator instead of BPEL. You may have such a layer to hide technical details from the business process, to cover some sort of custom exception handling, or to hide the message format of these external services from the BPM process (or a combination of all that). The latter might be because you don't have the luxury to do message transformation in a service bus.

In case the BPM process calls the Service Layer through a (synchronous) Service activity and that call fails, the main BPM instance will get into an errored state, and you will have to handle the error in the BPM process. This behavior might be exactly what you wanted to prevent with the Service Layer, for example because the service call is in a parallel flow and you want to be sure that the fault does not impact processing of the other, parallel threads.

The following example shows what happens. It concerns a main BPM process that calls the synchronous ServicePS from the Service Layer, which in its turn calls some other ServiceA that (finally) calls a FailingService that always fails. The example is a bit overcomplicated because I configured a fault policy in the synchronous services. You may be aware that I wrote some other article explaining that this is not a good practice, but when creating this example I did not have that insight yet ;-) So bear with me and just ignore these synchronous services still being in a "Running" state after they failed.

The following shows the synchronous BPEL of the ServicePS.


Because the whole chain of calls is synchronous from beginning to end, you will see that all synchronous services have the "Faulted" state. Because of the fault policy in the BPM process (the only place where one makes sense in this case) it is still running, but the fault bubbled up to the BPM instance, which shows the error as well.



Now lets refactor this to a solution where the Service Layer will hide the fault from the BPM process. To do so, all calls from the BPM process to the Service Layer will have to be asynchronous.

The following shows the asynchronous BPEL of ServiceAsyncPS_NP. 

Learning from my earlier mistake with the fault policy, this asynchronous service now is the only one in the chain with a fault policy. Because the FailingService failed, the (synchronous) ServiceA_NP failed as well. But because ServiceAsyncPS_NP is asynchronous, that is where it stopped.


The error can be recovered from there, and in the meantime, the BPM process runs like there is no cloud in the sky.


Because of the asynchronous nature of the Service Layer, this is not a decision you should take lightly. For example, stateful BPEL cannot be migrated, so any error in it cannot be fixed for running instances. It therefore might not be the silver bullet you were looking for.

Oracle Cloud Platform Continues to Gain Momentum with Customers, Partners, and Developers

Oracle Press Releases - Tue, 2017-03-21 07:00
Press Release
Oracle Cloud Platform Continues to Gain Momentum with Customers, Partners, and Developers Developers at Global Enterprises, SMBs, and ISVs leverage Oracle’s PaaS and IaaS to develop and run modern Web, mobile, and cloud-native applications

Oracle Code, NEW YORK—Mar 21, 2017

Today at its premier developer event, Oracle announced that an increasing number of global enterprises, SMBs, and ISVs are choosing the Oracle Cloud Platform to speed innovation, simplify IT, reduce costs, and deliver stellar customer experiences. 7-Eleven, Altair, Astute Business Solutions, Bitnami, Calypso Technology, Ford Motor Company, GE Capital Business Process Management Services, HashiCorp, Infotech, SAS, Vertiv Corporation, and Zensar Technologies are just a few of the many organizations that are using Oracle Cloud’s PaaS and IaaS services to easily develop, test, and deploy high-performance applications in the cloud. Additionally, Oracle continues to expand its cloud portfolio, making it even more compelling for customers to move to the cloud.

The Oracle Cloud Platform provides customers, partners, and developers with everything they need to build, deploy, and extend applications and run business-critical workloads in a low-latency, highly available, secure cloud environment. For developers, the Oracle Cloud Platform provides the foundation they need to provide cutting-edge applications that leverage the latest technology innovations. Developers can use the platform to quickly create applications with microservices, APIs, containers, machine learning, mobile backends, and chatbots using modern DevOps processes. With pre-integrated big data and analytics, integration, management and monitoring, mobility and Internet of Things (IoT) capabilities, developers can easily connect these applications to other cloud or on-premises systems and devices to gain new levels of intelligence and customer success. In the past two years alone, Oracle has delivered more than 50 PaaS and IaaS services to market and introduced new deployment options such as Oracle Cloud at Customer, providing unparalleled opportunity for customers.

“Organizations across industries and geographies are increasingly taking advantage of the Oracle Cloud Platform to quickly develop and deploy business-critical applications,” said Amit Zavery, senior vice president, Oracle Cloud Platform. “By delivering the most advanced cloud capabilities, Oracle is helping customers and partners out-innovate the competition, transform their businesses, and increase profitability.”

To participate in upcoming Oracle Code events in a city near you, visit: http://www.developer.oracle.com/code

Customers and Partners Benefit from Oracle Cloud Platform

“Altair has a unique combination of engineering applications and enterprise computing offerings for high performance computing in the cloud,” said Sam Mahalingam, chief technical officer, Altair. “Our solutions performed extremely well on Oracle Cloud Platform in terms of accuracy, consistency, and cost efficiency. We will continue building solutions for the Oracle Cloud Platform that can leverage the latest infrastructure technology along with Altair’s leading cloud solutions for HPC.”

“We continue to be impressed by the breadth and depth of the Oracle Cloud Platform,” said Daniel Lopez, CEO, Bitnami. “Bitnami provides open source images to cover all the functionality a modern enterprise demands: from easy to use tools for developers to production-ready solutions, across Oracle Cloud’s IaaS services, virtualized and container-based platforms.”

“The shift to cloud has created a unique opportunity for us to bring technological innovation to the forefront of the financial industry,” said Pascal Xatart, CEO, Calypso Technology. “We are deploying these transformative solutions with Oracle, whose technology stack is an excellent combination of performance, security and agility that allows us to increase the value of our cloud services.”

“HashiCorp and Oracle share many common customers,” said David McJannet, CEO of HashiCorp. “Oracle Cloud’s growing support for HashiCorp Terraform will enable our customers to use a common approach to infrastructure provisioning for those workloads that they choose to run in Oracle Cloud. This will enable our joint customers to deliver better applications, faster on the infrastructure of their choice.”

“Our market is changing, and so must our industry-leading life science labeling solutions. So we decided to start delivering our ROBAR service via the cloud to make it easier for new customers to access our world class labeling solutions. For us, high availability, reliability and security are top priorities, especially in the highly regulated industry in which we operate,” said Ardi Batmanghelidj, president and CEO, Innovatum. “We considered several solutions, including Amazon, but chose Oracle Cloud Infrastructure as a Service due to Oracle’s reputation as a leading technology provider. Oracle’s focus on security helped to tip the scales, along with the cost-effectiveness of its offerings.”

“Enterprises building modern apps rely on data services, and they want these data services on any infrastructure or cloud to support hybrid cloud and multi-cloud configurations,” said Edward Hsu, vice president of product marketing at Mesosphere. “Mesosphere DC/OS enables IT organizations to become hybrid cloud service providers, and we’re excited to have DC/OS on the Oracle IaaS to give businesses the broadest set of choices on where to run their modern applications.”

“Qubole Data Service (QDS) is a turnkey, Big Data-as-a-service platform that accelerates data insight across companies leveraging open source technologies like Spark, Hadoop, Hive, etc. We believe Big Data belongs in the cloud, and we’re excited to support QDS on the Oracle Cloud Platform,” said Ashish Thusoo, co-founder and CEO of Qubole. “Most of the world’s enterprise data touches the Oracle ecosystem, and we believe running on the Oracle Cloud Platform provides an ideal solution for turning data into valuable business insights at a compelling price/performance value.”

“SUSE, as an enterprise-focused ‘open’ open source company, has been collaborating with Oracle on many joint technology innovations and for many years,” said Dr. Thomas Di Giacomo, CTO at SUSE. “Today we’re very pleased and excited to announce that the SUSE Linux Enterprise Server image is available on the Oracle Cloud Marketplace. This opens the door to future cloud activities in the Oracle Cloud ecosystem around IaaS, PaaS, and enterprise cloud native DevOps, areas in which SUSE is rapidly expanding our solutions.”

“As a public cloud implementer for many years, and as early adopters of Oracle Cloud IaaS and PaaS, in the last quarter we have seen significant adoption of Oracle Cloud across our large managed services customer base in the UK and Ireland. We have projects on-going on Oracle IaaS, PaaS and SaaS in multiple industries including public sector, utilities and the financial services,” said Ken MacMahon, Head of Oracle Cloud Services at Version 1. “A real sweet-spot for its adoption for us has been customers who wish to leverage public cloud for deployment of key application workloads that are built on Oracle technology. This leverages both our Oracle and public cloud expertise to give customers the optimum benefits and power of the Oracle IaaS and PaaS."

Technology Innovations and Industry Accolades

Continuing its rich history of delivering game-changing cloud services, Oracle recently announced new enhancements to the Oracle Cloud Platform, making it even more compelling for customers to move their business-critical applications to the cloud. Most recently, Oracle became the first to offer Oracle Database Cloud on bare metal compute, and new virtual machine (VM) compute, load balancing, and storage capabilities, all on the same low latency, high performance modern IaaS platform. Oracle Cloud Platform now delivers differentiated database performance at every scale, and deeply integrated IaaS capabilities, for customers of any size to easily develop, test, and deploy their business-critical applications in the cloud.

Additionally, Oracle was recently recognized as one of the 20 Coolest Cloud Infrastructure Vendors of 2017 in CRN Magazine’s Cloud 100 report.

Try Oracle Cloud for Free

Customers interested in trying Oracle Cloud can sign up for $300 in free credits at: https://cloud.oracle.com/tryit

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.415.235.4033

Kristin Reeves

  • +1.415.856.5145

Oracle Delivers Industry-Leading Retail Merchandising Solutions via Oracle Cloud

Oracle Press Releases - Tue, 2017-03-21 07:00
Press Release
Oracle Delivers Industry-Leading Retail Merchandising Solutions via Oracle Cloud Offering Deployment Choice to Retailers Worldwide via Oracle’s Integrated Cloud

Oracle Industry Connect, Orlando, Fla.—Mar 21, 2017

Oracle today announced that its industry leading Oracle Retail Merchandising Solutions are now available as a service via the Oracle Cloud.  The introduction of Oracle Retail Merchandising cloud services reflects Oracle’s ongoing investment in helping retailers drive strategic growth in a fast-paced global retail market by enhancing mobility, simplifying the user experience, inspiring customer engagement and creating a flexible, scalable business environment. 

“Oracle Retail Merchandising accelerates the critical day-to-day operations that impact service levels, inventory margins and key business metrics for retailers worldwide.  Combined with the support, performance and security of Oracle Cloud, we are providing retailers of all sizes with the most flexible and powerful merchandising solution available today,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail.

Getting to the Work that Matters, Faster

The Oracle Retail Merchandising cloud services are a suite of software-as-a-service solutions that provide retailers with the breakthrough capabilities introduced earlier this year in Oracle Retail Release 16, including role-based dashboards that surface relevant buying, inventory, and financial information to the user and then leverage retail science and data analytics to accelerate critical decision making. By using Oracle’s modern exception-based retailing methodology, and the more than 500 additional metrics introduced with Oracle Retail Insights Cloud Service Release 16, to identify situations that require attention, the solution vastly reduces the amount of time merchandising professionals spend on nonproductive tasks and frees more time to focus on strategic business goals.

The complete suite of Oracle Retail Merchandising services available in the Oracle Cloud includes Oracle Retail Merchandising Foundation Cloud Service (end to end merchandising operations including sales auditing), Oracle Retail Integration Cloud Service, Oracle Retail Allocation Cloud Service, and Oracle Retail Invoice Matching Cloud Service.

Delivering Game-Changing Capabilities

As an example, Oracle Retail 16 features such as style-level invoice matching and best-source allocations are powerful tools that enable merchants to better protect margins and improve service levels. To better serve mobile teams in a fast-paced retail environment, the Oracle Retail solutions allow managers to view and approve transactions on their preferred mobile device and create more granular segments to generate customer-centric store clusters. These capabilities reflect Oracle’s ongoing commitment to listen to its retail community and to continue to deliver the power of new solutions.

Oracle Provides Retailers Steady Cadence of Innovation

By providing its renowned merchandising, analytics and insights solutions via Oracle Cloud, Oracle helps ensure that retailers benefit immediately from the steady investment in new features and functionality across the retail solutions, while taking full advantage of Oracle’s world-class cloud services platform, performance improvements, science, analytics and security. Oracle now offers more than 34 Oracle Retail cloud services, including Oracle Retail Advanced Science, Oracle Retail Insights, and Oracle Retail Customer Engagement Cloud Services, among many others.

“For Oracle Retail, customers are at the heart of everything we do. It is through customers that we prove the value of our solutions and the Merchandising Cloud Services, Analytics Cloud Services and Insights Cloud Services are no exception,” said Carlin. “We are excited to be partnering with one of the largest fashion retailers in the world as they set a strategic path to leverage cloud technology to enable their global operations.”

Oracle Cloud Delivers Reliability and Scale

Oracle Cloud is the industry's broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new and existing cloud environments, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

Contact Info
Matt Torres
Oracle
+1.415.595.1584
matt.torres@oracle.com
Dan Brady
Burson-Marsteller
+1.516.650.7354
dan.brady@bm.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Dan Brady

  • +1.516.650.7354

Mövenpick Hotels & Resorts Becomes First Hotel Group to Globally Implement Oracle Hospitality OPERA Cloud Amidst Accelerated Growth Phase

Oracle Press Releases - Tue, 2017-03-21 07:00
Press Release
Mövenpick Hotels & Resorts Becomes First Hotel Group to Globally Implement Oracle Hospitality OPERA Cloud Amidst Accelerated Growth Phase
There are now over 30 properties on four continents using the cloud-based property management software to drive business intelligence, guest recognition and revenue management.

Oracle Industry Connect, Orlando, Fla.—Mar 21, 2017

Today Oracle announced the successful implementation of Oracle Hospitality OPERA Cloud property management software at over 30 Mövenpick hotels in 10 countries across Africa, Asia, the Middle East and Europe. Oracle Hospitality OPERA Cloud was selected to enable the brand to optimize synergies in distribution, marketing, guest recognition and operations. An initial implementation test was successfully completed in five properties in Jordan and Switzerland, and now nearly a third of the portfolio uses it. The hotel company plans to have all its properties using OPERA Cloud by 2018.

With continued growth over the last five years, Mövenpick needed a suite of technology that could further enhance its vision and plans for expansion. With 83 locations, multiple property management vendors and different configurations of software to manage, reducing the complexity of the Swiss hotel company’s IT investment was critical. Turning to a cloud-based property management solution has enabled its IT departments to focus on innovation instead of maintaining decentralized setups.

“As Mӧvenpick Hotels & Resorts is on track to operate 100 properties by 2020, we were particularly interested in the potential benefits of a cloud-based property management system. Considering the brand’s simultaneous growth in four different regions, mobility and scalability were priorities. We needed a cost-effective, low-upkeep system that is lightweight enough to provide the same responsiveness to island resorts in Asia, as it does to city hotels in Europe for example,” said Floor Bleeker, Chief Information Officer for Mövenpick Group.

“We also wanted a solution that could provide enhanced guest recognition. Our company’s vision is to ‘create Natural Enjoyment for our guests and partners around the world’. In OPERA Cloud, we found a system that ultimately benefits our guests—through recognition and improved operations,” he added.

With OPERA Cloud as a platform, Mövenpick Hotels & Resorts has been able to provide critical business intelligence insights to marketing and revenue management teams that drive decision making centrally and at property level. Analysis of guest needs and stay trends generates information that can help to enhance the guest experience, encourage repeat visits and improve direct bookings. With a singular view of the customer, Mövenpick can ensure that global guests are recognized across all properties.

Implementation benefits experienced by Mövenpick include:

  • A lightweight system enabling accessibility from anywhere on any device
  • Standardized hotel operations and processes across an international footprint
  • The ability to implement business enhancing decisions via a singular cloud platform

“Mövenpick Hotels & Resorts is the first hotel chain globally to embrace a complete transition to the cloud for its hotel operations platform,” said Mike Webster, senior vice president and general manager, Oracle Hospitality. “OPERA Cloud was designed to help hoteliers like Mövenpick focus on elevating their signature guest experience while removing the complexity and cost of traditional IT investments. We look forward to Mövenpick’s continued global growth with OPERA Cloud.”

Oracle Hospitality OPERA Cloud services are a cloud-based, mobile-enabled platform for next-generation hotel management. Based on Oracle’s OPERA, one of the leading enterprise solution suites for the hospitality industry, OPERA Cloud offers an intuitive user interface, comprehensive functionality for all areas of hotel management, secure data storage and hundreds of key partner interfaces to meet the needs of hotels of all types and sizes. By moving property management technology to the cloud, OPERA Cloud simplifies the IT infrastructure in properties, allowing hotel management and staff to focus on delivering exceptional experiences for their guests.

Contact Info
Matt Torres
Oracle
+1.415.595.1584
matt.torres@oracle.com
Dan Brady
Burson-Marsteller
+1.516.650.7354
dan.brady@bm.com
About Oracle Hospitality:

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility. For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality.

About Mövenpick Hotels & Resorts:

Mövenpick Hotels & Resorts, an international upscale hotel management company with over 16,000 staff members, is represented in 24 countries with 83 hotels, resorts and Nile cruisers currently in operation. Around 20 properties are planned or under construction, including those in Chiang Mai (Thailand), Al Khobar (Kingdom of Saudi Arabia) and Nairobi (Kenya).

Focusing on expanding within its core markets of Europe, Africa, the Middle East and Asia, Mövenpick Hotels & Resorts specialises in business and conference hotels, as well as holiday resorts, all reflecting a sense of place and respect for their local communities. Of Swiss heritage and with headquarters in central Switzerland (Baar), Mövenpick Hotels & Resorts is passionate about delivering premium service and culinary enjoyment – all with a personal touch. Committed to supporting sustainable environments, Mövenpick Hotels & Resorts has become the most Green Globe-certified hotel company in the world.

The hotel company is owned by Mövenpick Holding (66.7%) and the Kingdom Group (33.3%). For more information, please visit www.movenpick.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Dan Brady

  • +1.516.650.7354

CSPs See Cloud Technology Investments as Critical to Improving Customer Experience

Oracle Press Releases - Tue, 2017-03-21 07:00
Press Release
CSPs See Cloud Technology Investments as Critical to Improving Customer Experience
New global study by Oracle reveals the top challenges and opportunities for CSPs as they navigate changing customer expectations

Oracle Industry Connect, Orlando, Fla.—Mar 21, 2017

A new Oracle study announced today reveals that the top challenge for Communications Service Providers (CSPs) is improving customer experiences, followed closely by keeping up with technological advancements. The survey, "The Communications Cloud: CSPs Take on Tomorrow", polled communications service providers around the world to understand how initiatives such as Network Function Virtualization (NFV) and the Cloud might help them overcome these obstacles and capitalize on new market opportunities.

“CSPs are leveraging cloud technologies to transform their networks and create more compelling experiences for their customers, partners, and employees,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “The move to cloud will continue to play a key role in helping CSPs improve their service agility and expand their business with new innovations in IoT, digital, and cloud services.”

CSPs are under attack from increased competition and challenged by shifting customer demands, while aiming to monetize new opportunities and manage network growth. Whether market leaders or emerging challengers, respondents found managing customer expectations equally challenging.

As such, CSPs are turning to technology to help meet increasing customer expectations while managing reduced budgets. NFV and the Cloud were cited as two of the technologies helping CSPs improve their competitive position; however, CSPs are looking for more than just technology from vendors. They are increasingly looking for partners that can go beyond technology alone to ensure successful implementations and long-term outcomes.

The survey found that CSPs are:

  • Looking to the Cloud: Seventy-one percent of CSPs believe that a communications cloud could simplify operations, speed time to market, and reduce overall effort.
  • Making Progress with Network Function Virtualization: Sixty percent of CSPs noted they have made progress with NFV and many believe it will achieve many of their objectives around cost savings and time to market.
  • Expecting More from Their Vendors: Regardless of competitive market position, CSPs generally feel best prepared to handle technical challenges. While quality, cost, and risk reduction are critical, more than half of CSPs believe the most challenging communications cloud migration hurdles are nontechnical. The survey showed that it’s more important than ever for a cloud partner to provide access to information and expertise, rather than the best technology alone.

To learn more about the study and download the Infographic, please visit: Cloud Survey Infographic.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus PR for Oracle
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145

Oracle Financial Services and Numerix to Leverage Cross Asset Analytics in Oracle FRTB Compliance Solution

Oracle Press Releases - Tue, 2017-03-21 07:00
Press Release
Oracle Financial Services and Numerix to Leverage Cross Asset Analytics in Oracle FRTB Compliance Solution
Oracle to deploy Numerix analytics in new solution to help customers comply with the Bank for International Settlements fundamental review of the trading book capital requirements

Oracle Industry Connect, Orlando, Fla.—Mar 21, 2017

Oracle Financial Services Analytical Applications (OFSAA) has announced a collaboration with Numerix, provider of innovative capital markets technology, to develop and bring to market solutions that enable financial institutions to comply with the Fundamental Review of the Trading Book (FRTB). The offering implements complementary solutions from both Numerix and Oracle to effectively meet the regulatory and business challenges accompanying the new FRTB framework.

“By working with Numerix, we are able to offer the best of both worlds: a comprehensive solution that helps financial institutions meet the computational and business challenges in FRTB compliance,” explains Ambreesh Khanna, Group Vice President and General Manager for Oracle Financial Services. “To develop the highest quality solutions for our customers, Oracle always seeks to work with the best and brightest.”

With the objective of improving the design and coherence of capital standards, the FRTB framework issued by the Bank for International Settlements has introduced fundamental changes in the capital calculation process. Banks are now required to make a significant investment in models and technologies to comply with changes by the end of 2019.

Oracle’s capabilities in financial risk, data management and regulatory reporting, underpinned with deep expertise and analytics from Numerix, will enable financial institutions to meet these new FRTB framework challenges effectively. The collaboration will employ Numerix solutions for mark-to-market, Greeks, market risk and counterparty credit risk. Key features of the offering include:

  • Comprehensive cover of all FRTB requirements, including profit and loss attribution, back-testing, risk factor identification, stress period identification and risk computations
  • Pre-configured methods for computing expected shortfall, CVA capital charge, default risk charge, stress expected shortfall and standardized approach
  • Industry-standard library of models and methods, spanning all asset classes (fixed income, equities, FX, credit, commodities, energy, inflation and hybrids)
  • Full integration with OFSAA’s data foundation and reporting capabilities

“At Numerix we’re proud of the diverse and dynamic ways that the Numerix model library and pricing architecture are integrated into other systems, such as Oracle’s. Combined expertise across the financial services industry ensures end users have the most comprehensive suite of capabilities at their fingertips,” said Steve O’Hanlon, CEO of Numerix. “As one of the industry’s most sophisticated model and pricing libraries, Numerix technology can easily be used in a wide range of solutions, including FRTB compliance. We’re proud to support Oracle with our world-class analytics as they expand their capital markets applications and risk reporting solutions for FRTB.”

Contact Info
Alex Moriconi
Oracle
+1.650.722.0678
alex.moriconi@oracle.com
Emily Jean-Pierre
Numerix
+1.646.898.1294
ejeanpierre@numerix.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Numerix

Numerix is the leading provider of innovative capital markets technology solutions and real-time intelligence capabilities for trading and risk management. Committed to out-of-the-box thinking and the exploration and adoption of the latest technologies, Numerix is dedicated to driving a more open, fintech-oriented, digital financial services market. Built upon a 20+ year analytical foundation of deep practical knowledge, experience and IT understanding, Numerix is uniquely positioned in the financial services ecosystem to help its users reimagine operations, modernize business processes and capture profitability. For more information please visit www.numerix.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Alex Moriconi

  • +1.650.722.0678

Emily Jean-Pierre

  • +1.646.898.1294

dotnet publish - ASP.NET Core app deployed to Pivotal Cloud Foundry

Pas Apicella - Tue, 2017-03-21 06:16
I previously showed how to push an ASP.NET Core application to Pivotal Cloud Foundry using just the source code files themselves. It turns out this creates a rather large droplet and hence slows down the deployment. So here we are going to take the same demo and use "dotnet publish" to make this a lot faster. The previous post, linked below, is the base for this blog entry.

ASP.NET Core app deployed to Pivotal Cloud Foundry
http://theblasfrompas.blogspot.com.au/2017/03/aspnet-core-app-deployed-to-pivotal.html

First we need to make some changes to our project

1. Open "dotnet-core-mvc.csproj" and add "RuntimeIdentifiers" inside the "PropertyGroup" tag
  
<PropertyGroup>
  <TargetFramework>netcoreapp1.0</TargetFramework>
  <RuntimeIdentifiers>osx.10.10-x64;osx.10.11-x64;ubuntu.14.04-x64;ubuntu.15.04-x64;debian.8-x64</RuntimeIdentifiers>
</PropertyGroup>



2. Perform a "dotnet restore" as shown below, either from a terminal window/prompt or from Visual Studio Code itself; this step is vital and is required

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
....

3. Now let's publish this as Release and ensure we target the correct runtime. For Cloud Foundry (CF) that will be "ubuntu.14.04-x64", and the framework version is 1.0, as we created the application using 1.0; we could have used 1.1 here if we wanted to.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet publish --output ./publish --configuration Release --runtime ubuntu.14.04-x64  --framework netcoreapp1.0
Microsoft (R) Build Engine version 15.1.548.43366
Copyright (C) Microsoft Corporation. All rights reserved.

  dotnet-core-mvc -> /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/bin/Release/netcoreapp1.0/ubuntu.14.04-x64/dotnet-core-mvc.dll

4. Finally, cd into the "publish" folder and verify it contains the required DLLs, as well as the project files and JSON files: everything needed to run your application.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ ls -lartF
total 116848
-rwxr--r--    1 pasapicella  staff    25992 Jun 11  2016 Microsoft.Win32.Primitives.dll*

..

-rwxr--r--    1 pasapicella  staff      168 Mar 16 22:33 appsettings.Development.json*
drwxr-xr-x    7 pasapicella  staff      238 Mar 21 08:01 wwwroot/
-rwxr--r--    1 pasapicella  staff     1332 Mar 21 08:01 dotnet-core-mvc.pdb*
-rwxr--r--    1 pasapicella  staff     8704 Mar 21 08:01 dotnet-core-mvc.dll*
drwxr-xr-x    6 pasapicella  staff      204 Mar 21 08:01 Views/
drwxr-xr-x   16 pasapicella  staff      544 Mar 21 08:01 ../
-rwxr--r--    1 pasapicella  staff      362 Mar 21 08:01 web.config*
drwxr-xr-x   79 pasapicella  staff     2686 Mar 21 08:01 refs/
-rwxr--r--    1 pasapicella  staff       92 Mar 21 08:01 dotnet-core-mvc.runtimeconfig.json*
-rwxr--r--    1 pasapicella  staff   297972 Mar 21 08:01 dotnet-core-mvc.deps.json*
drwxr-xr-x  212 pasapicella  staff     7208 Mar 21 08:01 ./

5. Now, this time, let's "cf push" using the files in the "publish" folder, specifying the .NET Core buildpack and a memory limit:

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Using route pas-dotnetcore-mvc-demo.cfapps.io
Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
OK

Uploading pas-dotnetcore-mvc-demo...
Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/publish
Uploading 14.8M, 280 files
Done uploading
OK

Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Downloading app package...
Downloaded app package (23.7M)
-----> Buildpack version 1.0.13
ASP.NET Core buildpack version: 1.0.13
ASP.NET Core buildpack starting compile
-----> Restoring files from buildpack cache
       OK
-----> Restoring NuGet packages cache
-----> Extracting libunwind
       libunwind version: 1.2
       OK
       https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
       OK
-----> Saving to buildpack cache
       Copied 38 files from /tmp/app/libunwind to /tmp/cache
       OK
-----> Cleaning staging area
       OK
ASP.NET Core buildpack is done creating the droplet
Exit status 0
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (995K)
Uploaded droplet (23.8M)
Uploading complete
Destroying container
Successfully destroyed container

1 of 1 instances running

App started


OK

App pas-dotnetcore-mvc-demo was started using this command `cd . && ./dotnet-core-mvc --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-dotnetcore-mvc-demo.cfapps.io
last uploaded: Mon Mar 20 21:05:08 UTC 2017
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

     state     since                    cpu    memory          disk          details
#0   running   2017-03-21 08:06:05 AM   0.0%   39.2M of 512M   66.9M of 1G
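
As a side note, the buildpack and memory settings used above can also be captured in a manifest.yml placed in the "publish" folder, so the push is repeatable without retyping the flags. This is a minimal sketch reusing the values from the push above; it is not part of the original demo:

applications:
- name: pas-dotnetcore-mvc-demo
  memory: 512M
  buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

With this file in place, a plain "cf push" run from the "publish" folder picks up the same settings.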

Categories: Fusion Middleware

MobaXterm 10.2

Tim Hall - Tue, 2017-03-21 05:36

MobaXterm 10.2 has just been released.

The downloads and changelog are in the usual places.

The previous version (10.0) was pulled as it was getting false positives with some AV software. I’m glad to report this one doesn’t get flagged and installs fine!

Happy upgrading!

Cheers

Tim…


SQL Server 2016: Does Dynamic Data Masking work with INSERT INTO and SELECT INTO commands?

Yann Neuhaus - Tue, 2017-03-21 02:55

I wondered how Dynamic Data Masking (DDM) works with these two commands, INSERT INTO and SELECT INTO.

First, I create a table and add some “sensitive data”:

USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential](
[ID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Name] [nvarchar](70)NULL,
[CreditCard] [nvarchar](16)NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60)NULL)  


insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'Stephane',N'3546748598467584',113459,N'sts@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'David',N'3546746598450989',143576,'dab@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Nathan',N'3890098321457893',118900,'nac@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Olivier',N'3564890234785612',98000,'olt@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Alain',N'9897436900989342',85900,'ala@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Fabrice',N'908323468902134',102345,'fad@dbi-services.com')

select * from [dbo].[Confidential]

[Screenshot DDM_Into01]

After, I create all masking rules and add a user:

Use DDM_TEST
ALTER Table Confidential
ALTER COLUMN NAME ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN SALARY ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN creditcard ADD MASKED WITH (FUNCTION='partial(1,"XXXX",2)')
ALTER Table Confidential
ALTER COLUMN email ADD MASKED WITH (FUNCTION='email()')

CREATE USER TestDemo WITHOUT LOGIN
GRANT SELECT ON Confidential TO TestDemo

-- Execute a select statement as TestDemo 
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
REVERT

[Screenshot DDM_Into02]
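
As a quick sanity check, the masking rules can be listed from the catalog. This is an illustrative sketch using the sys.masked_columns catalog view (available since SQL Server 2016); it was not part of the original demo:

-- List the masked columns and their masking functions for our table
SELECT t.name AS table_name,
       c.name AS column_name,
       c.masking_function
FROM sys.masked_columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE t.name = 'Confidential';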

INSERT INTO

This command is used to copy data from one table into another.
What happens when I copy data from a table with masked columns to a table without any masks?
First, I create a second table [dbo].[Confidential2] and grant the SELECT and INSERT permissions to the user “TestDemo”:

USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential2](
[ID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Name] [nvarchar](70)NULL,
[CreditCard] [nvarchar](16)NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60)NULL)  

GRANT SELECT ON Confidential2 TO TestDemo
GRANT INSERT ON Confidential2 TO TestDemo

I execute the query to insert data from [dbo].[Confidential] to [dbo].[Confidential2] with the INSERT INTO command:

USE [DDM_TEST]
GO
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential]
INSERT INTO [dbo].[Confidential2]([Name],[CreditCard],[Salary],[Email])
	SELECT [Name],[CreditCard],[Salary],[Email] FROM [dbo].[Confidential]
SELECT * FROM [dbo].[Confidential]
REVERT

[Screenshot DDM_Into03]
As you can see, the data are also masked in the second table [dbo].[Confidential2].
But are they really?
I execute the query again with the actual query plan enabled.
[Screenshot DDM_Into04]
As you can see, there is no masking step in the query plan for the SELECT on [dbo].[Confidential2].
If I select data from [dbo].[Confidential2] with my admin account, the rows contain the masked values and not the real data…
[Screenshot DDM_Into05]
The goal is reached: you cannot read sensitive data by copying it from one table to another.
Keep in mind that the copied data are not dynamically masked for the user; the masked values themselves are what get copied, which guarantees anonymization and good security for your sensitive data.
On the other hand, if you use the same WHERE clause against [dbo].[Confidential2], you don’t get the same result, because the stored values no longer match the original data… :-(
[Screenshot DDM_Into05a]
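
To make this concrete, here is an illustrative sketch (not part of the original demo) running the same predicate against both tables; DDM evaluates the WHERE clause on the real data and only masks the output, while the copy stores the masked literal itself (e.g. 'xxxx'):

EXECUTE AS USER='TestDemo'
-- Returns the row (masked): the predicate is evaluated on the real data
SELECT * FROM [dbo].[Confidential] WHERE [Name] = N'Stephane';
-- Returns nothing: the copy stores the masked literal, not 'Stephane'
SELECT * FROM [dbo].[Confidential2] WHERE [Name] = N'Stephane';
REVERT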

SELECT INTO

With this command, I also test copying to a temporary table.
These two cases will be interesting…
I recreate the same table [dbo].[Confidential] with the same masking rules, and give the user the CREATE TABLE and ALTER ANY SCHEMA permissions needed to do the SELECT INTO:

EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
SELECT * INTO [dbo].[Confidential2] FROM [dbo].[Confidential] ;
REVERT

[Screenshot DDM_Into06]
In the query plan, you can see that the masking step sits between the select and the insert.
We are in the same case as before: the copied data are the masked values.
To see it, I read data from the table [dbo].[Confidential2] with my sysadmin login:
[Screenshot DDM_Into07]
And the result is that all copied data are masked. The data remain anonymous.

Finally, let’s test it with a temporary table and let’s see what happens:

USE [DDM_TEST]
GO
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
SELECT * INTO #Confidential2 FROM [dbo].[Confidential] ;
REVERT
EXECUTE AS USER='TestDemo'
SELECT * FROM #Confidential2 
REVERT

[Screenshot DDM_Into08]

The same query plan is applied, and the masked data are copied and remain anonymous.

Finally, these two commands, INSERT INTO and SELECT INTO, keep your data anonymous in the case of a table copy.

Sorry but cheat mode is disabled … :evil:

 

This article, SQL Server 2016: Does Dynamic Data Masking work with INSERT INTO and SELECT INTO commands?, first appeared on Blog dbi services.

Change value of CONSTANT declaration

Tom Kyte - Mon, 2017-03-20 20:06
Hi, in my code I defined a constant through a custom function that fetches some data from the DB and creates an instance of a custom type. If the return value of the function changes over time, I'm wondering which event triggers a refresh on that ...
Categories: DBA Blogs

Validate all required fields before committing form

Tom Kyte - Mon, 2017-03-20 20:06
Dears, I have a master-detail form in which I want to validate all required fields before committing. When I press Save (Key-Commit), I have a procedure that loops over the records in the detail block. If a required field is null in the detail blo...
Categories: DBA Blogs

Table and Index Partitioning

Tom Kyte - Mon, 2017-03-20 20:06
Hi all at ASK, we have a 180 GB table with a 340 GB index and we want to partition the index and table. Is it possible? There is a year column and I suppose that column is OK for partitioning the table and index. Which strategy is best? Thanks in advanc...
Categories: DBA Blogs

REST API from PLSQL

Tom Kyte - Mon, 2017-03-20 20:06
I have one more requirement where I need to change the password of one particular user that belongs to an application which is hosted outside of our network. The external application team provided information about the REST API that needs to be used to search for the user and ...
Categories: DBA Blogs

Difference between Correlated and Non-Correlated Subqueries

Tom Kyte - Mon, 2017-03-20 20:06
Hi, on many websites and Q&A communities like Quora, I have read about the difference between non-correlated and correlated subqueries; the basic difference given is that a correlated subquery executes the outer query first, then the subquery. Example <code>select * from departmen...
Categories: DBA Blogs

adrci purging

Michael Dinh - Mon, 2017-03-20 17:31

I did not know this.

Is there a way to control Auto_Purge Frequency done by the MMON ? (Doc ID 1446242.1)

The automatic purge cycle is designed as follows.
(1) The first actual purge action will be 2 days after instance startup time
(2) The next automatic purge actions following this first purge is done once every 7 days

If you like to purge more often, then it will need to be done manually.
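
For example, a one-off manual purge can target a single ADR home and content type. This is an illustrative sketch, not from the note above; keep in mind that the -age argument of purge is expressed in minutes, so 43200 minutes is 30 days:

# Purge TRACE content older than 30 days in one ADR home
adrci exec="set home diag/rdbms/hawka/HAWKA; purge -age 43200 -type TRACE"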

The blog below was very helpful for creating adrci scripts.
https://grepora.com/2016/08/03/adrci-retention-policy-and-ad-hoc-purge-script-for-all-bases/

Here is what I have created.

$ ./adrci_show_control.sh

SHOW CONTROL diag/crs/arrow1/crs:

ADR Home = /u01/app/oracle/diag/crs/arrow1/crs:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1344875867           720                  8760                 2016-11-24 19:05:55.164304 -08:00                                                 2017-02-28 19:56:23.753525 -08:00        1                    2                    82                   1                    2016-11-24 19:05:55.164304 -08:00
1 rows fetched

SHOW CONTROL diag/rdbms/hawka/HAWKA:

ADR Home = /u01/app/oracle/diag/rdbms/hawka/HAWKA:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1630649358           1                    1                    2017-03-04 10:01:39.568251 -08:00        2017-03-18 07:00:21.124556 -07:00        2017-02-28 19:55:26.148874 -08:00        1                    2                    80                   1                    2016-11-27 18:22:12.601136 -08:00
1 rows fetched

SHOW CONTROL diag/rdbms/test/test:

ADR Home = /u01/app/oracle/diag/rdbms/test/test:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
2768052777           720                  8760                 2017-03-04 18:10:18.197875 -08:00                                                                                          1                    2                    80                   1                    2017-03-04 18:10:18.197875 -08:00
1 rows fetched

$ ./adrci_set_control.sh

SET CONTROL diag/crs/arrow1/crs:
SET CONTROL diag/rdbms/hawka/HAWKA:
SET CONTROL diag/rdbms/test/test:

$ ./adrci_purge.sh

PURGE diag/crs/arrow1/crs:

ADR Home = /u01/app/oracle/diag/crs/arrow1/crs:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1344875867           2160                 2880                 2017-03-20 15:02:48.861513 -07:00                                                 2017-03-20 15:03:01.019503 -07:00        1                    2                    82                   1                    2016-11-24 19:05:55.164304 -08:00
1 rows fetched

PURGE diag/rdbms/hawka/HAWKA:

ADR Home = /u01/app/oracle/diag/rdbms/hawka/HAWKA:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1630649358           2160                 2880                 2017-03-20 15:02:48.879455 -07:00        2017-03-18 07:00:21.124556 -07:00        2017-03-20 15:03:01.348572 -07:00        1                    2                    80                   1                    2016-11-27 18:22:12.601136 -08:00
1 rows fetched

PURGE diag/rdbms/test/test:

ADR Home = /u01/app/oracle/diag/rdbms/test/test:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
2768052777           2160                 2880                 2017-03-20 15:02:48.894455 -07:00                                                 2017-03-20 15:03:01.442372 -07:00        1                    2                    80                   1                    2017-03-04 18:10:18.197875 -08:00
1 rows fetched


$ cat adrci_show_control.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "SHOW CONTROL ${f}:";
adrci exec="set home $f; show control;" ;
done

$ cat adrci_set_control.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "SET CONTROL ${f}:";
adrci exec="set home $f; set control \(SHORTP_POLICY=2160, LONGP_POLICY=2880\);" ;
done

$ cat adrci_purge.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "PURGE ${f}:";
adrci exec="set home $f; show control; purge" ;
done
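
If you would rather run the purge on a schedule than by hand, a cron entry could call the purge script weekly. This is an illustrative sketch and the script path is hypothetical:

# Run the ad-hoc ADR purge every Sunday at 03:00 (script path is hypothetical)
0 3 * * 0 /home/oracle/scripts/adrci_purge.sh >> /home/oracle/scripts/adrci_purge.log 2>&1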


ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases

Michael Dinh - Mon, 2017-03-20 17:31


As you know, since 11g we have a Automatic Diagnostic Repository (ADR). To better manage it, we also have a Command-line Interface, called ADRCI.
ADR contains all diagnostic information for database (logs, traces, incidents, problems, etc).

[Figure: ADR Structure]



Oracle HCM Cloud Extensibility - The Easiest Win

Floyd Teter - Mon, 2017-03-20 16:09
I've been doing quite a bit of work lately with Oracle HCM Cloud user experience extensibility...presenting, helping partners and customers, etc.  Seems like a hot subject of late, with lots of folks wanting to know more.  So let's get into it a bit.

Working in the Oracle HCM Cloud Center of Excellence, I see quite a few opportunities for wins that come up repeatedly.  You know what kind of win I mean: something that's easy to do and scores big points with your customer/boss/fellow users.

The one I see with almost every HCM Cloud implementation is actually pretty simple to deliver:  an organization wants to extend the user interface appearance and structure.  You'll hear requirements like the following:

  • Appearance:  We want the UI to reflect our brand and identity (which typically means show our logo and use our color scheme).
  • Structure:  We want the home page (aka springboard) to show actions and information in a structure relevant to the way we work.  The structure out of the box doesn't fit us.
  • Text:  We have our own terminology and we want that terminology in the UI.

So you'll hear about one or more of these types of requirements.  And they're important to that organization - sometimes they're deal breakers.  And the solutions are easy to deliver.  Most can be delivered and ready for review in 15 to 30 minutes.  Let's take each of these use cases individually and walk through how it works.

Appearance

As an administrator, I can define the logo, background image, icon style, and color scheme here.  Note that I can pull both the logo and the background image from a URL, which may eliminate the need to recreate the image altogether.  Even better, with the exception of the logo and image URLs, you can utilize drop down lists for your entire appearance design.


And yes, as a matter of fact, you can see the colors before you make your choices.


Easy peasy.  Responsive to the device you're using for access...including some nifty enhancements for your phone in R12, like this:



Structure

Editing the UI information and action presentation structure in HCM Cloud is pretty simple.  You're presented with a list of information and action choices.  Do you want it visible for all roles or a particular role?  Do you want it visible on the Welcome Springboard (aka the home page)?  In what order do you want the visible items to appear?


By the way, you can also click on the Names to drill down and make edits to lower-level pages.  You can also create new pages from here.  So you are the master of your structure.

Text

In all honesty, Text is so easy that there is no need for a dedicated administration page.  That Structure administration page just above?  Click on the Name and make your text edits.  Done.  Or drill down to the appropriate page and make your text edits.  Done.  Now you've included terminology specific to an organizational culture.   That's one change management issue you can cross off the list.  No fuss, no muss.  Done.

So, with a little bit of effort, you can move the UI from something like this:



... to something with a little more corporate and seasonal context like this:



A Few More Thoughts

First, because I know you're going to ask, the changes we've discussed here survive upgrades for the most part.  I've seen a few glitches regarding text changes, but they're easily fixed without much effort.

Second, I know this all appears to be pretty easy stuff.  But you'd be amazed how often I find myself helping customers and partners in tailoring their Appearance and/or Structure and/or Text.  So it seemed like a good idea to share some of this here.  So now you know.

Third, note that all the screen shots of changes I've made are deployed to a sandbox.  Best practice, folks...deploy to a sandbox, let the customer/end users review (and rest assured they'll change it a bit), and deploy to production after you obtain approval.

UI extensibility is the easiest win...small effort leading to big value for your users.  And this is about as easy as it gets.

As always, your comments are appreciated.  Let me know what you're thinking.

