Feed aggregator

SQL Server 2016: Does Dynamic Data Masking work with INSERT INTO and SELECT INTO commands?

Yann Neuhaus - Tue, 2017-03-21 02:55

I wondered how Dynamic Data Masking (DDM) behaves with the INSERT INTO and SELECT INTO commands.

First, I create a table and add some “sensitive data”:

USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential](
[ID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Name] [nvarchar](70)NULL,
[CreditCard] [nvarchar](16)NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60)NULL)  


insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'Stephane',N'3546748598467584',113459,N'sts@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'David',N'3546746598450989',143576,'dab@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Nathan',N'3890098321457893',118900,'nac@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Olivier',N'3564890234785612',98000,'olt@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Alain',N'9897436900989342',85900,'ala@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Fabrice',N'908323468902134',102345,'fad@dbi-services.com')

select * from [dbo].[Confidential]

DDM_Into01

Next, I create the masking rules and add a user:

Use DDM_TEST
ALTER Table Confidential
ALTER COLUMN NAME ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN SALARY ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN creditcard ADD MASKED WITH (FUNCTION='partial(1,"XXXX",2)')
ALTER Table Confidential
ALTER COLUMN email ADD MASKED WITH (FUNCTION='email()')

CREATE USER TestDemo WITHOUT LOGIN
GRANT SELECT ON Confidential TO TestDemo

-- Execute a select statement as TestDemo 
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
REVERT

DDM_Into02

INSERT INTO

This command is used to copy data from one table into another.
What happens when I copy data from a table with masked columns into a table without masking?
First, I create a second table [dbo].[Confidential2] and grant SELECT and INSERT permissions to the user “TestDemo”:

USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential2](
[ID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Name] [nvarchar](70)NULL,
[CreditCard] [nvarchar](16)NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60)NULL)  

GRANT SELECT ON Confidential2 TO TestDemo
GRANT INSERT ON Confidential2 TO TestDemo

I execute the query to insert data from [dbo].[Confidential] to [dbo].[Confidential2] with the INSERT INTO command:

USE [DDM_TEST]
GO
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential]
INSERT INTO [dbo].[Confidential2]([Name],[CreditCard],[Salary],[Email])
	SELECT [Name],[CreditCard],[Salary],[Email] FROM [dbo].[Confidential]
SELECT * FROM [dbo].[Confidential2]
REVERT

DDM_Into03
As you can see, the data also appear masked in the second table [dbo].[Confidential2].
But are they really?
I execute the query again with the actual execution plan enabled.
DDM_Into04
As you can see, the masking step is missing from the query plan for the SELECT on [dbo].[Confidential2].
If I select data from [dbo].[Confidential2] with my admin account, the rows contain the masked values and not the real data…
DDM_Into05
The goal is reached: you cannot read the sensitive data by copying it from one table to another.
Keep in mind that the copied data are not dynamically masked for the user; the masked values themselves are what got inserted.
The copy therefore guarantees anonymization and good protection of your sensitive data.
But on the other hand, if you run the same WHERE clause against [dbo].[Confidential2], you don’t get the same result… :-(
DDM_Into05a
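To illustrate why (a hypothetical query; the post does not show the exact statement): against the source table the predicate is evaluated on the real values before masking, whereas the copy only holds the masked values, so the same filter matches different rows.

EXECUTE AS USER='TestDemo'
-- Against the masked source table the predicate still sees the real salaries,
-- so the rows with Salary > 100000 are returned (with masked output values).
SELECT [Name],[Salary] FROM [dbo].[Confidential] WHERE [Salary] > 100000
-- The copy only stores the masked values (0 for the default() mask on an int column),
-- so the same predicate returns no rows.
SELECT [Name],[Salary] FROM [dbo].[Confidential2] WHERE [Salary] > 100000
REVERT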

SELECT INTO

With this command, I also test copying into a temporary table.
These two cases will be interesting…
I recreate the same table [dbo].[Confidential] with the same masking rules, and give the user the CREATE TABLE and ALTER ANY SCHEMA permissions needed to run the SELECT INTO.
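The grant statements are not shown here; a minimal sketch of what they could look like (an assumption based on the permissions mentioned above):

USE [DDM_TEST]
GO
-- Hypothetical grants: CREATE TABLE plus ALTER ANY SCHEMA so that
-- SELECT INTO can create the target table for TestDemo.
GRANT CREATE TABLE TO TestDemo
GRANT ALTER ANY SCHEMA TO TestDemo

With these permissions in place, I run the SELECT INTO as TestDemo: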

EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
SELECT * INTO [dbo].[Confidential2] FROM [dbo].[Confidential] ;
REVERT

DDM_Into06
In the query plan, you can see that the masking is between the select and the insert.
We are in the same case as previously: copied data are “masked data”.
To see it, I read data from the table [dbo].[Confidential2] with my sysadmin login:
DDM_Into07
And the result is that all copied data are masked. The data remain anonymous.

Finally, let’s test it with a temporary table and let’s see what happens:

USE [DDM_TEST]
GO
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
SELECT * INTO #Confidential2 FROM [dbo].[Confidential] ;
REVERT
EXECUTE AS USER='TestDemo'
SELECT * FROM #Confidential2 
REVERT

DDM_Into08

The same query plan is applied: the masked values are copied and the data remain anonymous.

In conclusion, both INSERT INTO and SELECT INTO keep your data anonymous when you copy a table.

Sorry but cheat mode is disabled … :evil:

 

The article SQL Server 2016: Does Dynamic Data Masking work with INSERT INTO and SELECT INTO commands? appeared first on Blog dbi services.

Change value of CONSTANT declaration

Tom Kyte - Mon, 2017-03-20 20:06
Hi, in my code I defined a constant through a custom function that fetches some data from the DB and creates an instance of a custom type. if the return value of the function changes over time, I'm wondering which event trigger a refresh on that ...
Categories: DBA Blogs

Validate all required fields before committing form

Tom Kyte - Mon, 2017-03-20 20:06
Dears, I have a master-detail form in which I want to validate all required fields before committing. When I press Save (Key-Commit), I have a procedure that loops on records in the detail block. If a required field is null in the detail blo...
Categories: DBA Blogs

Table e Index Partitioning

Tom Kyte - Mon, 2017-03-20 20:06
Hi all at ASK, we have a 180 gb table with 340 gb index and we want partioning index and table. Is possible ? There is a year column and i suppose that column is ok for partioning table and index. Which strategy is best ? Thanks in advanc...
Categories: DBA Blogs

REST API from PLSQL

Tom Kyte - Mon, 2017-03-20 20:06
I have one more requirement where I need to change one particular user's password belongs to an application which is hosted outside of our network. External application team provided information about REST API that need to used to search user and ...
Categories: DBA Blogs

Difference between Correlated and Non-Correlated Subqueries

Tom Kyte - Mon, 2017-03-20 20:06
Hi, In Many Website and Question answer communities like Quora etc i read about difference between Non and Co-related Sub queries, the basic difference is Co-relate execute outer query first then sub query Example <code>select * from departmen...
Categories: DBA Blogs

adrci purging

Michael Dinh - Mon, 2017-03-20 17:31

I did not know this.

Is there a way to control Auto_Purge Frequency done by the MMON ? (Doc ID 1446242.1)

The automatic purge cycle is designed as follows.
(1) The first actual purge action will be 2 days after instance startup time
(2) The next automatic purge actions following this first purge are done once every 7 days

If you would like to purge more often, it will need to be done manually.
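For example, an ad-hoc manual purge of trace files older than 7 days (10080 minutes) for one home could look like this (a sketch; adjust the home, age and type to your needs):

adrci> set home diag/rdbms/hawka/HAWKA
adrci> purge -age 10080 -type trace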

The blog below was very helpful for creating adrci scripts.
https://grepora.com/2016/08/03/adrci-retention-policy-and-ad-hoc-purge-script-for-all-bases/

Here is what I have created.

$ ./adrci_show_control.sh

SHOW CONTROL diag/crs/arrow1/crs:

ADR Home = /u01/app/oracle/diag/crs/arrow1/crs:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1344875867           720                  8760                 2016-11-24 19:05:55.164304 -08:00                                                 2017-02-28 19:56:23.753525 -08:00        1                    2                    82                   1                    2016-11-24 19:05:55.164304 -08:00
1 rows fetched

SHOW CONTROL diag/rdbms/hawka/HAWKA:

ADR Home = /u01/app/oracle/diag/rdbms/hawka/HAWKA:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1630649358           1                    1                    2017-03-04 10:01:39.568251 -08:00        2017-03-18 07:00:21.124556 -07:00        2017-02-28 19:55:26.148874 -08:00        1                    2                    80                   1                    2016-11-27 18:22:12.601136 -08:00
1 rows fetched

SHOW CONTROL diag/rdbms/test/test:

ADR Home = /u01/app/oracle/diag/rdbms/test/test:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
2768052777           720                  8760                 2017-03-04 18:10:18.197875 -08:00                                                                                          1                    2                    80                   1                    2017-03-04 18:10:18.197875 -08:00
1 rows fetched

$ ./adrci_set_control.sh

SET CONTROL diag/crs/arrow1/crs:
SET CONTROL diag/rdbms/hawka/HAWKA:
SET CONTROL diag/rdbms/test/test:

$ ./adrci_purge.sh

PURGE diag/crs/arrow1/crs:

ADR Home = /u01/app/oracle/diag/crs/arrow1/crs:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1344875867           2160                 2880                 2017-03-20 15:02:48.861513 -07:00                                                 2017-03-20 15:03:01.019503 -07:00        1                    2                    82                   1                    2016-11-24 19:05:55.164304 -08:00
1 rows fetched

PURGE diag/rdbms/hawka/HAWKA:

ADR Home = /u01/app/oracle/diag/rdbms/hawka/HAWKA:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1630649358           2160                 2880                 2017-03-20 15:02:48.879455 -07:00        2017-03-18 07:00:21.124556 -07:00        2017-03-20 15:03:01.348572 -07:00        1                    2                    80                   1                    2016-11-27 18:22:12.601136 -08:00
1 rows fetched

PURGE diag/rdbms/test/test:

ADR Home = /u01/app/oracle/diag/rdbms/test/test:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
2768052777           2160                 2880                 2017-03-20 15:02:48.894455 -07:00                                                 2017-03-20 15:03:01.442372 -07:00        1                    2                    80                   1                    2017-03-04 18:10:18.197875 -08:00
1 rows fetched


$ cat adrci_show_control.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "SHOW CONTROL ${f}:";
adrci exec="set home $f; show control;" ;
done

$ cat adrci_set_control.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "set control ${f}:";
adrci exec="set home $f; set control \(SHORTP_POLICY=2160, LONGP_POLICY=2880\);" ;
done

$ cat adrci_purge.sh
for f in $( adrci exec="show homes" | grep -v "ADR Homes:" );
do
echo "purge ${f}:";
adrci exec="set home $f; show control; purge" ;
done


ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases

Michael Dinh - Mon, 2017-03-20 17:31


As you know, since 11g we have an Automatic Diagnostic Repository (ADR). To better manage it, we also have a Command-line Interface, called ADRCI.
ADR contains all diagnostic information for database (logs, traces, incidents, problems, etc).

adr1

ADR Structure



Oracle HCM Cloud Extensibility - The Easiest Win

Floyd Teter - Mon, 2017-03-20 16:09
I've been doing quite a bit of work lately with Oracle HCM Cloud user experience extensibility...presenting, helping partners and customers, etc.  Seems like a hot subject of late, with lots of folks wanting to know more.  So let's get into it a bit.

Working in the Oracle HCM Cloud Center of Excellence, I see quite a few opportunities for wins that come up repeatedly.  You know what kind of win I mean: something that's easy to do and scores big points with your customer/boss/fellow users.

The one I see with almost every HCM Cloud implementation is actually pretty simple to deliver:  an organization wants to extend the user interface appearance and structure.  You'll hear requirements like the following:

  • Appearance:  We want the UI to reflect our brand and identity (which typically means show our logo and use our color scheme).
  • Structure:  We want the home page (aka springboard) to show actions and information in a structure relevant to the way we work.  The structure out of the box doesn't fit us.
  • Text:  We have our own terminology and we want that terminology in the UI.

So you'll hear about one or more of these types of requirements.  And they're important to that organization - sometimes they're deal breakers.  And the solutions are easy to deliver.  Most can be delivered and ready for review in 15 to 30 minutes.  Let's take each of these use cases individually and walk through how it works.

Appearance

As an administrator, I can define the logo, background image, icon style, and color scheme here.  Note that I can pull both the logo and the background image from a URL, which may eliminate the need to recreate the image altogether.  Even better, with the exception of the logo and image URLs, you can utilize drop down lists for your entire appearance design.


And yes, as a matter of fact, you can see the colors before you make your choices.


Easy peasy.  Responsive to the device you're using for access...including some nifty enhancements for your phone in R12, like this:



Structure

Editing the UI information and action presentation structure in HCM Cloud is pretty simple.  You're presented with a list of information and action choices.  Do you want it visible for all roles or a particular role?  Do you want it visible on the Welcome Springboard (aka the home page)?  In what order do you want the visible items to appear?


By the way, you can also click on the Names to drill down and make edits to lower-level pages.  You can also create new pages from here.  So you are the master of your structure.

Text

In all honesty, Text is so easy that there is no need for a dedicated administration page.  That Structure administration page just above?  Click on the Name and make your text edits.  Done.  Or drill down to the appropriate page and make your text edits.  Done.  Now you've included terminology specific to an organizational culture.   That's one change management issue you can cross off the list.  No fuss, no muss.  Done.

So, with a little bit of effort, you can move the UI from something like this:



... to something with a little more corporate and seasonal context like this:



A Few More Thoughts

First, because I know you're going to ask, the changes we've discussed here survive upgrades for the most part.  I've seen a few glitches regarding text changes, but they're easily fixed without much effort.

Second, I know this all appears to be pretty easy stuff.  But you'd be amazed how often I find myself helping customers and partners in tailoring their Appearance and/or Structure and/or Text.  So it seemed like a good idea to share some of this here.  So now you know.

Third, note that all the screen shots of changes I've made are deployed to a sandbox.  Best practice, folks...deploy to a sandbox, let the customer/end users review (and rest assured they'll change it a bit), and deploy to production after you obtain approval.

UI extensibility is the easiest win...small effort leading to big value for your users.  And this is about as easy as it gets.

As always, your comments are appreciated.  Let me know what you're thinking.


Find out the true benefit of Dashboards or Business Intelligence Software

Nilesh Jethwa - Mon, 2017-03-20 12:45

In fact, the question should be "What should be the true benefit of Dashboards or Business Intelligence software? "
Whether you are using dashboards in Excel, Microsoft Access, QuickBooks, Business Objects, or Salesforce, or a big ERP like Oracle EBS or SAP, there are numerous tangible benefits of using InfoCaptor as your primary BI and dashboard app compared to other business intelligence software or reporting tools.

Now, well-designed dashboards are more interesting than most "old-fashioned" tabular reports [a lot of the old-style reporting software vendors have begun calling themselves dashboard BI tools], but we are not interested in just fancy looks. So let us look at several notable benefits, along with certain features or aspects within InfoCaptor that help you achieve them.

Read more at http://www.infocaptor.com/dashboard/excel-dashboard-benefit

A Performance Analytics Application Case Study: Challenges and Successes

Rittman Mead Consulting - Mon, 2017-03-20 10:00

The Performance Analytics application is a collection of open source technologies that aids users in: identifying performance bottlenecks, identifying causes for slow report execution, discovering areas for performance optimization, and gathering meaningful insights into the health of an OBIEE environment.

This post focuses on lessons learned after a successful Performance Analytics application installation, where within one day of being operational it enabled us to identify and isolate a long-standing memory issue. Here's how.

OBIEE Performance Analytics Dashboards

Overview

Rittman Mead recently undertook an engagement with the remit to:

  • Carry out a health check on the current state of an OBIEE platform, architecture, and development process
  • Install the Rittman Mead Performance Analytics application, enabling rapid and accurate diagnostics of OBIEE issues

The client was on OBIEE 11g, having previously upgraded from 10g. The OBIEE Production environment was a three-node cluster running the latest version of the 11g release. It served around 150 users daily, out of a registered user base of around 1000.

The client had a long-standing issue with memory alerts on the master node of OBIEE cluster, but no evident architectural, hardware capacity, or software configuration issues were found after completing the health check.

Challenges and successes

Gather all relevant data

Performance Analytics gathers data from a variety of sources in order to provide a full stack view of the OBIEE environment.

  • Active Session History (ASH) - The Active Session History data is read from the v$ACTIVE_SESSION_HISTORY system database view. Access to this data allows Performance Analytics users to understand the performance and state of the database at a given point in time, as it provides information such as the SQL operation being performed, which application is executing the query, whether the query is actively being performed or is waiting for service, what state of execution the query is in, and many other useful statistics (a sample query is sketched after this list).

  • Operating System Metrics - Unix-based Operating Systems contain several commands used to gather information about the performance or status of the server such as vmstat, iostat, netstat, top and df. Performance Analytics utilizes the output of these commands to display the current status of the OS at a given point in time.

  • Usage Tracking - The Oracle BI Server supports the collection of usage tracking data. When usage tracking is enabled, the Oracle BI Server collects usage tracking data for each query, and it writes statistics to a usage tracking log file or inserts them directly into a database table. Access to this data allows Performance Analytics users to have an understanding of the performance of the BI Server and specific reports in the OBIEE environment at any given point in time.

  • OBIEE metrics - OBIEE has the capability to expose internal performance data through the Dynamic Monitoring Service (DMS). The data exposed contains information such as Connection Pool statistics, JVM statistics, the number of active threads, and much more. Access to this data allows Performance Analytics to record the current status of many of the OBIEE components and intricacies found within the tool.
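As a simple illustration of the first source, a query along these lines (a sketch, not the application's actual collector query) shows what database sessions were doing over the last few minutes:

SELECT sample_time,
       session_id,
       session_state,   -- ON CPU or WAITING
       sql_id,
       event,
       module
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
ORDER  BY sample_time;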

Performance Analytics was deployed using Docker in a couple of days, requiring absolutely no software installation on any machine other than the monitoring server. All configuration settings are held in one file, and it was sufficient to add connection details of each server to it in order to gather all aforementioned data.

Accurately diagnose issues

By combining operating system metrics (CPU, memory, etc.) with internal OBIEE metrics and those from the database, Performance Analytics gives a "single pane of glass" view on the behaviour of the whole stack. This enables correlations in behaviour to be easily identified, and issues drilled into using the analysis capabilities of the tool.

Within a day of being installed, Performance Analytics enabled the client to accurately diagnose the long-standing issue with memory alerts on the OBIEE master node. The cause was traced to a single user exporting a large dataset to Excel.

Workload Planning

Performance Analytics allows you to capture system status and workload over time, so you can see how the system is responding to peak loads in real time.

With Performance Analytics the client is now able to estimate the maximum workload the current architecture can support before issues appear, and whether it is going to cope with next year's workload.

Conclusion

Performance Analytics just paid for itself.

Performance Analytics collects all relevant data and makes it accessible from one central location, allowing users to investigate performance inquiries promptly and simply. Instead of forcing users to dig through database records or a Linux server manually, they can access all of the same data through a set of dashboards engineered to facilitate discovery from the collected data.

If you’d like to find out more about the Performance Analytics service offered by Rittman Mead, please get in touch.

Categories: BI & Warehousing

Log Buffer #507: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2017-03-20 07:58

This Log Buffer edition covers Oracle, SQL Server and MySQL blog posts with a touch of Cloud.

Oracle:

Provisioning EBS on Oracle Cloud: Latest Enhancements

ORA-54002 when trying to create Virtual Column using REGEXP_REPLACE on Oracle 12cR2

Business rules, common sense and query performance

Problem with V$RECOVERY_AREA_USAGE view and FRA space not being reclaimed

Updated SQLcl: 2 New Commands, Bug Fixes, and 12cR2 Oh My!

SQL Server:

Windowing Functions: Tell me when that changes

SQL Server Bulk Insert Row Terminator Issues

Introducing DLM Techniques for a Multi-Database Multi-Server System

Ding – The World’s Largest Mobile Top-up Network Streamlines QA with SQL Server Containers

Enable Trace Flags in SQL Server

MySQL:

Migrating MySQL database from Amazon RDS to DigitalOcean

Monitoring Databases: A Product Comparison

New collations in MySQL 8.0.0

How to store date, time, and timestamps in UTC time zone with JDBC and Hibernate

MySQL Connector/NET 7.0.7 m6 development has been released

Categories: DBA Blogs

Oracle and the American Institute of CPAs Unveil the New Operating Model for Agile Finance

Oracle Press Releases - Mon, 2017-03-20 07:00
Press Release
Oracle and the American Institute of CPAs Unveil the New Operating Model for Agile Finance New report helps CFOs manage digital transformation initiatives by providing detailed insights into best practices of agile finance leaders

Redwood Shores, Calif.—Mar 20, 2017

To help management accountants and finance professionals successfully navigate the business and technology changes that are transforming the finance function, Oracle and the American Institute of CPAs (AICPA) today released a new report that details the blueprint for modern finance. The report, Agile Finance Revealed: The New Operating Model for Modern Finance, identifies the traits of agile finance leaders and benchmarks their success in creating a dynamic new operating model that is resilient, responsive, and predictive, helping CFOs and their finance teams to shape the future of the business.

“With so much data at their disposal, CFOs must embrace a new operating model for modern finance,” said John Windle, FCMA, CGMA, chief financial officer at the Association of International Certified Professional Accountants. “This means they must effectively use cloud and digital technologies as well as ensure that their finance staff develops a broader skill set that includes strong analytical and business partnering skills.”

Based on an extensive survey of senior finance executives and in-depth interviews with CFOs who have already embraced new operating models, the report found that the finance function must be transformed in order to remain relevant in a business environment that is increasingly defined by rapid, unpredictable, and constant change. According to the report findings, only 38 percent of respondents are currently agile finance leaders. To help CFOs and their teams meet the demands of this rapidly changing business environment, the report outlines the key attributes necessary for the successful implementation of a new operating model for finance:

  • Cross-functional, integrated teams: Centralized shared services or centers of excellence that are empowered by cloud and digital technologies like robotic process automation and machine learning to drive efficiency in accounting services and improve overall business performance.
  • Expertise in digital technologies: Use of new technologies such as big data analytics and artificial intelligence (AI) expands the Financial Planning & Analysis (FP&A) role to be more predictive and enables it to generate the insights organizations need to develop innovative strategies that will drive higher performance.
  • New non-traditional finance skillsets: Expertise in statistics, data analysis, data visualization, and business partnering to support rapid decision-making and new performance management models.

“At Oracle, we believe that digital transformation brings companies into the future, and it enables them to be more competitive,” said Ivgen Guner, senior vice president, Global Business Finance, Oracle. “We also believe that the cloud gives CFOs an opportunity to restructure their current operating models to support new business models and more agile ways of working. The scale of these changes can be daunting and that’s why we worked closely with the American Institute of CPAs to develop a clear and actionable blueprint for modern finance that enables CFOs and their finance teams to lead digital transformation—not just react to it.”

The report found that while some major organizations have already made the transformation to the new operating model for modern finance, more than 80 percent of organizations are still in the process of implementing the required initiatives. The report also revealed that more than a third (36 percent) of finance leaders believe the current skill set of the finance function is too narrow and nearly a half (42 percent) said they need more data analytics skills if they are to deliver more forward-looking analysis for the business. To identify what sets agile finance leaders apart, the report examined how these leaders differ from those who are less advanced in their transformation journey in terms of structure, systems, and skills.

  • Implementing New Organizational Structures: Agile finance leaders significantly outstrip others in fully implementing the structural and systems elements of the new operating model. They are more likely to have migrated their end-to-end processes to shared service centers (41 percent vs. 13 percent) and to have a fully implemented cloud ERP (45 percent vs. 17 percent).
  • Technology Centralization: These leaders are also much more likely to have set up centers of excellence to handle areas that are most important for modern finance like FP&A (81 percent vs. 56 percent) and financial skills development (54 percent vs. 32 percent).
  • New Skills for the Finance Function: Agile finance leaders are also more likely to rate the skills of their finance function as ‘excellent’, particularly when it comes to newer, less traditional finance skills like data visualization (60 percent vs. 24 percent), big data expertise (60 percent vs. 21 percent), and influencing skills (55 percent vs. 23 percent).

The report includes insights from 483 senior finance executives in large businesses or other organizations across the U.S. and Canada, including five key industries: financial services, manufacturing, retail, healthcare and finance. It also includes insights from CFOs from a range of organizations including GE Digital, Arby’s, HSBC Global Services Companies, Vanderbilt University, Rutgers University, and Wake Forest Baptist Medical Center.

Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About the American Institute of CPAs (AICPA)

The American Institute of CPAs (AICPA) is the world’s largest member association representing the accounting profession, with more than 412,000 members in 144 countries, and a history of serving the public interest since 1887. AICPA members represent many areas of practice, including business and industry, public practice, government, education and consulting.

The AICPA sets ethical standards for the profession and U.S. auditing standards for private companies, nonprofit organizations, federal, state and local governments. It develops and grades the Uniform CPA Examination, and offers specialty credentials for CPAs who concentrate on personal financial planning; forensic accounting; business valuation; and information management and technology assurance. Through a joint venture with the Chartered Institute of Management Accountants (CIMA), it has established the Chartered Global Management Accountant (CGMA) designation, which sets a new standard for global recognition of management accounting.

The AICPA maintains offices in New York, Washington, DC, Durham, NC, and Ewing, NJ. Media representatives are invited to visit the AICPA Press Center at aicpa.org/press.

About the Association of International Certified Professional Accountants

The Association of International Certified Professional Accountants (the Association) is the most influential body of professional accountants, combining the strengths of the American Institute of CPAs (AICPA) and The Chartered Institute of Management Accountants (CIMA) to power opportunity, trust and prosperity for people, businesses and economies worldwide. It represents 650,000 members and students in public and management accounting and advocates for the public interest and business sustainability on current and emerging issues. With broad reach, rigor and resources, the Association advances the reputation, employability and quality of CPAs, CGMAs and accounting and finance professionals globally.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

Oracle Unveils Latest Innovations in Oracle Prime Projects Cloud Service

Oracle Press Releases - Mon, 2017-03-20 07:00
Press Release
Oracle Unveils Latest Innovations in Oracle Prime Projects Cloud Service New Lean Construction and Capital Improvement Program capabilities enhance Oracle Construction and Engineering’s comprehensive project success platform

Oracle Industry Connect, Orlando, Fla.—Mar 20, 2017

Oracle today announced the latest innovations to Oracle Prime Projects Cloud Service, Oracle Construction and Engineering’s powerful project success platform.

The new Oracle Prime Projects capabilities, Oracle Lean Scheduling Solution and Oracle Capital Improvement Program Solution, add innovative project execution and capital planning features to Oracle Construction and Engineering’s suite of cloud solutions. Oracle Prime Projects enables users to manage all aspects of the project lifecycle from ideation to execution to delivery.

By providing real-time visibility into project cost, schedules, risk, and performance information, Oracle Prime Projects empowers stakeholders to make better decisions and deliver better outcomes across all stages of the project lifecycle. The cloud suite is mobile ready, offers built-in analytics, and features robust workflows and social tools to enable rich, real-time collaboration on mission-critical activities.

Oracle Lean Scheduling Solution

Oracle Lean Scheduling is a unique, purpose-built cloud solution that integrates both critical path method (CPM) and Lean scheduling methods into a single, integrated engineering and construction production system. The solution simplifies the scheduling processes required to bridge the gap between CPM master schedulers and last planners, connecting the field office with the enterprise. The solution also breaks down similar silos in product manufacturing, connecting processes and data in an integrated system to optimize engineer-to-order and other complex manufacturing methods. Oracle Lean Scheduling is built to complement the power of Oracle’s industry-leading Primavera P6 Enterprise Project Portfolio Management platform, offering unmatched coordination, commitment, and community for engineering and construction projects, programs, and enterprises.

Oracle Capital Improvement Program Solution (Oracle CIP)

With a growing backlog of infrastructure demands and shrinking budgets, many public entities struggle to decide where to allocate scarce resources for the greatest impact. Oracle Capital Improvement Program leverages cloud architecture to help eliminate data silos and facilitate collaborative tasks across departments, improving enterprise decision making for customers such as cities, counties, airports, and school districts. Robust workflows and reporting capabilities enable organizations to automate, integrate and simplify steps in their multi-year portfolio planning and budgeting process. These process improvements help public entities better identify, prioritize, and plan their capital portfolios in a transparent, credible, and coordinated manner.

“Oracle Prime Projects provides organizations of any size with an easy-to-use, integrated cloud platform to manage the myriad complex processes associated with planning, building, and operating critical assets,” said Mike Sicilia, senior vice president and general manager of Oracle Construction and Engineering. “These latest innovations address key customer challenges and enhance the power of the Oracle Prime Projects platform to deliver superior outcomes in the cloud.”

For more information about Oracle Construction and Engineering, please visit http://bit.ly/2lTU3Zr.

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Kris Reeves
Blanc & Otus PR for Oracle
+1.415.856.5145
kreeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Kris Reeves

  • +1.415.856.5145

Oracle SOA Suite: Find that composite instance!

Amis Blog - Mon, 2017-03-20 06:14

When executing BPM or BPEL processes, they are usually executed in the context of a specific entity. Sometimes you want to find instances involved with a specific entity. There are different ways to make this easy. You can for example use composite instance titles or sensors and set them to a unique identifier for your entity. If they have not been used, you can check the audit trail. However, manually checking the audit trail, especially if there are many instances, can be cumbersome. Also if different teams use different standards or standards have evolved over time, there might not be a single way to look for your entity identifier in composite instances. You want to automate this.

It is of course possible to write Java or WLST code and use the API to gather all relevant information. It would however require fetching large amounts of data from the SOAINFRA database to analyse. Fetching all that data into WLST or Java and combining it, would not be fast. I’ve created a database package / query which performs this feat directly on the 11g SOAINFRA database (and most likely with little alteration on 12c).

How does it work

The checks which are performed in order (the first result found is returned):

  • Check the composite instance title
  • Check the sensor values
  • Check the composite audit trail
  • Check the composite audit details
  • Check the BPM audit trail
  • Check the Mediator audit trail
  • Do the above checks for every composite sharing the same ECID.

It first looks for instance titles conforming to a specific syntax (with a regular expression); next it looks for the values of sensors with a specific name. After that it starts to look in the audit trail and, if even that fails, it looks in the audit details, where messages are stored when they become larger than a set value (look for Audit Trail threshold). Next the BPM- and Mediator-specific audit tables are looked at and, as a last resort, it uses the ECID to find other composite instances in the same flow which might provide the required information, performing the same checks as mentioned above on those composite instances. Using this method I could find a corresponding entity identifier for almost any composite instance in my environment. The package/query has been tested on 11g but not on 12c. You should of course check to see if it fits your personal requirements. The code is mostly easy to read, save for the audit parsing details. For parsing the audit trail and details tables, I’ve used the following blog. The data is saved in a file which can be imported into Excel, and the script can be scheduled on Linux with a provided sh script.
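To give a flavour of the first check, a query along these lines (a simplified sketch, assuming the 11g SOAINFRA COMPOSITE_INSTANCE table and the example identifier pattern used further below) returns composite instances whose title contains the entity identifier:

SELECT ci.id,
       ci.composite_dn,
       ci.title,
       ci.ecid,
       ci.created_time
FROM   composite_instance ci
WHERE  REGEXP_LIKE(ci.title, 'AA\d\d\.\d+')  -- entity identifier pattern
ORDER BY ci.created_time DESC;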

Getting the script to work for your case

You can download the script here. Several minor changes are required to make the script suitable for a specific use case.

  • In the example script getcomposites_run.sql the identification regular expression AA\d\d\.\d+ is used. You should of course replace this with a regular expression reflecting the format of your entity identification.
  • In the example script getcomposites_run.sql sensors which have AAIDENTIFICATION in the name will be looked at. This should be changed to reflect the names used by your sensors.
  • The getcomposites.sh contains a connect string: connect soainfra_username/soainfra_password. You should change this to your credentials.
  • The getcomposites.sh script can be scheduled. In the example script, it is scheduled to run at 12:30:00. If you do not need it, you can remove the scheduling. It can come in handy when you want to run it outside of office hours because the script most likely will impact performance.
  • The selection in getcomposites_run.sql only looks at running composites. Depending on your use case, you might want to change this to take all composites into consideration.
  • The script has not been updated for 12c. If you happen to create a 12c version of this script (I think not much should have to be changed), please inform me so I can add it to the Github repository.
Considerations
  • The script contains some repetition of code. This could be improved.
  • If you have much data in your SOAINFRA tables, the query will be slow. It could take hours. During this period, performance might be adversely affected.
  • That I had to create a script like this (first try this, then this, then this, etc) indicates that I encountered a situation in which there was not a single way to link composite instances to a specific identifier. If your project uses strict standards and these standards are enforced, a script like this would not be needed. For example, you set your composite instance title to reflect your main entity identifier or use specific sensors. In such a case, you do not need to fall back to parsing audit data.

The post Oracle SOA Suite: Find that composite instance! appeared first on AMIS Oracle and Java Blog.

Red Samurai and Oracle PaaS JCS Success - JET/ADF BC REST Cloud Production Application

Andrejus Baranovski - Mon, 2017-03-20 00:34
I would like to share our success with an Oracle PaaS service - Java Cloud (JCS). We managed to implement a JET/ADF BC REST system in a short period of time and deploy it to production on Oracle JCS. The UI is implemented entirely with JET, while the back-end REST services run on top of ADF BC. Secure access is controlled by a unique ID managed by server-side ADF Security.

The production application implements custom invoice-processing logic for a startup candy factory/distributor in Lithuania. Oracle Cloud requires no administration on our side and provides all the services needed to run a custom application; this is the primary reason the decision was made to use Oracle Cloud. In the next phase we are going to implement warehouse management logic, tightly integrated with invoice processing.

JET UI

Home screen displays dashboard with financial data for orders, customers and invoice items:


Menu structure (JET Router) is focused around implemented business logic:

1. Dashboard - statistical data display
2. New Invoice - invoice creation module
3. Template Setup - invoice template setup, to speed up invoice creation
4. Invoice Search - search through all invoices
5. Customer Setup - customer data management
6. Supplier Setup - supplier data management

The Invoice Search screen implements a form block and a results table with pagination. The screen uses a set of JET components, from input lists to a responsive layout:


The user has the option to select an invoice for editing. Various invoice fields can be changed on this screen. The UI remains responsive and can be rendered on a mobile device or in a JET Hybrid application - this was one of the key requirements:


Invoice items are edited in a JET dialog. This is a very convenient approach: the JET dialog is fast and makes it easy to switch from one item to another:


Development process - Oracle Developer Cloud Service

The application was developed using Oracle Developer Cloud Service. We think the Agile boards/sprints defined in DevCS are really helpful. The offered functionality simplifies task management and monitoring, and you can see the progress in the sprint:


DevCS provides a Git source control repository, an agile board and issue tracking. It also gives you build automation, which we used to re-build changes committed to the ADF BC REST repository:


After the build process completes, there is an option to redeploy the latest EAR package directly to JCS. This saves time: there is no need to repeat the routine redeployment steps yourself each time a new build completes:


Application Structure

The application is divided into two parts (two separate deployments): the ADF BC REST back end and the JET UI.

The ADF BC REST part implements the back-end REST services and provides authentication/authorization control. We are using regular ADF BC development, together with the REST interface provided out of the box starting from ADF 12.2.1:


The JET UI application follows a modular architecture: each use case is implemented in a separate module. Common logic (REST service definitions, etc.) is moved out into controller modules. Client-side business logic is implemented in JET module JavaScript functions and rendered with JET UI components:


Oracle Cloud Deployment

We are using an Oracle Java Cloud Service instance to run both the ADF BC REST app and JET. ADF can't be deployed to the Oracle Application Container Cloud service, but JET can be hosted from Java Cloud, so it made more sense for us to use only Java Cloud and serve both the server side and the client side from the same instance.

Both demo and prod environments run from the same Cloud instance, targeted to different Managed Servers (to simplify maintenance):


If we look at the application runtime statistics, the ADF BC REST application mainly executes the ADF REST servlet (to produce REST responses), our custom PDF servlet, and the ADF authentication servlet to execute session logout:


The JET wrapper application doesn't run any server-side logic; it simply returns the JET application content to the client - only the File Servlet is invoked:


Performance

JET runs on the client side, so there are far fewer server-side calls compared to an ADF Faces application. In this example we change the invoice status to Submitted. As a result several REST calls are executed, each running below 100 ms and transferring just a little data. A key difference between JET and ADF Faces is that a REST call doesn't block client functionality: it can execute asynchronously while the user continues to work with the application:


In the example below, we navigate to the invoice list screen. The status of invoice MAR-36 was changed to Submitted, and the invoice line data is re-fetched to display up-to-date information in the table for that particular row. The row data is re-fetched in under 100 ms:


Let's check navigation in the table. We navigate to the last page. The REST request completes in around 100 ms and returns only the set of rows for the last page:


When the user logs out of the JET application, we issue a request to the adfAuthentication servlet with the parameter logout=true. This closes the ADF BC REST session on the server side:

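A minimal sketch of what such a logout call could look like from JET JavaScript code (the servlet path and redirect target are assumptions; the post does not show the actual code):

// Hypothetical logout helper: call the ADF BC REST application's
// adfAuthentication servlet with logout=true to close the server-side session.
function logout() {
  return fetch('/restapp/adfAuthentication?logout=true', {
    method: 'GET',
    credentials: 'include' // send the session cookie so the correct session is closed
  }).then(function () {
    // once the server session is closed, return the user to the login page
    window.location.href = '/index.html';
  });
}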

Summary

1. A JET UI with an ADF BC REST back end can be used in a production system

2. JET UI and ADF BC REST run great on Oracle Java Cloud

3. Oracle Developer Cloud Service is useful for JET and ADF development

Cloudera’s Data Science Workbench

DBMS2 - Sun, 2017-03-19 19:41

0. Matt Brandwein of Cloudera briefed me on the new Cloudera Data Science Workbench. The problem it purports to solve is:

  • One way to do data science is to repeatedly jump through the hoops of working with a properly-secured Hadoop cluster. This is difficult.
  • Another way is to extract data from a Hadoop cluster onto your personal machine. This is insecure (once the data arrives) and not very parallelized.
  • A third way is needed.

Cloudera’s idea for a third way is:

  • You don’t run anything on your desktop/laptop machine except a browser.
  • The browser connects you to a Docker container that holds (and isolates) a kind of virtual desktop for you.
  • The Docker container runs on your Cloudera cluster, so connectivity-to-Hadoop and security are handled rather automagically.

In theory, that’s pure goodness … assuming that the automagic works sufficiently well. I gather that Cloudera Data Science Workbench has been beta tested by 5 large organizations and many 10s of users. We’ll see what is or isn’t missing as more customers take it for a spin.

1. Recall that Cloudera installations have 4 kinds of nodes. 3 are obvious:

  • Hadoop worker nodes.
  • Hadoop master nodes.
  • Nodes that run Cloudera Manager.

The fourth kind are edge/gateway nodes. Those handle connections to the outside world, and can also run selected third-party software. They also are where Cloudera Data Science Workbench lives.

2. One point of this architecture is to let each data scientist run the languages and tools of her choice. Docker isolation is supposed to make that practical and safe.

And so we have a case of the workbench metaphor actually being accurate! While a “workbench” is commonly just an integrated set of tools, in this case it’s also a place for you to use other tools you personally like and bring in.

Surely there are some restrictions as to which tools you can use, but I didn’t ask for those to be spelled out.

3. Matt kept talking about security, to an extent I recall in almost no other analytics-oriented briefing. This had several aspects.

  • As noted above, a lot of the hassle of Hadoop-based data science relates to security.
  • As also noted above, evading the hassle by extracting data is a huge security risk. (If you lose customer data, you’re going to have a very, very bad day.)
  • According to Matt, standard uses of notebook tools such as Jupyter or Zeppelin wind up having data stored wherever code is. Cloudera’s otherwise similar notebook-style interface evidently avoids that flaw. (Presumably, if you want to see the output, you rerun the script against the data store yourself.)

4. To a first approximation, the target users of Cloudera Data Science Workbench can be characterized the same way BI-oriented business analysts are. They’re people with:

  • Sufficiently good quantitative skills to do the analysis.
  • Sufficiently good computer skills to do SQL queries and so on, but not a lot more than that.

Of course, “sufficiently good quantitative skills” can mean something quite different in data science than it does for the glorified arithmetic of ordinary business intelligence.

5. Cloudera Data Science Workbench doesn’t have any special magic in parallelization. It just helps you access the parallelism that’s already out there. Some algorithms are easy to parallelize. Some libraries have parallelized a few algorithms beyond that. Otherwise, you’re on your own.

6. When I asked whether Cloudera Data Science Workbench was open source (like most of what Cloudera provides) or closed source (like Cloudera Manager), I didn’t get the clearest of answers. On the one hand, it’s a Cloudera-specific product, as the name suggests; on the other, it’s positioned as having been stitched together almost entirely from a collection of open source projects.

Categories: Other

Quick Check for Corrupt Block (file #, block #)

Michael Dinh - Sun, 2017-03-19 13:21

$ egrep -n '^Corrupt block relative dba| ^Reread of blocknum' alert.log| tail -50

124974:Corrupt block relative dba: 0x04c7d3be (file 12, block 312452)
124983:Reread of blocknum=312452, file=+DATA/db/datafile/db_data03.329.456465464. found valid data

124974|124983 are line number from alert.log

Since the reread found valid data, there is no need to proceed further: the block has been validated. Typically, this occurs when there is a fractured block.

If you have doubts:
RMAN> backup validate check logical datafile 12;
SQL> select * from v$database_block_corruption;

Find corrupted segment.

select owner, segment_name, segment_type, partition_name
from dba_extents
where file_id = 12
and 312452 between block_id and block_id + blocks - 1
and rownum = 1
;

OWNER
------------------------------
SEGMENT_NAME
-----------------------------
SEGMENT_TYPE	   
------------------------------
PARTITION_NAME
------------------------------
MDINH
T1
TABLE PARTITION    
T1_P8

What is the true meaning and purpose of Business Intelligence?

Nilesh Jethwa - Sun, 2017-03-19 13:15

Companies in a variety of industries are still data-rich but information-poor, and the gap keeps widening every day. The rate of data generation is so high that organizations simply cannot keep up with generating insights out of it.

These enterprises lack the kind of actionable information and analytical tools needed to improve profits and performance. Business intelligence (BI) is the next logical step: management must start thinking about how to capitalize on the potential of BI to improve profit and performance.

Many companies have embarked on the data warehouse journey and yet failed to use it to achieve BI. If you think the purpose of data warehousing (DW) efforts is simply to keep producing more reports, then the organization has failed to achieve true BI.

Read more at http://www.infocaptor.com/dashboard/business-intelligence

TNS Issues when connecting to database with Forms Builder 12.2.1.2.0

Tom Kyte - Sun, 2017-03-19 07:26
I am getting the following issue when trying to connect to the database with Oracle Forms Builder (12.2.1.2.0): ORA-12154: TNS: Could not resolve the connect identifier specified. I am using: username: hr password: hrpassword database: pdb...
Categories: DBA Blogs
