Feed aggregator

Apache Impala Internals Deep Dive with Tanel Poder + Gluent New World Training Month

Tanel Poder - Tue, 2017-07-11 11:51

We are running a “Gluent New World training month” this July and have scheduled three webinars on the following Wednesdays!

The first webinar, with Michael Rainey, is going to cover modern alternatives to the traditional old-school “ETL on an RDBMS” approach for data integration and sharing. Then, the following Wednesday, I will demonstrate some of the Apache Impala SQL engine’s internals, with commentary from an Oracle database geek’s angle (I plan to get pretty deep and technical). And at the end of the month, Gluent customer Vistra Energy will talk about their journey towards a modern analytics platform.

Altogether, this should give a good overview of the architectural opportunities that modern enterprise data platforms provide, with some technical Apache Impala hacking thrills too!

Offload, Transform & Present – The New World of Data Integration

Apache Impala Internals with Tanel Poder

  • Speaker: Tanel Poder, Gluent
  • Wednesday, July 19 @ 12 PM CDT

Building an Analytics Platform with Oracle & Hadoop

  • Speakers: Gerry Moore & Suresh Irukulapati, Vistra Energy
  • Wednesday, July 26 @ 9 AM CDT

You can see the abstracts and register for the webinars here.

We plan to run more technical sessions about different modern platform components and more customer case studies in the future too. See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Check Concurrent Processing's Health with CP Analyzer

Steven Chan - Tue, 2017-07-11 11:10

In addition to helping customers resolve issues via Service Requests, Oracle Support also builds over 60 free diagnostic tools for Oracle E-Business Suite 12.2, 12.1, 12.0, and 11i. These Support Analyzers are non-invasive scripts that run health checks on your EBS environments. They look for common issues and generate standardized reports that summarize the findings, provide solutions for known issues, and recommend best practices.

Here's an index to these tools:

Spotlight on Concurrent Processing Analyzer

I recently profiled the Workflow Analyzer. In addition to that tool, I'd recommend taking a look at the Concurrent Processing Analyzer:

The Concurrent Processing Analyzer reviews CP configurations and compares them against Oracle's best practices.

This tool can be run manually or configured to run as a concurrent request, so it can be scheduled to run periodically and included in regular maintenance cycles.

Can this script be run against Production?

Yes. There is no DML in the Analyzer script, so it is safe to run against Production instances to get an analysis of a specific environment. As always, it is recommended to test all suggestions on a TEST instance before applying them to Production.

Related Articles

 

Categories: APPS Blogs

Streaming Global Cyber Attack Analytics with Tableau and Python

Rittman Mead Consulting - Tue, 2017-07-11 10:19

Introduction and Hacks

As grandiose a notion as the title may imply, there have been some really promising and powerful moves made in the advancement of smoothly integrating real-time and/or streaming data technologies into most any enterprise reporting and analytics architecture. When used in tandem with functional programming languages like Python, we now have the ability to create enterprise grade data engineering scripts to handle the manipulation and flow of data, large or small, for final consumption in all manner of business applications.

In this cavalcade of coding, we're going to use a combination of Satori, a free data streaming client, and Python to stream live worldwide cyber attack activity via an API. We'll consume the records as JSON, and then use a few choice Python libraries to parse, normalize, and insert the records into a MySQL database. Finally, we'll hook it all up to Tableau and watch cyber attacks happen in real time with a really cool visualization.


The Specs

For this exercise, we're going to bite things off a chunk at a time. We're going to utilize a service called Satori, a streaming data source aggregator that makes it easy for us to hook up to any number of streams and work with them as we please. In this case, we'll be working with the Live Cyber Attack Threat Map data set. Next, we'll set up our producer code, which will do a couple of things. First it will create the API client from which we will be ingesting a constant flow of cyber attack records. Next, we'll take these records and convert them to a data frame using the Pandas library for Python. Finally, we will insert them into a MySQL database. This will allow us to use this live feed as a source for Tableau in order to create a geo mapping of countries that are currently being targeted by cyber attacks.


The Data Source


Satori is a new-ish service that aggregates the web's streaming data sources and provides developers with a client and some sample code that they can then use to set up their own live data streams. While your interests may lie in how you can stream your own company's data, that then simply becomes a matter of using Python's requests library to get at whatever internal sources you might need. Find more on the requests library here.
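As a tiny illustration of that idea, a sketch of pulling records from an internal HTTP endpoint with requests might look like the following; the URL, token and JSON layout are purely hypothetical and not part of this project:

# A minimal sketch -- the endpoint, auth token and JSON layout are assumptions
import requests

response = requests.get(
    'https://internal.example.com/api/events',       # hypothetical internal source
    headers={'Authorization': 'Bearer <your-token>'},
    timeout=10,
)
response.raise_for_status()

# Each record could then be parsed and inserted much like the Satori messages below
for record in response.json():
    print(record)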

Satori has taken a lot of the guesswork out of the first step of the process for us, as they provide basic code samples in a number of popular languages to access their streaming service and generate records; you can find the link to that code here. Note that you'll need to install their client and get your own app key. I've added a bit of code at the end to handle the insertion of records, and to continue the flow should any records produce a warning.


Satori Code
# Imports
from __future__ import print_function

import sys  
import threading  
from pandas import DataFrame  
from satori.rtm.client import make_client, SubscriptionMode

# Local Imports
from create_table import engine

# Satori Variables
channel = "live-cyber-attack-threat-map"  
endpoint = "wss://open-data.api.satori.com"  
appkey = " "

# Local Variables
table = 'hack_attacks'


def main():

    with make_client(
            endpoint=endpoint, appkey=appkey) as client:

        print('Connected!')

        mailbox = []
        got_message_event = threading.Event()

        class SubscriptionObserver(object):
            def on_subscription_data(self, data):
                for message in data['messages']:
                    mailbox.append(message)
                got_message_event.set()

        subscription_observer = SubscriptionObserver()
        client.subscribe(
            channel,
            SubscriptionMode.SIMPLE,
            subscription_observer)

        if not got_message_event.wait(30):
            print("Timeout while waiting for a message")
            sys.exit(1)

        for message in mailbox:
            # Create dataframe
            data = DataFrame([message],
                             columns=['attack_type', 'attacker_ip', 'attack_port',
                                      'latitude2', 'longitude2', 'longitude',
                                      'city_target', 'country_target', 'attack_subtype',
                                      'latitude', 'city_origin', 'country_origin'])
            # Insert records to table
            try:
                data.to_sql(table, engine, if_exists='append')

            except Exception as e:
                print(e)

if __name__ == '__main__':  
    main()


Creating a Table

Now that we've set up the streaming code that we'll use to fill our table, we need to create the table in MySQL to hold the records. For this we'll use the SQLAlchemy ORM (object relational mapper). It's a high falutin' term for a tool that simply abstracts SQL commands to be more 'pythonic'; that is, you don't necessarily have to be a SQL expert to create tables in your given database. Admittedly, it can be a bit daunting to get the hang of, but give it a shot; many developers choose to interact with a given database either via direct SQL or via an ORM. It's good practice to keep your database connection string in a separate Python file, in this case settings.py (or some variation thereof), stored in a variable named SQLALCHEMY_DATABASE_URI and written in the following format (the mysqldb tag after the plus sign comes from the MySQL driver library you'll need to install for Python):

'mysql+mysqldb://db_user:pass@db_host/db_name'  
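The create_table.py module itself isn't reproduced here, but a minimal sketch of it might look like the following, using the column names from producer.py above; the column types and lengths are my own guesses, and the engine object is what producer.py imports:

# create_table.py -- a minimal sketch; column types/lengths are assumptions
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

from settings import SQLALCHEMY_DATABASE_URI

# Shared engine, imported by producer.py as "from create_table import engine"
engine = create_engine(SQLALCHEMY_DATABASE_URI, pool_recycle=3600)

Base = declarative_base()


class HackAttack(Base):
    """One row per cyber attack record streamed from Satori."""
    __tablename__ = 'hack_attacks'

    id = Column(Integer, primary_key=True, autoincrement=True)
    attack_type = Column(String(100))
    attacker_ip = Column(String(45))
    attack_port = Column(String(10))
    latitude2 = Column(String(50))
    longitude2 = Column(String(50))
    longitude = Column(String(50))
    city_target = Column(String(100))
    country_target = Column(String(100))
    attack_subtype = Column(String(100))
    latitude = Column(String(50))
    city_origin = Column(String(100))
    country_origin = Column(String(100))


if __name__ == '__main__':
    # Create the hack_attacks table if it doesn't already exist.
    # Note: pandas' to_sql also writes the dataframe index as an extra column
    # unless you pass index=False in producer.py, so plan for that either way.
    Base.metadata.create_all(engine)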

Don't forget to sign in to your database to validate success!


Feeding MySQL and Tableau

Now all we need to do is turn on the hose and watch our table fill up. Running producer.py, we can then open a new tab, log in to our database to make sure our table is being populated, and get to work. Create a new connection in Tableau to your MySQL database (I've called mine 'hacks') and verify that everything is in order once you navigate to the data preview. There are lots of nulls in this data set, but that's simply a matter of filtering them out on the front end.
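If you'd rather do that check from Python than from the MySQL prompt, a quick sanity query does the trick; this sketch reuses the engine assumed to live in create_table.py:

# Quick sanity check -- a sketch reusing the engine from create_table.py
from pandas import read_sql
from create_table import engine

print(read_sql('SELECT COUNT(*) AS rows_loaded FROM hack_attacks', engine))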


Tableau should pick up right away on the geo data in the dataset, as denoted by the little globe icon next to the field.

We can now simply double-click on the corresponding geo data field, in this case Country Target, and then the Number of Records field in the Measures area.

I've chosen the 'Dark' map theme for this example as it just really jives with the whole cyber attack, international espionage vibe. Note that if you're using Tableau Desktop, you'll need to maintain a live connection to your data source and refresh at whatever interval you'd like. If you're curious about how to automagically provide for this functionality, a quick Google search will turn up some solutions.

Categories: BI & Warehousing

Migration to SQL Server 2016 with deprecated data types and Full-Text

Yann Neuhaus - Tue, 2017-07-11 10:06

A few weeks ago, while preparing a migration from SQL Server 2008 to SQL Server 2016, I came across a case with deprecated data types and Full-Text activated on those columns.
To simulate this scenario, I downloaded and installed the Data Migration Assistant (DMA).

Make sure that the .Net Framework 4.5 is installed on your machine before starting the installation of the DMA.
After the installation, you start the application, which is very simple to use.
I created a database named db_to_mig.

In this database, I created 2 tables:

  • Text_With_FullText: table with a TEXT column named Description and a Full-Text index on this column
  • Text_Without_FullText: the same table without the Full-Text index

DMA_02

In the Data Migration Assistant, I created a new project. Be careful to select “SQL Server” as the target and not “Azure SQL Database” (the default).
DMA_03

Next, check that the target version is SQL Server 2016; I recommend ticking both checkboxes, “Compatibility issues” and “New features’ recommendation”.
The second step is to select the source: enter your instance and click Connect.
Select your database and click Add.
DMA_04

Then continue by clicking Start Assessment.
DMA_05

Wait for the result…
DMA_07

After the assessment, in the Review results part, I see two issues:

    • Full-Text Search has changed since SQL Server 2008

DMA_08

    • Deprecated data types Text, Image or Ntext

DMA_09

Tip: you also have the option to save the report as a JSON file.
DMA_10

My first task was to convert all TEXT, NTEXT and IMAGE columns to VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX), as the Microsoft documentation linked here advises.
First, I took a backup of my database and restored it on another server, or on the same server with a new name. Never touch the production database directly! :evil:
DMA_11

In my case, I restored the database with a new name on the same server: db_to_mig_2

DMA_12

On this copy, I changed the data type from TEXT to NVARCHAR(MAX) for the first table, the one without Full-Text, using this T-SQL command:

USE [db_to_mig_2]
GO

SELECT id,Description into #before FROM dbo.Text_Without_FullText WHERE Description is NOT NULL
GO

ALTER TABLE dbo.Text_Without_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

SELECT id,Description into #after  FROM dbo.Text_Without_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_14

I used two temporary tables to compare the results before and after the column data type change.
As you can see, the result of the SELECT with DIFFERENCE() is 4, and the value 4 indicates strong similarity or identical values.
To be sure, I advise you to do a more thorough comparison of both temporary tables, but comparing two strings with different data types is not the subject of this blog.
The conversion is fast and easy. Now I do the same for the table with Full-Text:

SELECT id,Description into #before FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

ALTER TABLE dbo.Text_With_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

SELECT id,Description into #after  FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_13

And the result, as you can see, is an error message:

Msg 7614, Level 16, State 1, Line 1
Cannot alter or drop column ‘Description’ because it is enabled for Full-Text Search.

The Full-Text index prevents me from changing the data type.
I need to drop the Full-Text index and recreate it after the data type change.

SELECT id,Description into #before FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

DROP FULLTEXT INDEX ON dbo.Text_With_FullText;
GO

ALTER TABLE dbo.Text_With_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

CREATE FULLTEXT INDEX ON  dbo.Text_With_FullText(Description)
KEY INDEX PK_Text_With_FullText ON FT_Catalog
WITH STOPLIST = SYSTEM,CHANGE_TRACKING OFF, NO POPULATION;
GO

SELECT id,Description into #after  FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from  #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_15

Et voilà! It is done. Be careful: if your Full-Text index contains more columns, you need to replace the CREATE FULLTEXT INDEX with an ALTER FULLTEXT INDEX ... ADD (column).
To finish, I reran the Data Migration Assistant and saw that the only remaining issue was the Full-Text Search change.

DMA_16

 

The post Migration to SQL Server 2016 with deprecated data types and Full-Text appeared first on Blog dbi services.

Building a digital dashboard for small business

Nilesh Jethwa - Tue, 2017-07-11 09:32

You take pains to make the most out of every dollar you earn. Do you have the same attitude for data?

Data that is used well is a great tool to help small business owners make better decisions.

Digital reporting and performance dashboards can improve a business’ performance dramatically, but that’s only possible in your own situation if you use the right tools and strategies.

Here we will look at what it takes to build an effective dashboard for your small business.

It’s easy to build a dashboard, but there are a few questions to answer

Creating a dashboard for performance tracking is not difficult. There are plenty of free resources out there, which we will mention in the next sections.

There are a few questions that you need to answer though before deciding which type of dashboard fits your business best.

  • Who is your audience – It is important that you know who your dashboard’s viewers are. Also, what metrics matter to them?
  • What are the most important metrics – Beyond determining your audience and the metrics that matter to them, you also need to pinpoint which of those metrics matter most.

The most important metrics should appear larger and in the top left-hand corner of the dashboard. That’s the primary part of the screen of your digital dash.

  • When thinking about the metrics to include, you also want to know what these metrics should be compared against.

 

Read more at http://www.infocaptor.com/dashboard/what-it-takes-to-build-a-digital-dashboard-for-small-business

Why vendor support is a good choice for deploying predictive/preventive support

Chris Warticki - Tue, 2017-07-11 08:41

Author:
Elaina Stergiades
Research Manager, Software and Hardware Support Services, IDC

The previous discussion highlighted the key potential benefits of purchasing support directly from the original hardware or software vendor to help resolve IT problems quickly – i.e. break/fix, or what IDC calls “reactive support.”  While the need for reactive support will continue to be important in support, recent IDC research shows that most IT organizations are finding that reactive support alone is not enough to manage their complex technology landscapes.  More and more, CIOs and IT managers need support providers who can help prevent IT problems from damaging critical business systems.  Whether it’s reducing true system down situations, or avoiding performance degradations that slow users to a crawl, IT organizations need the assurance that business leaders can do their jobs 24/7/365.

To accomplish this, IT organizations are expanding their use of advanced predictive and preventive support capabilities across their environments.  These capabilities are typically a complex mix of tools, utilities, online websites and IT process improvements that can immediately and dramatically reduce system down issues and performance issues across the IT landscape.  In addition, advanced preventive and predictive support is expanding quickly with recent advancements in artificial intelligence, cognitive computing, and machine learning.  IDC expects that advancements like expanded self-healing and automated problem diagnosis and resolution will become table-stakes for support in the next 5 to 7 years.

As more business leaders demand top performance from their IT organizations, often through extreme service level agreements, adopting preventive and predictive support technology is vital.  These advanced tools are a key first step to reducing risk and improving resiliency across the IT landscape.  However, for most hardware and software deployments, preventive and predictive support is best performed via deep integration with the underlying technologies.  Bolt-on tools and piecemeal utilities alone are not as effective as functionality integrated into the hardware and software itself. By purchasing support directly from the original vendors, IT organizations will have access to advanced preventive and predictive support technologies and capabilities – and can take advantage of the many potential benefits they can provide.

When considering support providers for preventive and predictive support, IDC recommends making sure their support capabilities include the following:

  • Ongoing access to the latest updates and patches for software and firmware, a critical component to reducing risk and maintaining the overall health and security of IT systems
  • Advanced tools for preventive support measures that are integrated directly into the hardware and software, with protected IP that can solve problems before they affect critical technology
  • Pairing predictive and preventive support with remote services delivery when problems do occur, which can help ensure faster identification and resolution
  • Ongoing updates to these predictive and preventive support tools, using machine learning and artificial intelligence to improve problem identification and resolution

IDC also recommends looking for support from vendors with a demonstrated history of significant ongoing investment in support technologies and capabilities over time, introducing new deliverables on a regular basis.  As technology landscapes continue to change very rapidly, having the assurance of the latest innovations in support functionality is an important part of a secure risk-avoidance strategy. 

 

Elaina Stergiades is the Research Manager for IDC's Software Support Services program. In this position, she provides insight and analysis of industry trends and market strategies for software vendors supporting applications, development environment and systems software. Elaina is also responsible for research, writing and program development of the software support services market.

Prior to joining IDC, Elaina spent 10 years in the software and web design industries. As a quality assurance engineer at Parametric Technology and Weather Services International (WSI), she led testing efforts for new applications and worked closely with customers to design and implement new functionality. Elaina also worked in product marketing at WSI, directing an initiative to launch a new weather crawl system. More recently, she was a project manager at Catalyst online. At Catalyst, Elaina was responsible for managing client search marketing campaigns targeting increased website traffic, revenue and top search engine rankings.

Elaina has a B.S. in mechanical engineering from Cornell University and an M.B.A. from Babson College.

Video: Think in a Functional Style to Produce Concise Code

OTN TechBlog - Tue, 2017-07-11 07:23

The addition of lambda and Streams to Java 8 made it much easier for developers to think in a functional style to produce concise, readable code. In this interview, Josh Backfield, a senior software engineer at Booz Allen Hamilton, digs into some of details and recaps his Oracle Code Atlanta technical session.

Additional Resources

 

 

SQL Server 2016: patching CU with R Services

Yann Neuhaus - Tue, 2017-07-11 01:48

As a good DBA, I try to stay up to date with all Cumulative Updates (CU) at my customers.
This was the first time I ran an update for SQL Server 2016, with CU3.
I downloaded the CU from the Microsoft website and began my patching campaign on all SQL Server 2016 instances.

The first one is quick & successful.
The second one, with R Services, is a little bit different.
SQL2016CU3_01

After the feature selection, you need to accept the “R Services download”.
SQL2016CU3_02

The servers are not able to access the Internet to download the R Services packages.
SQL2016CU3_03

A new step appears, “Offline Installation of Microsoft R Open and Microsoft R Server”, with two reference links for downloading the two packages.
I advise you to download both: SRO_xxx is for Microsoft R Open and SRS_xxx is for Microsoft R Server. It is not strictly necessary, but I think it is always good to have all the packages for a build.
Then you just have to copy the two CAB files to your server and select that folder in the update window.
SQL2016CU3_04

The installation of the Cumulative Update then continues as usual.
The link here gives you the list of Microsoft R Open and Microsoft R Server packages per build; it’s a very useful reference.

PS: I do not use the command-line installation, but the relevant flags are explained on this website.

Nice patching to you!

To finish, a little tip: in SQL Server 2017, you will have to do the same for the Python language.
SQL2016CU3_05

 

The post SQL Server 2016: patching CU with R Services appeared first on Blog dbi services.

How to use bind variables

Tom Kyte - Tue, 2017-07-11 00:46
I am trying to use bind variables for the 1st time. I have a part of the code here from a test stored procedure- Declare x number; y number; z number; a date; b date; c number; d number; e number; execute immediate 'insert...
Categories: DBA Blogs

Build single SQL for multiple condition

Tom Kyte - Tue, 2017-07-11 00:46
Hi Connor/Chris, Can you pleas help to build a query based on below inputs and condition: table structe is shared in liveSQL as well. <code>DROP TABLE TEST_LOGIN_MASTER; DROP TABLE TEST_LOGIN_REQUEST; DROP TABLE TEST_LOGIN_ACCESS; CREAT...
Categories: DBA Blogs

ADF 12c BC Proxy User DB Connection and Save Point Error

Andrejus Baranovski - Mon, 2017-07-10 14:35
If you are modernising an Oracle Forms system, chances are high that you need to rely on a DB proxy connection. Read more about it in my previous post for ADF 11g - Extending Application Module for ADF BC Proxy User DB Connection. It works the same way in ADF 12c, but there is an issue related to handling DB errors when the DB proxy connection is on. The DB error is propagated to ADF but is substituted by a savepoint error (as a result, the user does not see the original error from the DB). It seems to be related to the JDBC driver in 12c. The workaround is to override the ADF SQL builder class and disable savepoint error propagation (there might be better ways to work around it).

The proxy connection is established from the prepareSession method in a generic AM Impl class:


If I change the salary value to a negative number and save the data, a DB constraint error fires (negative values are not allowed). Unfortunately, the end user does not see that error; he gets a message about a failed savepoint:


Workaround: we can disable savepoint error propagation. Override the SQL Builder class and add a try/catch block in the rollbackToSavepoint method. If an error happens, do nothing:


You must register the SQL Builder class with the AM. Add the jbo.SQLBuilderClass property in bc4j.xcfg, pointing to the class:


You should be able to see DB errors after this change is applied:


However, there is one drawback of this workaround to keep in mind. When data is posted to the DB, ADF executes a lock statement. If the update fails, ADF would normally roll back to the savepoint and the lock would be removed. Not so in the case of the DB proxy: now the rollback to the savepoint fails, which means the lock will stay:


If the user fixes the data and tries to save again, a lock error is returned:


Error during lock:


To bypass the lock issue, you should enable DB pooling for the AM instance. In this case, after each request the DB connection is returned to the pool and the lock is released automatically:


Download sample application - AMExtendApp_v3.zip.

Bash: enabling Eclipse for Bash Programming | Plugin Shelled (shell editor)

Dietrich Schroff - Mon, 2017-07-10 14:14
After writing several posts about useless shell commands, I tried to enable Eclipse for working with the Bourne Again Shell (bash).

The first step is to get a plugin for syntax highlighting. The ShellEd plugin (https://sourceforge.net/projects/shelled/) is very easy to find. Just download the zip

and in Eclipse go to "Help --> Install New Software".
There you have to add the archive:
And all the other clicks are straightforward:





After the installation of the plugin you have to restart your Eclipse IDE and then the editor understands bash commands:


The configuration can be done via -> Window -> Preferences

If you want to set up your own coloring scheme, you can customize it within -> Shell Script -> Editor -> Syntax coloring




Benefits Of Log File Analysis

Nilesh Jethwa - Mon, 2017-07-10 13:31

Every time any page is requested from your website by a human or another program or an automated bot, the event is tracked in a log file that is stored on the web server. If you have installed Google Analytics, then google will tell you all about the visitor and page analytics. It will tell you how many users, what are the top pages etc. In the case of Log File Analyzer, you can get similar insights into your visitor statistics and page analytics.
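To make the idea concrete, here is a minimal sketch of what a log file analyzer does at its core; the log path and the common/combined log format are my assumptions, not details from the article:

# parse_access_log.py -- a minimal sketch; the log path and log format are assumptions
import re
from collections import Counter

# Typical Apache/Nginx common log line: host ident user [time] "METHOD /path HTTP/x.x" status bytes
LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

def count_requests(log_path):
    """Count successfully served paths, including hits Google Analytics never sees (bots, assets, untagged pages)."""
    hits = Counter()
    with open(log_path, errors='replace') as f:
        for line in f:
            match = LOG_LINE.match(line)
            if match and match.group('status').startswith('2'):
                hits[match.group('path')] += 1
    return hits

if __name__ == '__main__':
    for path, count in count_requests('/var/log/apache2/access.log').most_common(10):
        print(f'{count:8d}  {path}')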

So what is the difference between Google Analytics and Log File Analysis?

What do you do with the information that a typical Log File Analyser generates?

The main difference is that Google Analytics will only track pages that have the GA code added to them. It will miss any activity on pages that do not carry the GA code, simply because it cannot see them and therefore cannot report on them.

What is Technical SEO and how does it relate to Log File Analysis?

Read more at http://www.infocaptor.com/dashboard/log-file-analysis-for-technical-seo-and-other-benefits

Mobile Approvals 1.6 Now Available for iOS and Android

Steven Chan - Mon, 2017-07-10 12:44

We are pleased to announce updates for the Oracle E-Business Suite Mobile Approvals 1.6 smartphone app for iOS and Android. These updates are delivered as part of Oracle E-Business Suite Mobile Release 7, which supports both Oracle E-Business Suite Release 12.1.3 and 12.2.3 and beyond.  Oracle E-Business Suite Mobile Release 7 is a coordinated release of 17 Oracle E-Business Suite mobile apps, excluding Mobile Expenses and Mobile Field Service, which have their own off-cycle releases. 

For information on the Approvals app, see:

For more details on the updates for all EBS mobile apps, see:

What's New in Mobile Approvals 1.6

  • Ability to reassign an approval notification
  • Support for push notifications when using Oracle Mobile Cloud Service and enterprise distribution
  • Mobile Foundation:
    • Ability to import custom CA or self-signed server certificates to standard apps for TLS connections to Oracle E-Business Suite
    • Ability to download the mobile app configuration automatically from the server
    • Technical updates with uptake of Oracle Mobile Application Framework (MAF) 2.4.0

Related Articles

Categories: APPS Blogs

Global Accessories Retailer Parfois Establishes a Foundation for Accelerated Growth with Oracle Retail

Oracle Press Releases - Mon, 2017-07-10 12:17
Press Release
Global Accessories Retailer Parfois Establishes a Foundation for Accelerated Growth with Oracle Retail Gaining Operational Efficiency to Support Anticipated High Growth

Redwood Shores, Calif.—Jul 10, 2017

Today, Oracle announced that Parfois has successfully deployed the Oracle Retail Merchandise Operations Suite and the Oracle Retail Warehouse Management System to support their high growth and international expansion plans. Parfois operates over 750 stores in 58 countries around the world. In six years, Parfois expects to triple the number of retail stores in their portfolio. Parfois offers affordable and on-trend fashion accessories such as handbags, jewelry, wallets, sunglasses, belts, scarves, watches, and hair accessories. Parfois also recently introduced apparel to their assortment. Parfois designs and develops 3,500 SKUs each season, with new items and merchandising in every store, every week.

“Prior to the implementation of Oracle Retail, our merchandising processes could not scale to support new business models. Several of our core processes were supported by Excel-based tools that were prone to human error and a lack of consistency,” said Frederico Santos, Chief Information Officer and Chief Financial Officer, Parfois. “Our transformational initiative allowed us to evolve and efficiently prepare for growth opportunities today and tomorrow. By gaining visibility into inventory and adopting industry best practice, we can better anticipate demand and plan inventory placement. The robust Oracle solution provides a consistent and reliable core operations engine.”

Oracle Gold Partner Retail Consult partnered with Parfois to deploy Oracle Retail Merchandising, Oracle Retail Sales Audit, Oracle Retail Invoice Matching, Oracle Retail Trade Management and Oracle Retail Warehouse Management.  

“Retail Consult has truly partnered with us to achieve a holistic vision. Retail Consult brings the technical and business expertise to allow us to move forward together,” said Frederico Santos, Chief Information Officer and Chief Financial Officer, Parfois. “We trust Retail Consult to educate our teams to drive adoption of the solutions.”

“We are pleased to play a role in the success and expansion of Parfois. Merchandising is the backbone of the retail business and we continue to invest in our world class solutions. Our goal is to accelerate critical decision making through a powerful, modern interface and persona driven dashboards for retailers,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

default listener port

Laurent Schneider - Mon, 2017-07-10 12:13

A long time ago, Maxine Yuen registered port 1521 for the nCube License Manager.

By googling I found Ellison cleans house at nCube, and since then 1521 has been used as the default port for Oracle. Still, you’ll see nCube in the IANA.ORG service names and port numbers registry, and the nCube name in /etc/services. I don’t know which came first, Oracle using 1521 or Larry investing in nCube, but I am pretty sure it’s related 😉

$ curl https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt | grep 1521
ncube-lm 1521 tcp nCube License Manager [Maxine_Yuen] [Maxine_Yuen]
ncube-lm 1521 udp nCube License Manager [Maxine_Yuen] [Maxine_Yuen]
$ grep -w 1521 /etc/services
ncube-lm 1521/tcp # nCube License Manager
ncube-lm 1521/udp # nCube License Manager
$ netstat -a | grep ncube
tcp 0 0 *.ncube-lm *.* LISTEN

Later, though still a long time ago, Oracle officially registered 2483 and 2484 (TCPS) for listener communication, as documented under Recommended Port Numbers:
“This port number may change to the officially registered port number of 2483 for TCP/IP and 2484 for TCP/IP with SSL.”

Still, as of Oracle 12c Release 2, port 1521 is recommended.

Now, another question: do you really want to use port 1521?

On the one hand, it is convenient for a hacker to know that the listener runs on 1521 and ssh on port 22. This is configurable, of course.

On the other hand, you had better use what is assigned to Oracle. RFC 6335 defines 1024-49151 as User Ports, and 49152-65535 as Dynamic and/or Private Ports (aka ephemeral ports). Remember, if a port is already in use when you start your listener, your listener won’t start.

Remember, every network connection keeps a port busy. So if you start a network client from your database server to another server (ssh, sqlnet, mail, dns, whatever), then a port like 1028 or 57313 may be busy serving that client connection, which would prevent your listener from starting on it. If you want to use port 9999, you could look it up on IANA and ask the owner whether anything is planned for that port.

Very often, most ports are unused when you start the listener. If you find an unused port in the private range, 49152-65535, you may name it in /etc/services.
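As a quick illustration of checking for a free port before configuring a listener on it, here is a minimal sketch; it is a generic Python snippet, not an Oracle utility, and the default host and port are assumptions:

# check_port.py -- a minimal sketch, not an Oracle tool; host/port defaults are assumptions
import socket
import sys

def port_is_free(port, host='0.0.0.0'):
    """Try to bind the port the way a listener would; report whether something already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == '__main__':
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 1521
    print(f'port {port} is {"free" if port_is_free(port) else "busy"}')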

Very often I see database servers with more than one listener. Obviously, you cannot run more than one listener on port 1521. There are some cases where you want different listeners with different sqlnet.ora files or different Oracle versions, but this renders consolidation (e.g. Multitenant) more painful.

The discussion on which port to use goes obviously far beyond Oracle. There are gazillions of TCP/UDP servers running in the digital world and fewer than 65535 ports. For sure you cannot have them all on IANA.ORG, right?

In most cases, stick to Oracle recommendation, use port 1521.

ZDLRA System Activity Report

Fuad Arshad - Mon, 2017-07-10 11:30
The Recovery Appliance, or ZDLRA, is a great way to ensure consistent backup and recovery of your Oracle Database, but as DBAs we often want to see what is happening behind the covers. As with every Oracle product, there is a GUI (Enterprise Manager) as well as a command-line based environment.
The ZDLRA Development team just released a very nifty little script that is available via

Zero Data Loss Recovery Appliance System Activity Script (Doc ID 2275176.1)

The script is supposed to be used in conjunction with Enterprise Manager, as a different way of looking at activity on the system.

The script is broken down into multiple sections, and the header is very important to read and understand.

--------
 ZDLRA Activity script: 09-Jun-2017
Oracle suggests Enterprise Manager as the proper tool for monitoring
 a Zero Data Loss Recovery Appliance.  However, this simple script
 provides a different perspective on activity on the system and
 can be another aid in understanding system activity.
 The intention is that this script is run daily and only provides
 a short history of events
-------------------------------

This is followed by the version of the ZDLRA Software you are running. 
VERSION       NAME
---------------------------------------------------------------------- ---------
22-05-2017 10:06:57  ZDLRA_12.1.1.1.8.201705_LINUX.X64_RELEASE       ZDLRA

In this case, this is release 12.1.1.1.8, which was released on the 22nd of May.

Then you will see the general state of the system. In a healthy environment there will be idling schedulers, and the oldest work will be displayed. Typically the oldest work should be a couple of hours or days old; if not, that might point to some discrepancy, and an SR should be opened to evaluate the situation.

STATE  SCHEDULERS  CURRENT_TIMER_ACTION  RESOURCE_WAIT_TASK  OLD_WORK
-----  ----------  --------------------  ------------------  -----------
ON     176         Idling                UNLIMITED           21-JUN-2017

Then this is followed up by an examination of what is running on the system. On a regular system you will see both Work and Maintenance tasks.

TASK_TYPE      STATE      CURRENT_COUNT  LAST_EXECUTE_TIME     WORK_TYPE    MIN_CREATION
-------------  ---------  -------------  --------------------  -----------  ------------
PURGE_DUP      RUNNING    1              09-JUL-2017 10:46:47  Work         09-JUL-2017
CROSSCHECK_DB  TASK_WAIT  1                                    Maintenance  09-JUL-2017
VALIDATE       TASK_WAIT  1                                    Maintenance  09-JUL-2017

The next section displays the state of storage on the Recovery Appliance and can be used to understand how much space has been consumed.

TOTAL_SPACE  USED_SPACE  FREESPACE   FREESPACE_GOAL
-----------  ----------  ----------  --------------
596048.371   321061.930  274986.051  5960.484


The next sections include the status of the replication server and the task history for the last day. This is particularly helpful for assessing how things are running.

There are also sections that show the state of each database and how many days of backups are available, both locally and replicated, as well as sections that list all the incidents in the system and any non-default config changes that were made.

While Enterprise Manager is still the preferred way to see, manage and get alerted on the Recovery Appliance, this handy little script is a nice way to see the overall status of a Recovery Appliance really quickly.





 

12c MultiTenant Posts -- 6 : Partial (aka Subset) Cloning of PDB

Hemant K Chitale - Mon, 2017-07-10 09:27
Note : This is a 12.2 feature.

Normally, if you clone a PDB, you'd get a full copy with all the tablespaces.  Now, in 12.2, you can exclude non-essential tablespaces by specifying USER_TABLESPACES -- those that you want cloned.  (SYSTEM, SYSAUX and the local UNDO will certainly be cloned.)

Let me start with the "NEWPDB" PDB (that I've used in previous examples) that has one more schema and tablespace:

$sqlplus system/oracle@NEWPDB

SQL*Plus: Release 12.2.0.1.0 Production on Mon Jul 10 21:52:55 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Mon Jul 10 2017 11:04:00 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tablespace_name from dba_tablespaces order by 1;

TABLESPACE_NAME
------------------------------
HRDATA
MYDATA
SYSAUX
SYSTEM
TEMP
UNDOTBS1

6 rows selected.

SQL> col owner format a8
SQL> col segment_name format a30
SQL> col tablespace_name format a8
SQL> select owner, segment_name, tablespace_name
2 from dba_segments
3 where tablespace_name like '%DATA'
4 order by 1,2
5 /

OWNER SEGMENT_NAME TABLESPA
-------- ------------------------------ --------
HEMANT BIN$UVb24iaCIE/gUwEAAH/WaQ==$0 MYDATA
HEMANT BIN$UVb24iaIIE/gUwEAAH/WaQ==$0 MYDATA
HEMANT HKC_STORE_FILE MYDATA
HEMANT I MYDATA
HEMANT OBJ_LIST MYDATA
HEMANT SYS_IL0000073525C00003$$ MYDATA
HEMANT SYS_IL0000073532C00003$$ MYDATA
HEMANT SYS_IL0000073535C00003$$ MYDATA
HEMANT SYS_LOB0000073525C00003$$ MYDATA
HEMANT SYS_LOB0000073532C00003$$ MYDATA
HEMANT SYS_LOB0000073535C00003$$ MYDATA
HEMANT T MYDATA
HR EMPLOYEES HRDATA

13 rows selected.

SQL>
SQL> select * from hr.employees;

EMPLOYEE_ID FIRST_NAME LAST_NAME
----------- ------------------------------ ------------------------------
HIRE_DATE DEPARTMENT_ID SALARY EMAIL_ID
--------- ------------- ---------- ---------------------------------------------
1 Hemant Chitale
06-JUL-17 1 15000 hemant@mydomain.com


SQL>


Besides the HEMANT objects in the MYDATA tablespace, I now have HR owning an EMPLOYEES table in the HRDATA tablespace.

Now, I want to clone the NEWPDB PDB but exclude the HR data.

First, I set a target location for the datafiles.

$sqlplus '/ as sysdba'

SQL*Plus: Release 12.2.0.1.0 Production on Mon Jul 10 21:57:33 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter db_create_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string
SQL> alter session set db_create_file_dest='/u02/oradata';

Session altered.

SQL>


Next, I create my Partial (or SubSet) Clone PDB:

SQL> create pluggable database NONHR from NEWPDB user_tablespaces=('MYDATA');

Pluggable database created.

SQL>
SQL> select con_id, file#, name
2 from v$datafile
3 order by 1,2
4 /

CON_ID FILE#
---------- ----------
NAME
--------------------------------------------------------------------------------
1 1
/u01/app/oracle/oradata/orcl12c/system01.dbf

1 3
/u01/app/oracle/oradata/orcl12c/sysaux01.dbf

1 7
/u01/app/oracle/oradata/orcl12c/users01.dbf

1 15
/u01/app/oracle/oradata/orcl12c/undotbs2.dbf

2 5
/u01/app/oracle/oradata/orcl12c/pdbseed/system01.dbf

2 6
/u01/app/oracle/oradata/orcl12c/pdbseed/sysaux01.dbf

2 8
/u01/app/oracle/oradata/orcl12c/pdbseed/undotbs01.dbf

3 9
/u01/app/oracle/oradata/orcl12c/orcl/system01.dbf

3 10
/u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf

3 11
/u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf

3 12
/u01/app/oracle/oradata/orcl12c/orcl/users01.dbf

3 13
/u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf

3 14
/u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf

4 16
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSTEM_FNO-16_0as7a8di

4 17
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSAUX_FNO-17_09s7a8d2

4 18
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-UNDOTBS1_FNO-18_0bs7a8e1

4 19
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4

4 20
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-HRDATA_FNO-20_0ds7a8e5

5 21
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_system_dp72
3vp5_.dbf

5 22
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_sysaux_dp72
3vsz_.dbf

5 23
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_undotbs1_dp
723vt1_.dbf

5 24
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_mydata_dp72
3vt3_.dbf


22 rows selected.

SQL>
SQL> select con_id, name, open_mode
2 from v$pdbs
3 order by 1
4 /

CON_ID
----------
NAME
--------------------------------------------------------------------------------
OPEN_MODE
----------
2
PDB$SEED
READ ONLY

3
ORCL
READ WRITE

4
NEWPDB
READ WRITE

5
NONHR
MOUNTED


SQL>
SQL> alter pluggable database nonhr open;

Pluggable database altered.

SQL>


I can identify the new PDB "NONHR" as CON_ID=5.
Note that in the CREATE PLUGGABLE DATABASE command with the USER_TABLESPACES clause, I could also specify COPY, NOCOPY, MOVE, NO DATA or even SNAPSHOT COPY.  This is the simplest subset clone: a copy with data.

Let's create the TNSNAMES.ORA entry for NONHR:

NONHR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = nonhr)
)
)


Let's now connect to NONHR and confirm its contents.

SQL> connect system/oracle@NONHR
Connected.
SQL> show con_id

CON_ID
------------------------------
5
SQL> show con_name

CON_NAME
------------------------------
NONHR

SQL> select tablespace_name
2 from dba_tablespaces
3 order by 1
4 /

TABLESPACE_NAME
------------------------------
HRDATA
MYDATA
SYSAUX
SYSTEM
TEMP
UNDOTBS1

6 rows selected.

SQL> select file_name from dba_data_files
2 where tablespace_name = 'HRDATA'
3 /

no rows selected

SQL> select owner, segment_name, segment_type
2 from dba_segments
3 where tablespace_name = 'HRDATA'
4 /

no rows selected

SQL>
SQL> select tablespace_name, file_name
2 from dba_data_files
3 order by 1
4 /

TABLESPACE_NAME
------------------------------
FILE_NAME
--------------------------------------------------------------------------------
MYDATA
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_mydata_dp72
3vt3_.dbf

SYSAUX
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_sysaux_dp72
3vsz_.dbf

SYSTEM
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_system_dp72
3vp5_.dbf

UNDOTBS1
/u02/oradata/ORCL12C/53F8012866211264E0530100007FD493/datafile/o1_mf_undotbs1_dp
723vt1_.dbf


SQL>
SQL> select segment_name, segment_type
2 from dba_segments
3 where owner = 'HR'
4 /

no rows selected

SQL> select username
2 from dba_users
3 where username = 'HR'
4 /

USERNAME
--------------------------------------------------------------------------------
HR

SQL>
SQL> select object_name, object_type
2 from dba_objects
3 where owner = 'HR'
4 /

OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
-----------------------
EMPLOYEES
TABLE


SQL>
SQL> select owner, segment_name
2 from dba_segments
3 where tablespace_name = 'MYDATA'
4 /

OWNER SEGMENT_NAME
-------- ------------------------------
HEMANT BIN$UVb24iaCIE/gUwEAAH/WaQ==$0
HEMANT BIN$UVb24iaIIE/gUwEAAH/WaQ==$0
HEMANT HKC_STORE_FILE
HEMANT I
HEMANT OBJ_LIST
HEMANT SYS_IL0000073525C00003$$
HEMANT SYS_IL0000073532C00003$$
HEMANT SYS_IL0000073535C00003$$
HEMANT SYS_LOB0000073525C00003$$
HEMANT SYS_LOB0000073532C00003$$
HEMANT SYS_LOB0000073535C00003$$
HEMANT T

12 rows selected.

SQL>
SQL> select count(*) from hemant.obj_list;

COUNT(*)
----------
145282

SQL>


So, what has been copied to the NONHR PDB?  The HRDATA Tablespace, but not the DataFile.  The HR User and Table (definition only, no data), but not the Segment.
However, for the MYDATA Tablespace that was identified as a USER_TABLESPACE in the CREATE PLUGGABLE DATABASE statement, the Tablespace, Datafile, User, Table and Segment have all been copied.

Therefore, NONHR does not have the HR data!   I can drop the User and Tablespace.

SQL> drop tablespace hrdata including contents;

Tablespace dropped.

SQL> drop user hr;

User dropped.

SQL>


However, HR is still present in NEWPDB where NONHR was cloned from:

SQL> connect system/oracle@NEWPDB
Connected.
SQL> select owner, segment_name
2 from dba_segments
3 where tablespace_name = 'HRDATA'
4 /

OWNER SEGMENT_NAME
-------- ------------------------------
HR EMPLOYEES

SQL> select * from hr.employees;

EMPLOYEE_ID FIRST_NAME LAST_NAME
----------- ------------------------------ ------------------------------
HIRE_DATE DEPARTMENT_ID SALARY EMAIL_ID
--------- ------------- ---------- ---------------------------------------------
1 Hemant Chitale
06-JUL-17 1 15000 hemant@mydomain.com


SQL> show con_id

CON_ID
------------------------------
4
SQL> show con_name

CON_NAME
------------------------------
NEWPDB
SQL>
SQL> select tablespace_name, file_name
2 from dba_data_files
3 order by 1
4 /

TABLESPACE_NAME
------------------------------
FILE_NAME
--------------------------------------------------------------------------------
HRDATA
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-HRDATA_FNO-20_0ds7a8e5

MYDATA
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4

SYSAUX
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSAUX_FNO-17_09s7a8d2

SYSTEM
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-SYSTEM_FNO-16_0as7a8di

UNDOTBS1
/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-UNDOTBS1_FNO-18_0bs7a8e1


SQL>


So, 12.2 introduces the ability to create a clone PDB that is a subset (i.e. only selected user tablespaces' data) of an existing PDB.

(Note : NEWPDB is in /u03 where it was moved from /u02 earlier as a Relocated Database while NONHR is in /u02 where it was created with OMF based on DB_CREATE_FILE_DEST).
.
.
.

Categories: DBA Blogs

Enabling A Modern Analytics Platform

Rittman Mead Consulting - Mon, 2017-07-10 09:03

Over recent years, bi-modal analytics has gained interest and, dare I say it, a level of notoriety, thanks to Gartner’s repositioning of its Magic Quadrant in 2016. I’m going to swerve the debate, but if you are not up to speed, then I recommend taking a look here first.

Regardless of your chosen stance on the subject, one thing is certain: the ability to provision analytic capabilities in more agile ways and with greater end user flexibility is now widely accepted as an essential part of any modern analytics architecture.

But are there any secrets or clues that could help you in modernising your analytics platform?

What Is Driving the Bi-Modal Shift?

The demand for greater flexibility from our analytics platforms has its roots in the significant evolutions seen in the businesses environment. Specifically, we are operating in/with:

  • increasingly competitive marketplaces, requiring novel ideas, more tailored customer relationships and faster decisions;
  • turbulent global economies, leading to a drive to reduce (capex) costs, maximise efficiencies and a need to deal with increased regulation;
  • broader and larger, more complex and more externalised data sets, which can be tapped into with much reduced latency;
  • empowered and tech-savvy departmental users, with an increased appetite for analytical decision making, combined with great advances in data discovery and visualisation technologies to satisfy this appetite;

In a nutshell, the rate at which change occurs is continuing to gather pace and so to be an instigator of change (or even just a reactor to it as it happens around you) requires a new approach to analytics and data delivery and execution.


Time to Head Back to the Drawing Board?

Whilst the case for rapid, user-driven analytics is hard to deny, does it mean that our heritage BI and Analytics platforms are obsolete and ready for the scrap heap?

I don’t think so: The need to be able to monitor operational processes, manage business performance and plan for the future have not suddenly disappeared; The need for accurate, reliable and trusted data which can be accessed securely and at scale is as relevant now as it was before. And this means that, despite what some might have us believe, all the essential aspects of the enterprise BI platforms we have spent years architecting, building and growing cannot be simply wiped away.

[Phew!]

Instead, our modern analytics platforms must embrace both ends of the spectrum equally: highly governed, curated and trustworthy data to support business management and control, coupled with highly available, flexible, loosely governed data to support business innovation. In other words, both modes must coexist and function in a relative balance.

The challenge now becomes a very different one: how can we achieve this in an overarching, unified business architecture which supports departmental autonomy, encourages analytical creativity and innovation, whilst minimising inefficiency and friction? Now that is something we can really get our teeth into!


What’s IT All About?

Some questions:

  • Do you have a myriad of different analytics tools spread across the business which are all being used to fulfil the same ends?
  • Are you constantly being asked to provide data extracts or have you resorted to cloning your production database and provisioning SQL Developer to your departmental analysts?
  • Are you routinely being asked to productionise things that you have absolutely no prior knowledge of?

If you can answer Yes to these questions, then you are probably wrestling with an unmanaged or accidental bi-modal architecture.

At Rittman Mead, we have seen several examples of organisations who want to hand greater autonomy to departmental analysts and subject matter experts, so that they can get down and dirty with the data to come up with novel and innovative business ideas. In most of the cases I have observed, this has been driven at a departmental level and instead of IT embracing the movement and leading the charge, results have often been achieved by circumventing IT. Even in the few examples where IT have engaged in the process, the scope of their involvement has normally been focused on the provision of hardware and software, or increasingly, the rental of some cloud resources. It seems to me that the bi-modal shift is often perceived as a threat to traditional IT, that it is somehow the thin end of a wedge leading to full departmental autonomy and no further need for IT! In reality, this has never been (and will never be) the ambition or motivation of departmental initiatives.

In my view, this slow and faltering response from IT represents a massive missed opportunity. More importantly though, it increases the probability that the two modes of operation will be addressed in isolation and this will only ever lead to siloed systems, siloed processes and ultimately, a siloed mentality. The creation of false barriers between IT and business departments can never be a positive thing.

That’s not to say that there won’t be any positive results arising from un-coordinated initiatives, it’s just that unwittingly, they will cause an imbalance in the overall platform: You might deliver an ultra-slick, flexible, departmentally focused discovery lab, but this will encourage the neglect and stagnation of the enterprise platform. Alternatively, you may have a highly accurate, reliable and performant data architecture with tight governance control which creates road-blocks for departmental use cases.


Finding the Right Balance

So, are there any smart steps that you can take if you are looking to build out a bi-modal analytics architecture? Well, here are a few ideas that you should consider as factors in a successful evolution:

1. Appreciate Your Enterprise Data Assets

You’ve spent a lot of time and effort developing and maintaining your data warehouse and defining the metadata so that it can be exposed in an easily understandable and user friendly way. The scope of your enterprise data also provides a common base for the combined data requirements for all of your departmental analysts. Don’t let this valuable asset go to waste! Instead provide a mechanism whereby your departmental analysts can access enterprise data quickly, easily, when needed and as close to the point of consumption as possible. Then, with good quality and commonly accepted data in their hands, give your departmental analysts a level of autonomy and the freedom to cut loose.

2. Understand That Governance Is Not a Dirty Word

In many organisations, data governance is synonymous with red tape, bureaucracy and hurdles to access. This should not be the case. Don’t be fooled into thinking that more agile means less control. As data begins to be multi-purposed, moved around the business, combined with disparate external data sources and used to drive creativity in new and innovative ways, it is essential that the provenance of the enterprise data is known and quantifiable. That way, departmental initiatives will start with a level of intrinsic confidence, arising from the knowledge that the base data has been sourced from a well known, consistent and trusted source. Having this bedrock will increase confidence in your analytical outputs and lead to stronger decisions. It will also drive greater efficiencies when it comes to operationalising the results.

3. Create Interdependencies

Don’t be drawn into thinking “our Mode 1 solution is working well, so let’s put all our focus and investment into our Mode 2 initiatives”. Instead, build out your Mode 2 architecture with as much integration into your existing enterprise platform as possible. The more interdependencies you can develop, the more you will be able to reduce data handling inefficiencies and increase benefits of scale down the line. Furthermore, interdependency will eliminate the risk of creating silos and allowing your enterprise architecture to stagnate, as both modes will have a level of reliance on one another. It will also encourage good data management practice, with data-workers talking in a common and consistent language.

4. Make the Transition Simple

Probably the single most important factor in determining the success of your bi-modal architecture is the quality with which you can transition a Mode 2 model into something operational and production-ready in Mode 1. The more effective this process is, the more likely you are to maximise your opportunities (be it new sales revenue, operating cost etc.) and increase your RoI. The biggest barriers to smoothing this transition will arise when departmental outputs need to be reanalysed, respecified and redesigned so that they can be slotted back into the enterprise platform. If both Mode 1 and Mode 2 activity is achieved with the same tools and software vendors, then you will have a head start…but even if disparate tools are used for the differing purposes, then there are always things that you can do that will help. Firstly, make sure that the owners of the enterprise platform have a level of awareness of departmental initiatives, so that there is a ‘no surprises’ culture…who knows, their experience of the enterprise data could even be exploited to add value to departmental initiatives. Secondly, ensure that departmental outputs can always be traced back to the enterprise data model easily (note: this will come naturally if the other 3 suggestions are followed!). And finally, define a route to production that is not overbearing or cumbersome. Whilst all due diligence should be taken to ensure the production environment is risk-free, creating artificial barriers (such as a quarterly or monthly release cycle) will render a lot of the good work done in Mode 2 useless.

Categories: BI & Warehousing

Advanced Code Search for Git in Oracle Developer Cloud Service

Shay Shmeltzer - Mon, 2017-07-10 07:11

One of the new features introduced in a recent monthly update of Oracle Developer Cloud Service is the advanced code search box you can see at the top right when you look at your Git repositories. This search is separate from the regular project artifact search box found in the other sections of DevCS.

search screen

This search functionality is language aware, supporting a variety of languages including Java, JavaScript, HTML and CSS. It scans and indexes your code to understand its structure. DevCS can then perform context-aware searches for objects in your code, offering autosuggest and even supporting camelCase in the search box.

In the short video below I show you how this works. I start by importing code from a random github project into DevCS - and then I perform a search and show you how to find out the files, lines of code & revision references to your search term. You'll also see how code navigation works in the browser.

For more information about this capability have a look at the documentation here.

 

Categories: Development
