Feed aggregator

SQL Tuning Advisor - not really sure what it does

Tom Kyte - Wed, 2017-06-28 01:06
Good Morning, I am trying to understand what each of the three settings for the CBO really do. 1) There is the "normal", which allows the CBO to run as usual to create execution plans within a short time. 2) There is the Tuning Mode- Limit...
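
For reference, a minimal sketch of how a tuning-mode run is typically kicked off through DBMS_SQLTUNE (the sql_id and task name below are placeholders):

DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create a tuning task for one statement, limited to 60 seconds
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => 'abcd1234efgh5',      -- hypothetical sql_id
              time_limit => 60,
              task_name  => 'demo_tuning_task');
  -- Run the optimizer in tuning mode (it can take far longer than normal mode)
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'demo_tuning_task');
END;
/
-- Review the advisor's findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;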
Categories: DBA Blogs

Find name of reference partitioned child tables

Tom Kyte - Wed, 2017-06-28 01:06
We want to delete the records from child tables (e.g. 'child1', 'child2', etc.) before we start deleting the records from the parent table in a specific partition we are interested in, e.g. 'P_COMPLETED_20160519'. Child tables are partitioned using...
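
One hedged way to find the reference-partitioned children of a given parent is through the dictionary; a sketch (the parent owner and table name are placeholders):

-- Reference-partitioned tables record the foreign key that drives their
-- partitioning in DBA_PART_TABLES.REF_PTN_CONSTRAINT_NAME.
SELECT pt.owner, pt.table_name
  FROM dba_part_tables pt
  JOIN dba_constraints c
    ON c.owner = pt.owner
   AND c.constraint_name = pt.ref_ptn_constraint_name
  JOIN dba_constraints pc
    ON pc.owner = c.r_owner
   AND pc.constraint_name = c.r_constraint_name
 WHERE pt.partitioning_type = 'REFERENCE'
   AND pc.owner      = 'APP'        -- hypothetical parent owner
   AND pc.table_name = 'PARENT_T';  -- hypothetical parent table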
Categories: DBA Blogs

Create Insert Statements Dynamically

Tom Kyte - Wed, 2017-06-28 01:06
Tom, How do you create insert statements dynamically if I give a table name? In the TOAD tool, they have this option for each table [Create insert statements] and I was wondering what kind of logic they might have used to create them. I was trying it my...
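
One hedged sketch of the kind of logic such a tool might use: read USER_TAB_COLUMNS, build a SELECT that concatenates each row into an INSERT statement, and print the result (table EMP is hypothetical; only character and numeric columns are handled, and NULLs and embedded quotes are ignored for brevity):

DECLARE
  l_cols VARCHAR2(4000);
  l_vals VARCHAR2(4000);
  l_stmt VARCHAR2(4000);
  l_line VARCHAR2(4000);
  l_cur  SYS_REFCURSOR;
BEGIN
  FOR c IN (SELECT column_name, data_type
              FROM user_tab_columns
             WHERE table_name = 'EMP'
             ORDER BY column_id) LOOP
    IF l_cols IS NOT NULL THEN
      l_cols := l_cols || ',';
      l_vals := l_vals || q'[||','||]';
    END IF;
    l_cols := l_cols || c.column_name;
    -- Quote character columns, TO_CHAR everything else
    l_vals := l_vals ||
      CASE WHEN c.data_type IN ('VARCHAR2', 'CHAR')
           THEN q'[''''||]' || c.column_name || q'[||'''']'
           ELSE 'TO_CHAR(' || c.column_name || ')'
      END;
  END LOOP;
  -- Build a query that emits one INSERT statement per row
  l_stmt := q'[SELECT 'INSERT INTO emp (]' || l_cols
         || q'[) VALUES ('||]' || l_vals || q'[||');' FROM emp]';
  OPEN l_cur FOR l_stmt;
  LOOP
    FETCH l_cur INTO l_line;
    EXIT WHEN l_cur%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_line);
  END LOOP;
  CLOSE l_cur;
END;
/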
Categories: DBA Blogs

escape semicolon?

Tom Kyte - Wed, 2017-06-28 01:06
hi Tom, I have a big script file that when executed gives an error: "ORA-01756: quoted string not properly terminated" I've reduced it to identifying the semicolon as the problem: update table_x set x_sql_statement= 'DECLARE l_stat...
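
For what it's worth, this is classic SQL*Plus behaviour: a semicolon ending a line is taken as the statement terminator even inside a string literal. A hedged workaround sketch (table and literal are illustrative) is to disable that and terminate with "/" instead:

SET SQLTERMINATOR OFF
UPDATE table_x
   SET x_sql_statement = 'DECLARE
  l_stat VARCHAR2(10);
BEGIN
  NULL;
END;'
/
SET SQLTERMINATOR ON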
Categories: DBA Blogs

Updates to Oracle Utilities Testing solution

Anthony Shorten - Tue, 2017-06-27 18:49

We are pleased to announce the availability of new content for the Oracle Functional Testing Advanced Pack for Oracle Utilities. This pack allows customers of supported Oracle Utilities products to adopt automated testing quickly and easily by providing the testing components used by Product Development for use in the Oracle Application Testing Suite.

We have released, as patches available from My Oracle Support, the following content patches:

  • Oracle Utilities Customer Care And Billing v2.6.0.0.0 (available as patch 26075747).
  • Oracle Utilities Customer To Meter v2.6.0.0.0 (available as patch 26075823).
  • Oracle Utilities Meter Data Management/ Oracle Utilities Smart Grid Gateway v2.2.0.1 (available as patch 26075799).

This means the current release of the pack, v5.0.1.0, supports the following products and versions:

  • Oracle Utilities Customer Care And Billing 2.4.0.3, 2.5.0.1, 2.5.0.2 & 2.6.0.0
  • Oracle Utilities Mobile Workforce Management 2.2.0.3, 2.3.0.0 & 2.3.0.1
  • Oracle Real Time Scheduler 2.2.0.3, 2.3.0.0 & 2.3.0.1
  • Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2, 4.3.0.3 & 4.3.0.4
  • Oracle Utilities Meter Data Management 2.1.0.3, 2.2.0.0 & 2.2.0.1
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3, 2.2.0.0 & 2.2.0.1      
  • Oracle Utilities Work And Asset Management 2.1.1 & 2.2.0
  • Oracle Utilities Operational Device Management 2.1.1 & 2.2.0
  • Oracle Utilities Customer To Meter 2.6.0.0

The pack continues to support the ability to build flows for these products, including flows across multiple products and packaged integrations, and supports all channels of access including online, web services and batch. We also support mobile testing for the Oracle Utilities Mobile Workforce Management and Oracle Real Time Scheduler products running on Android and iOS devices.

The pack also includes sanity flows, used by the Oracle Utilities cloud deployments, that verify the installation of the products is complete and operational.

Web Services: REST vs SOAP

Floyd Teter - Tue, 2017-06-27 14:54
I was in the middle of a discussion earlier today about service-based integration for Oracle HCM Cloud.  In that conversation, someone asked me why anyone would ever use SOAP over REST.  My answer was pretty lengthy but, when I was done, someone said I ought to share it on my blog. 

Well, here's the thing:  my thinking on the subject is not original.  I simply communicate fundamentals I learned elsewhere.  And I'm big on giving credit where credit is due.  So when asked about REST vs SOAP, I simply regurgitate the very best summary of REST vs SOAP that I've ever read.  That summary can be found here.  It's a little dated now, having been written in 2010, but it's still accurate.  Feel free to use it yourself.  But do take a moment every now and then to give credit where credit is due.


Summer, autumn and winter: a lot of conferences ahead

Yann Neuhaus - Tue, 2017-06-27 14:50

It is hot in Europe, it is summer, enjoy, but technology moves fast, so you already have the chance to prepare for the next conferences. The IT Tage 2017 will take place from the 11th to the 14th of December this year, and we are happy to be there again.
This event covers a wide range of topics, and we will be there again, covering and talking about several of them.

In addition, you will have the chance to join Jan from EnterpriseDB, speaking about the differences between Oracle and PostgreSQL. If you want to hear more about that, you might want to join the PostgreSQL Conference Europe this year.


Maybe Jan and I will get the chance to talk there as well; we submitted an interesting session, so stay tuned.

Chances are high that you may find us at the UKOUG, too.


… and we’ll be at #DOAG2017 again this year for sure.

Hope to see you soon… Cheers, discussions ahead …


The article Summer, autumn and winter: a lot of conferences ahead appeared first on Blog dbi services.

12c Multitenant Internals: VPD for V$ views

Yann Neuhaus - Tue, 2017-06-27 13:49

I described in an earlier post on AWR views how the dictionary views were using metadata and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$) because they don’t have their definition in the dictionary.

The big difference is that most of V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT, and query V$ views, when no dictionary is opened.

SQL> alter database mount;
Database altered.
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 PDB1                           MOUNTED
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> show pdbs;
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB1                           MOUNTED

V$ views query information from the instance, and this information pertains to one container:

  • CON_ID=0 for the CDB itself
  • CON_ID=1 for CDB$ROOT
  • CON_ID=2 for PDB$SEED
  • CON_ID=3 for the first PDB you have created

When you are in root, the V$ views are queried as normal and show all information – from all containers – with their related CON_ID.

When you are in a PDB, you must see the objects that belong to your PDB, but not those that belong to other PDBs. But this is not sufficient. For example, you may query the version, and the version is related to the CDB itself, with CON_ID=0:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select * from v$version;
 
BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production              0
PL/SQL Release 12.2.0.1.0 - Production                                                    0
CORE 12.2.0.1.0 Production                                                                0
TNS for Linux: Version 12.2.0.1.0 - Production                                            0
NLSRTL Version 12.2.0.1.0 - Production                                                    0

Then, in a PDB you should see your PDB objects and the CON_ID=0 ones. Oracle needs a new mechanism for that. One way would be to switch to root, query the V$, and filter on CON_ID. We don’t need that. Context switch is there to access data from a different container tablespace, because tablespaces are not shared. But V$ views expose data from the instance, and the instance is shared. Any container can see all rows, and we just want to filter some rows.

Here is the execution plan when querying V$VERSION from a PDB:


SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> explain plan for select * from v$version;
Explained.
 
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1078166315

------------------------------------------------------------------------------
| Id  | Operation        | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |           |     1 |    68 |     0   (0)| 00:00:01 |
|*  1 |  FIXED TABLE FULL| X$VERSION |     1 |    68 |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("CON_ID"=0 OR "CON_ID"=3) AND
              "INST_ID"=USERENV('INSTANCE'))

An additional predicate (“CON_ID”=0 OR “CON_ID”=3) is added to the view. How is it done? Oracle has a security feature for that: Virtual Private Database – aka Row Level Security – which adds a where clause dynamically.
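
To illustrate the mechanism (a user-level sketch only; the internal CON_ID policy works even without an open dictionary, so it is clearly not defined this way, and all names below are hypothetical), this is how a VPD policy attaches a predicate function to an object:

-- The policy function returns the predicate string to append to queries
CREATE OR REPLACE FUNCTION app.con_filter (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 AS
BEGIN
  RETURN 'con_id IN (0, SYS_CONTEXT(''USERENV'',''CON_ID''))';
END;
/
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'SOME_VIEW',
    policy_name     => 'CON_FILTER',
    function_schema => 'APP',
    policy_function => 'CON_FILTER',
    statement_types => 'SELECT');
END;
/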

One way to get more information about a virtual private database policy is to make it raise an error at execution time, and I know that a user with only the SELECT privilege cannot EXPLAIN PLAN (see MOS Note 1029064.6).

I connect to a PDB with a low privileged user:
SQL> connect scott/tiger@//localhost/PDB1
Connected.

I explain plan the V$VERSION fixed view.
SQL> explain plan for select * from v$version;
 
Error starting at line : 10 File @ /media/sf_share/122/blogs/multitenant-vpd.sql
In command -
explain plan for select * from v$version
Error report -
ORA-28113: policy predicate has error
28113. 00000 - "policy predicate has error"
*Cause: Policy function generates invalid predicate.
*Action: Review the trace file for detailed error information.

Interesting error which confirms the guess: this is a VPD error and it generates a trace:
*** 2017-06-26T22:45:17.838507+02:00 (PDB1(3))
*** SESSION ID:(141.17865) 2017-06-26T22:45:17.838537+02:00
*** CLIENT ID:() 2017-06-26T22:45:17.838541+02:00
*** SERVICE NAME:(pdb1) 2017-06-26T22:45:17.838545+02:00
*** MODULE NAME:(java@VM104 (TNS V1-V3)) 2017-06-26T22:45:17.838548+02:00
*** ACTION NAME:() 2017-06-26T22:45:17.838552+02:00
*** CLIENT DRIVER:(jdbcoci : 12.2.0.1.0) 2017-06-26T22:45:17.838555+02:00
*** CONTAINER ID:(3) 2017-06-26T22:45:17.838558+02:00
 
-------------------------------------------------------------
Error information for ORA-28113:
Logon user : SCOTT
Table/View : SYS.V_$VERSION
VPD Policy name : CON_ID
Policy function: SYS.CON_ID
RLS view :
SELECT "BANNER","CON_ID" FROM "SYS"."V_$VERSION" "V_$VERSION" WHERE (con_id IN (0, 3) )
ORA-01039: insufficient privileges on underlying objects of the view
-------------------------------------------------------------

There’s no container switch here; all is running in PDB1 with CON_ID=3, and the internal VPD policy has added a where clause to keep only the rows with CON_ID 0 or 3.

Do not search for the VPD policy name ‘CON_ID’ and function ‘CON_ID’ in the dictionary views, because this happens even when the dictionary is not accessible. This is an internal policy used when querying fixed views in multitenant, and it probably uses only part of the VPD code.


The article 12c Multitenant Internals: VPD for V$ views appeared first on Blog dbi services.

Internal Audit Policies

Dominic Brooks - Tue, 2017-06-27 13:23

The Oracle RDBMS market has long been a-shrinking.

A future as an Oracle specialist definitely looks like being past its shelf-life.

Of course, having spent many years as such you will have many transferable skills anyway, transferable to other products and languages within administration, build, performance, troubleshooting or design and development. And if you’re lucky, to a number of these. It’s highly likely that you’ve not been a monogamous technologist anyway.

However, what frustrates me, I mean really, really frustrates me are internal audit policies.

Oracle is filled with plenty of fabulous features.

But when I can’t use a significant number of those then it makes me mad… to the extent that I see this all coming to an end prematurely. Unnecessarily.

I can’t use DBMS_PARALLEL_EXECUTE.

Fabulous feature.

From http://docs.oracle.com/database/122/ARPLS/DBMS_PARALLEL_EXECUTE.htm#ARPLS67331 :

To execute chunks in parallel, the user must have CREATE JOB system privilege.

Well, the application schema isn’t allowed CREATE JOB (or access to DBMS_SCHEDULER) because it’s against the internal audit policy.

(And I also hate it when I’m told that “anyway DBMS_JOB is deprecated” when it’s not, at least not until 12.2, and even then the deprecation is dubious because of transactionality, or rather the continued lack thereof in the replacement.)

If I’m granted CREATE JOB then apparently I’m, or rather my application is, likely to suddenly go bonkers creating jobs left, right and centre when I should be using standard job infrastructure like Autosys or Control+M or whatever you use. (Even though I wouldn’t, because I’m responsible for what I and my application do. And if I do irresponsible things, in general I expect sanctions and consequences.)

I can make a very reasonable and well-reasoned argument as to why I should be using DBMS_PARALLEL_EXECUTE but ultimately I’m arguing with a checklist, even if there is a Jobsworth holding it.

Bottom line is that if I want to do multi-threaded work like this via DBMS_PARALLEL_EXECUTE, I can’t. But if I want to achieve the same thing in one thousand Java threads, that is perfectly fine. This is just not a level playing field!!!
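
For context, here is a minimal sketch of the kind of chunked parallel update DBMS_PARALLEL_EXECUTE enables (table name, task name and parallel level are illustrative); each chunk runs as a scheduler job, which is exactly where the CREATE JOB requirement bites:

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'BULK_UPD');
  -- Split the table into rowid ranges of roughly 10000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'BULK_UPD',
    table_owner => USER,
    table_name  => 'BIG_TABLE',   -- hypothetical table
    by_row      => TRUE,
    chunk_size  => 10000);
  -- Each chunk is processed by a scheduler job, up to 8 at a time
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'BULK_UPD',
    sql_stmt       => 'UPDATE big_table
                          SET flag = ''Y''
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'BULK_UPD');
END;
/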

My reasonable response is ok but can’t the DBAs provide a controlled developer toolkit of wrappers around such policied functionality?

To which the answer is also no. Because then the DBAs would potentially be responsible for any issues arising out of that usage.

The internal audit policy is largely driven by external policy coming from generic regulators and auditors (so I am told regularly although I don’t have direct access to the external policy).

The bank is being subjected to severe regulation and those regulations cannot be put on hold whilst alternative solutions are sought. Regulators and auditors have no concern when it comes to the lower level problems that are caused when they see high level gaps that need to be closed.

Yeah… so the application has to deal with the consequences even when the policy is wrong. Sorry. It’s wrong. Or it’s poorly implemented.

The list of what I can and can’t do (not personally, remember: this is the main application schema, not me or a representation of me) grows all the time.

This DBMS_PARALLEL_EXECUTE and CREATE JOB issue is actually an old wound, re-opened by a refusal to be allowed to use an application context / SYS_CONTEXT.

The application needs CREATE ANY CONTEXT.

No, this is an elevated privilege. Not allowed.

Why is it an elevated privilege? Oh you mean the ANY word? That’s sort of misleading.

Checklist says no.

It’s not the same as CREATE ANY TABLE which allows you to create a table in ANY schema. Contexts are global. Security for actually setting the context value is via a named schema.package.
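
To illustrate the point, a minimal sketch (all names hypothetical): the context namespace is global, but only the package named in its USING clause may set values in it:

CREATE CONTEXT app_ctx USING app_schema.ctx_pkg;

CREATE OR REPLACE PACKAGE app_schema.ctx_pkg AS
  PROCEDURE set_attr(p_name VARCHAR2, p_value VARCHAR2);
END;
/
CREATE OR REPLACE PACKAGE BODY app_schema.ctx_pkg AS
  PROCEDURE set_attr(p_name VARCHAR2, p_value VARCHAR2) IS
  BEGIN
    -- Only this package may set APP_CTX; calls from elsewhere raise ORA-01031
    DBMS_SESSION.SET_CONTEXT('APP_CTX', p_name, p_value);
  END;
END;
/
-- Readable anywhere in the session:
-- SELECT SYS_CONTEXT('APP_CTX', 'some_attr') FROM dual;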

Checklist says no.

So the application can’t use VPD or any other worthy use case of using applications contexts.

Checklist says no.

So, can the DBA create the context for the application? Or provide some wrapper or toolkit to create it.

Checklist says no.

Um… ok. So that’s it then.

It’s just plain bonkers.

These are just two examples of many.

I’ve had many a battle over the “can’t grant a privilege directly to a schema, has to be via a role” conversation. Which then leads to the explanation to the DBA of how PL/SQL works. Oh so tedious.

I’m not 100% sure to what extent this is just local law enforcement or whether this is more widespread.

I’m pretty sure the latter but there are bound to be pockets of sanity and common sense out there somewhere.

Bottom line is that this will accelerate the shrinking Oracle RDBMS market.

People whose “speciality” are Oracle checklists are contributing to the demise and they will hopefully be able to transfer their checklist enforcement skills to other markets.

And shouldn’t product owners within Oracle surely be aware, and be working with regulators and auditors to address these issues, and in turn working with their customers to make sure that what they have discussed and agreed is carried down to the lower level? Also asking too much.

Bonkers. Bonkers. Bonkers. And been getting worse for years.

Working with Oracle is becoming unworkable.

My first article in nearly a year… and not a technical article… just a moan… and a reflection that I’m doing less and less Oracle work.
(But I’m not writing regularly on my other learnings either: http://thereplmc.wordpress.com)

It’s just a shame.


Oracle JET Modular Architecture Example

Andrejus Baranovski - Tue, 2017-06-27 13:01
One of my favourite parts of Oracle JET is its modular code structuring support. This allows you to split application functionality into modules and reusable functions. In this post I will show how you could leverage Oracle JET modular architecture not only by implementing common code functions, but also by managing data maintained in common modules.

Let's jump to the example (download or browse through the sample code in the GitHub repository - JETModularArchitecture). The sample application is based on the JET template. I have created two common modules - dashboardChartHelper and dashboardTableHelper. These modules define the data structure to be displayed in the dashboard and provide an API to manage this data from the consuming module:


Both helper modules are imported into the consuming module - dashboard:


The dashboard module defines variables (chart and table data), which are initialized from the variables assigned with data structures in the helper modules:


There are two wrapper functions calling API functions from the helper modules. The API provides data manipulation logic, which changes the chart data structure. The wrapper functions are invoked from the dashboard UI:


Here is the implementation of API functions - data array changes:


What is great about such structuring is that data and data-change logic can be encapsulated in a common helper module. As soon as we have an observable variable defined in the consuming module (dashboard) which points to the helper method (where the data is changed), changes are automatically visible on the UI.

Here is the data coming from the helper modules, rendered in the dashboard module:


Press the Add Group E button; this will call the helper module API function to update the data array cached in that module. In turn, the observable variable in the dashboard module will be updated and the data displayed on the UI will be refreshed:


Press the Remove Group E button and the chart data will be changed again.

OUD 11.1.2.3 – ODSM Session Timeout

Yann Neuhaus - Tue, 2017-06-27 12:52

The ODSM is a quite powerful tool for managing the Oracle Unified Directory, and quite a lot of the default settings are very reasonable. But there is one that disturbed me from the beginning: the ODSM session timeout.

You might see a message saying that the page will expire unless a response is received within 2 minutes. Doing some complex searches or similar stuff often takes more than 2 minutes.

So, how can we increase the limit to, let’s say, 2 hours? A value which is more reasonable from my point of view.

In the early days, you had to edit a file called web.xml in a very intuitive directory ;-) and change the session-timeout to whatever value you like. In my example, 2 hours.

$ vi /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/servers/AdminServer/tmp/_WL_user/odsm_11.1.1.5.0/d89dm9/war/WEB-INF/web.xml

...
...

  <session-config>
    <session-timeout>7200</session-timeout>
  </session-config>

However, changing the value here has the disadvantage that it might not be permanent.  This directory is a temporary cache directory for the WebLogic server.  So, if you apply a patch that overwrites the …/odsm/odsm.ear file, the changes you made to web.xml in the temporary cache directory are also overwritten. So, it is not a good long term solution.

A far better way is to do it via the WebLogic Console.

Log in to your WebLogic Console with the WebLogic user and navigate to “Deployments”. From there, select the “/odsm” module.


Now move to the “Configuration” tab and change the Session Timeout (in seconds) to 7200 if you want a 2-hour session timeout.


Now save the changes, and click OK once more to save the Deployment Plan.


That’s it.

Conclusion

The default session timeout is way too short from my point of view. But no worries. Changing it via the WebLogic Console is quite easy and it might save you a lot of headaches. :-)


The article OUD 11.1.2.3 – ODSM Session Timeout appeared first on Blog dbi services.

Important Metrics for Food and Beverage Industry

Nilesh Jethwa - Tue, 2017-06-27 09:54

The food and beverage industry can be extremely demanding, not to mention highly competitive. Hence, it is important for managers and staff to remove guesswork from their vocabularies and instead utilize a dashboard app or two that will be able to assist them in running the business effectively, efficiently, and profitably.

Aside from standard excel dashboards, there are other reporting tools that focus on various metrics and KPIs that ensure that restaurants know what their strong and weak points are, where they are doing great service and earnings wise, and where they need to take more action and implement better controls.

Especially now that we live in a data-driven environment where access to real-time information is easier than ever, there is no reason for restaurants not to use excel dashboards to monitor their daily operations. There are distinct areas that need to be focused on and here they are:

  1. Kitchen
  • Food Cost

By comparing weekly food sales with purchases, your dashboard app will tell you whether you’re profiting from daily kitchen operations or not. Even spoiled food and other wastage are computed to determine the overall cost.

In the food and beverage industry, there are certain food cost percentages that are acceptable and others that are simply unjustifiable.
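
As a hedged illustration of the arithmetic (acceptable thresholds vary by concept): food cost percentage is commonly computed as (beginning inventory + purchases - ending inventory) divided by food sales. For example, starting the week with $8,000 of inventory, purchasing $12,000 and ending with $7,000 against $40,000 of food sales gives (8,000 + 12,000 - 7,000) / 40,000 = 32.5%.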

  • Kitchen Labor

It is also important for managers to know whether or not they have too many staff working in the kitchen at given hours of the day.

Read more at http://www.infocaptor.com/dashboard/important-metrics-and-kpis-for-food-beverage

Exploring the Rittman Mead Insights Lab

Rittman Mead Consulting - Tue, 2017-06-27 08:58
What is our Insights Lab?

The Insights Lab offers on-demand access to an experienced data science team, using a mature methodology to deliver one-off analyses and production-ready predictive models.

Our Data Science team includes physicists, mathematicians, industry veterans and data engineers ready to help you take analytics to the next level while providing expert guidance in the process.

Why use it?

Data is cheaper to collect and easier to store than ever before. But collecting the data is not synonymous with getting value from it. Businesses need to do more with the same budget and are starting to look into machine learning to achieve this.

These processes can take off some of the workload, freeing up people's time to work on more demanding tasks. However, many businesses don't know how to get started down this route, or even if they have the data necessary for a predictive model.

R

Our Data science team primarily work using the R programming language. R is an open source language which is supported by a large community.

The functionality of R is extended by many community written packages which implement a wide variety of statistical and graphical techniques, including linear and nonlinear modeling, statistical tests, time-series analysis, classification, clustering as well as packages for data access, cleaning, tidying, analysing and building reports.

All of these packages can be found on the Comprehensive R Archive Network (CRAN), making it easy to get access to new techniques or functionalities without needing to develop them yourself (all the community written packages work together).

R is not only free and extendable; it also works well with other technologies, making it an ideal choice for businesses that want to start looking into advanced analytics. Python is an obvious alternative, and several of our data scientists prefer it. We're happy to use whatever our client's teams are most familiar with.

Experienced programmers will find R syntax easy enough to pick up and will soon be able to implement some form of machine learning. However, for a detailed introduction to R and a closer look at implementing some of the concepts mentioned below we do offer a training course in R.

Our Methodology

Define

Define a Question

Analytics, for all intents and purposes, is a scientific discipline and as such requires a hypothesis to test. That means having a specific question to answer using the data.

Starting this process without a question can lead to biases in the produced result. This is called data dredging - testing huge numbers of hypotheses about a single data set until the desired outcome is found. Many other forms of bias can be introduced accidentally; the most commonly occurring will be outlined in a future blog post.

Once a question is defined, it is also important to understand which aspects of the question you are most interested in. Associated, is the level of uncertainty or error that can be tolerated if the result is to be applied in a business context.

Questions can be grouped into a number of types. Some examples will be outlined in a future blog post.

Define a dataset

The data you expect to be relevant to your question needs to be collated. Maybe supplementary data is needed, or can be added from different databases or web scraping.

This data set then needs to be cleaned and tidied. This involves merging and reshaping the data as well as possibly summarising some variables. For example, removing spaces and non-printing characters from text and converting data types.

The data may be in a raw format, there may be errors in the data collection, or corrupt or missing values that need to be managed. These records can either be removed completely or replaced with reasonable default values, determined by which makes the most sense in this specific situation. If records are removed you need to ensure that no selection biases are being introduced.

All the data should be relevant to the question at hand, anything that isn't can be removed. There may also be external drivers for altering the data, such as privacy issues that require data to be anonymised.

Natural language processing could be implemented for text fields. This takes bodies of text in human readable format such as emails, documents and web page content and processes it into a form that is easier to analyse.

Any changes to the dataset need to be recorded and justified.

Model

Exploratory Analysis

Exploratory data analysis involves summarising the data, investigating the structure, detecting outliers / anomalies as well as identifying patterns and trends. It can be considered as an early part of the model production process or as a preparatory step immediately prior. Exploratory analysis is driven by the data scientist, enabling them to fully understand the data set and make educated decisions; for example the best statistical methods to employ when developing a model.

The relationships between different variables can be understood and correlations found. As the data is explored, different hypotheses could be found that may define future projects.

Visualisations are a fundamental aspect of exploring the relationships in large datasets, allowing the identification of structure in the underlying dataset.

This is also a good time to look at the distribution of your dataset with respect to what you want to predict. This often provides an indication of the types of models or sampling techniques that will work well and lead to accurate predictions.

Variables with very few instances (or those with small variance) may not be beneficial, and in some cases could even be detrimental, increasing computation time and noise. Worse still, if these instances represent an outlier, significant (and unwarranted) value may be placed on these leading to bias and skewed results.

Statistical Modelling/Prediction

The data set is split into two sub groups, "Training" and "Test". The training set is used only in developing or "training" a model, ensuring that the data it is tested on (the test set) is unseen. This means the model is tested in a more realistic context and will help to determine whether the model has overfitted to the training set. i.e. is fitting random noise in addition to any meaningful features.
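
As one hedged illustration of the idea: if the raw data lives in a database, the split can even be prepared in SQL before extraction (table and key column are hypothetical; ORA_HASH yields a deterministic pseudo-random bucket from 0 to 99):

CREATE TABLE training_set AS
  SELECT * FROM model_data
   WHERE ORA_HASH(id, 99) < 80;   -- roughly 80% of rows

CREATE TABLE test_set AS
  SELECT * FROM model_data
   WHERE ORA_HASH(id, 99) >= 80;  -- the remaining ~20%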

Taking what was learned from the exploratory analysis phase, an initial model can be developed based on an appropriate application of statistical methods and modeling tools. There are many different types of model that can be applied to the data, the best tends to depend on the complexity of your data and the any relationships that were found in the exploratory analysis phase. During training, the models are evaluated in accordance with an appropriate metric, the improvement of which is the "goal" of the development process. The predictions produced from the trained models when run on the test set will determine the accuracy of the model (i.e. how closely its predictions align with the unseen real data).

A particular type of modelling method, "machine learning", can streamline and improve upon this somewhat laborious process by defining models in such a way that they are able to self-optimise, "learning" from past iterations to develop a superior version.

Broadly, there are two types: supervised and unsupervised. A supervised machine learning model is given some direction from the data scientist as to the types of methods that it should use and what it is expecting. Unsupervised machine learning, on the other hand, as the name suggests, involves giving the model less information to start with and letting it decide for itself what to value and how to approach the problem. This can help to remove bias and reduce the number of assumptions made, but will be more computationally intensive, as the model has a broader scope to investigate. Usually supervised machine learning is employed where the problem and data set are reasonably well understood, and unsupervised machine learning where this is not the case.

Complex predictive modelling algorithms perform feature importance and selection internally while constructing models. These models can also report on the variable importance determined during the model preparation process.

Peer Review

This is an important part of any scientific process, and effectively utilises our broad expertise in modelling at Rittman Mead. This enables us to be sure no biases were introduced that could lead to a misleading prediction, and that the accuracy of the models is what could be expected if the model were run on new unseen data. Additional expert views can also lead to alternative potential avenues of investigation being identified as part of an expanded or subsequent study.

Deploy

Report

For a scientific investigation to be credible the results must be reproducible. The reports we produce are written in R markdown and contain all the code required to reproduce the results presented. This also means it can be re-run with new data as long as it is of the same format. A clear and concise description of the investigation from start to finish will be provided to ensure that justification and context is given for all decisions and actions.

Delivery

If the result is of the required accuracy we will deploy a model API enabling customers to start utilising it immediately.
There is always a risk, however, that the data does not contain the required variables to create predictions with sufficient confidence for use. In these cases, and after the exploratory analysis phase, there may be other questions that would be beneficial to investigate. This is also a useful result, enabling us to suggest additional data to collect that may allow a more accurate result should the process be repeated later.

Support

Following delivery we are able to provide a number of support services to ensure that maximum value is extracted from the model on an on-going basis. These include:
- Monitoring performance and accuracy against the observed, actual values over a period of time. Should discrepancies between these values arise, they can be used to identify the need for alterations to the model.
- Exploring specific exceptions to the model. There may be cases in which the model consistently performs poorly. Instances like these may not have existed in the training set and the model could be re-trained accordingly. If they were in the training set these could be weighted differently to ensure a better accuracy, or could be represented by a separate model.
- Updates to the model to reflect discrepancies identified through monitoring, changes of circumstance, or the availability of new data.
- Many problems are time dependent and so model performance is expected to degrade, requiring retraining on more up to date data.

Summary

In conclusion our Insights lab has a clearly defined and proven process for data science projects that can be adapted to fit a range of problems.

Contact us to learn how Insights Lab can help your organization get the most from its data, and schedule your consultation today.
Contact us at info@rittmanmead.com

Categories: BI & Warehousing

Notes on datapatch

Michael Dinh - Tue, 2017-06-27 08:47

Mike Dietrich is the godfather of all Oracle upgrades.

Here are just a few references you should read:

DBUA 12c and “datapatch.pl” – things to know

DBUA misses the post-upgrade datapatch execution in Oracle 12.1.0.2. The solution is to apply the SQL changes manually after DBUA has completed the database upgrade to Oracle Database 12c:

cd $ORACLE_HOME/OPatch
./datapatch -verbose

Does DBCA execute “datapatch” in Oracle 12.2?

My notes: when in doubt, check it out.

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

SQL Patching tool version 12.1.0.2.0 on Thu Jan 19 10:35:27 2017
Copyright (c) 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_93223_2017_01_19_10_35_27/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 24315824 (Database PSU 12.1.0.2.161018, Oracle JavaVM Component (OCT2016)):
  Installed in the binary registry only
Bundle series DBBP:
  ID 161018 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    24315824 (Database PSU 12.1.0.2.161018, Oracle JavaVM Component (OCT2016))
    24340679 (DATABASE BUNDLE PATCH: 12.1.0.2.161018 (24340679))

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 24315824 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/24315824/20676808/24315824_apply_DBFS_2017Jan19_10_35_43.log (no errors)
Patch 24340679 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/24340679/20646358/24340679_apply_DBFS_2017Jan19_10_35_47.log (no errors)
SQL Patching tool complete on Thu Jan 19 10:37:16 2017

SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

-- Logfile is provided. How convenient is that?
Patch Id : 24315824
        Action : APPLY
        Action Time : 19-JAN-2017 10:37:14
        Description : Database PSU 12.1.0.2.161018, Oracle JavaVM Component (OCT2016)
        Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/24315824/20676808/24315824_apply_DBFS_2017Jan19_10_35_43.log
        Status : SUCCESS

Patch Id : 24340679
        Action : APPLY
        Action Time : 19-JAN-2017 10:37:16
        Description : DATABASE BUNDLE PATCH: 12.1.0.2.161018 (24340679)
        Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/24340679/20646358/24340679_apply_DBFS_2017Jan19_10_35_47.log
        Status : SUCCESS

PL/SQL procedure successfully completed.

SQL>



Italian Consumer Electronics Retailer Unieuro SpA Drives Efficiency Through Supply Chain

Oracle Press Releases - Tue, 2017-06-27 08:01
Press Release
Italian Consumer Electronics Retailer Unieuro SpA Drives Efficiency Through Supply Chain Anticipate Demand and Simplify Operations with a Unified View of Inventory

Redwood Shores, Calif.—Jun 27, 2017

Today, Oracle announced that consumer electronics retailer Unieuro SpA purchased Oracle Retail Demand Forecasting and Oracle Retail Allocation & Replenishment to optimize stock levels, increase profitability and improve cash flow.

Unieuro aims to improve its centralized supply chain organization in order to support effective omnichannel replenishment processes across DOS, affiliate and online channels. With this initiative, Unieuro is in a stronger position to optimize the cross-channel opportunity to minimize stock, reduce obsolescence and streamline organizational costs. The supply chain organization will have much-needed visibility into demand coming from all touch points, and will ultimately be able to orchestrate supply, reduce lost sales and increase customer satisfaction.

“Oracle Retail provides a distinctive replenishment approach for short-lifecycle products, which includes attribute-based forecasting,” said Luigi Fusco, COO, Unieuro SpA. “We believe the optimized orchestration of the stock across channels will help improve our fulfillment rate to improve customer satisfaction and reduce obsolescence to eliminate costs.”

“After conducting a proof of concept with our data, Oracle Retail gained our confidence to move forward with the project. We validated our choice with customer references in the consumer electronics and hardlines businesses,” said Luca Girotti, IT Director, Unieuro. “We are thankful to the Sopra Steria team who helped us evaluate the market offerings and ultimately decide to move forward with the Oracle Retail solution.”

“Retailers like Unieuro can proactively position inventory in the right place in the right quantity by using analytic algorithms to drive accuracy and visibility. The visibility of this new supply chain organization will help Unieuro inspire consumer loyalty with a better in-stock position wherever they are inspired to shop,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

NetSuite Announces New Channel Partners Seeking to Drive Growth with Cloud ERP

Oracle Press Releases - Tue, 2017-06-27 08:01
Press Release
NetSuite Announces New Channel Partners Seeking to Drive Growth with Cloud ERP Cherry Bekaert, Terillium, Revolution Group and Conexus SG Join NetSuite Solution Provider Program

San Mateo, Calif.—Jun 27, 2017

Oracle NetSuite Global Business Unit, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced the addition of new partners to the NetSuite Solution Provider Program including Cherry Bekaert, Terillium, Revolution Group and Conexus SG. All four partners have launched new cloud ERP practices with NetSuite to help clients capitalize on the transformative power of cloud ERP. These new partners are responding to high demand for cloud ERP that can deliver greater agility and scalability for growth without the costly overhead and limitations of on-premise systems. At the same time, the partners are taking advantage of the opportunity to diversify and grow their business while benefiting with the high margins and recurring revenue offered through the NetSuite Solution Provider Program.

Launched in 2002, the NetSuite Solution Provider Program is the industry’s leading cloud channel partner program. Since its inception, NetSuite has been a leader in partner success, breaking new ground in building and executing on the leading model to make the channel successful with NetSuite. A top choice for partners who are building new cloud ERP practices or for those expanding their existing practice to meet the demand for cloud ERP, NetSuite has enabled partners to transform their business model to fully capitalize on the revenue growth opportunity of the cloud. The NetSuite Solution Provider Program delivers unprecedented benefits that include highly attractive margins and range from business planning, sales, marketing and professional services enablement, to training and education. Click here for more information about the NetSuite Solution Provider Program.

“These new partners are teaming with NetSuite as the global migration to the cloud continues to accelerate,” said Craig West, Oracle NetSuite Global Business Unit Vice President of Channel Sales. “We look forward to helping customers modernize and transform how business is done through these collaborative and mutually beneficial relationships.”

Cherry Bekaert Launches ERP Consulting Practice with NetSuite Cloud ERP

Cherry Bekaert (www.cherrybekaert.com), one of the 25 largest CPA firms in the U.S. with 1,000+ associates, has launched a new ERP consulting practice to offer NetSuite to businesses in software and technology, manufacturing, distribution, private equity, transportation and logistics, nonprofit, financial services and other industries. The 70-year-old firm, headquartered in Richmond, VA with strong concentration from Washington, D.C. to Miami, will provide implementation, support and customization services to existing and new midmarket customers amid growing demand for NetSuite’s industry-leading solution for ERP, CRM and ecommerce. Cherry Bekaert specializes in helping clients take advantage of opportunities and address new requirements that come with expansive growth. By joining forces with NetSuite, Cherry Bekaert’s new Cloud Solutions practice will equip clients for improved efficiency and productivity in financial operations; compliance with revenue recognition, tax and reporting requirements; and real-time visibility into consolidated financial results.

“The demand for cloud solutions is surging among our middle-market client base, particularly in technology, health and life sciences, and industrial, the sectors that make up our practice,” said Paul Doucet, Managing Director of Cherry Bekaert’s Cloud Solutions practice. “NetSuite’s integrated applications provide a dynamic cloud solution with deep expertise in our core industry segments. We look forward to an equitable and rewarding relationship.”   

Terillium Expands Oracle Platinum Relationship with NetSuite

Terillium (www.terillium.com), a Platinum level member in the Oracle PartnerNetwork since 2006 and 12-time recipient of Oracle Excellence Awards, is expanding its Oracle technology offerings as a new NetSuite Solution Provider serving clients in the small and midmarket manufacturing, distribution and services sectors. Based in Cincinnati, OH, with more than 170 associates, Terillium’s new NetSuite practice will complement its traditional focus on Oracle JD Edwards and Oracle cloud offerings for supply chain, procurement, financials, planning and budgeting, sales and human capital management. With Oracle’s acquisition of NetSuite complete last November, allying with NetSuite was a “natural fit” that enables Terillium to offer organizations NetSuite’s agile, scalable and unified platform, according to Steve May, Vice President at Terillium. Terillium leverages deep expertise gained in delivering solutions to more than 500 businesses to meet rising demand for cloud ERP, CRM and B2B ecommerce with a complete NetSuite services offering for implementation, customization, integration and ongoing optimization.

“We teamed with NetSuite because it’s the #1 cloud ERP on the market with a complete and proven platform used by over 40,000 organizations that can be deployed very quickly with all the benefits of the cloud,” May said. “Offering NetSuite cloud ERP to the businesses we work with is aligned with our mission to provide the best solutions and services possible.”

Revolution Group Focuses on Manufacturing, Distribution with NetSuite

Revolution Group (www.revolutiongroup.com), founded in 1995, is expanding their cloud ERP practice with NetSuite to meet growing demand among its manufacturing, distribution and professional services clients and prospects for cloud ERP. Based in Ohio with about 80 associates, Revolution Group will provide NetSuite cloud ERP distribution, implementation, customization and development that combines NetSuite’s best-in-class platform with Revolution Group’s decades of expertise in the manufacturing, distribution and professional services industries. The relationship with NetSuite expands Revolution Group’s cloud technology portfolio beyond Salesforce and the Plex Manufacturing Cloud. Revolution Group will also leverage its development team and the SuiteCloud development platform for “last mile” customizations and optimizations that help clients continuously innovate and grow with NetSuite.

“We see NetSuite as a very scalable, flexible and growth-oriented platform to meet our customers’ business requirements,” said Rick Snide, CEO of Revolution Group. “Businesses are seeking cloud ERP solutions that correspond with their business needs and budget. Our teaming with NetSuite allows us to offer our clients and prospects options that complement our current services and help them achieve their goals.”

Conexus SG Expands Beyond On-Premise Microsoft ERP with NetSuite

Conexus SG (www.conexussg.com), based in Dallas-Fort Worth, TX, has joined the NetSuite Solution Provider program to offer their first cloud ERP offering and expand beyond its traditional focus of on-premise Microsoft Dynamics GP, SL, CRM and related solutions. The move addresses increased demand for NetSuite cloud solutions, among Conexus SG’s clients and prospects in the oil and gas, manufacturing, distribution, healthcare, restaurant and services industries in the Texas and Oklahoma region. Founded in 2008 and a Microsoft Gold ERP Partner that has served hundreds of customers, Conexus SG combines its deep experience in industry-specific ERP implementation with NetSuite’s proven platform to offer clients a true cloud ERP solution geared for efficiency and growth. Besides NetSuite ERP, CRM and ecommerce implementation and optimization, the 20-person firm will also offer the NetSuite OpenAir professional services automation solution.

“Customers are asking for cloud and customers are asking for NetSuite,” said Eric Holleman, Conexus SG Managing Director. “The future is cloud, and NetSuite is the clear leader in cloud ERP. NetSuite is an ideal fit for fast-growing companies because it doesn’t require infrastructure, it’s fast to implement, and it’s scalable for growth. We’re delighted to team up with NetSuite to offer clients greater choice and flexibility.”

Contact Info
Christine Allen
Public Relations, Oracle NetSuite Global Business Unit
603-743-4534
PR@netsuite.com
About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Christine Allen

  • 603-743-4534

Webcast: Managing and Monitoring Oracle E-Business Suite On-Premises and Cloud

Steven Chan - Tue, 2017-06-27 02:00

Oracle University has a large number of free recorded webcasts that are useful for E-Business Suite system administrators.  Here's a new one on the latest Application Management Pack updates:

Vasu Rao, Senior Principal Product Strategy Manager, shares the latest tools including the Application Management Suite for monitoring and managing an Oracle E-Business Suite 12.2 environment. Among the topics covered for Oracle E-Business Suite on-premises and cloud environments are online patching management; monitoring the technology stack; configuration comparison between run and patch editions; provisioning an instance; technology stack patch recommendations; customization management; and automated cloning. This material was presented at Collaborate 17.

Related Articles

 

Categories: APPS Blogs

Video: APIs and Microservices: Making the Right Choices for Your Mobile Apps

OTN TechBlog - Mon, 2017-06-26 15:36

“In the past few years the REST APIs that mobile apps use to access server resources have gotten a lot of prominence," observes Parvez Syed Mohamed, Director of Product Management for Oracle Mobile Cloud Service. "But it’s still very difficult to figure out the right methods to browse and consume third-party APIs and to be able to get the right set of data to shrink down the payload so it works for mobile apps. It’s all about making the right choices for your mobile app. It’s also about making it more personalized, and you also have to worry about the security. All of these are topics that every single architect and mobile developer needs to worry about.” Parvez shares insight into addressing those concerns in this interview recorded at Oracle Code Toronto, April 18, 2017.

 

Additional Resources

 

 

Log Buffer #515: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2017-06-26 09:15

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

You may wish to clone an Oracle Home (for example, you have all your databases on a single Oracle Home, but you want to separate Development from Test).

Removing Outliers using stddev()

Installing Scala and Apache Spark on a Mac

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part V) – Pig

More on Optimistic Locking with ORA_ROWSCN

SQL Server:

How to Decipher sysschedules

SQL Azure Reporting Automation

Database Code Analysis

Visual Studio 2017 and Swagger: Building and Documenting Web APIs

Techniques to Cleanse Bad Data in SQL Server

MySQL:

ClickHouse in a General Analytical Workload (Based on a Star Schema Benchmark)

Debian 9 and MySQL: Watch Out!

How to Install LAMP on Ubuntu 16.04

Linux perf and the CPU regression in MySQL 5.7

Secure Binlog Server: Encrypted binary Logs and SSL Communication

Categories: DBA Blogs


Pages

Subscribe to Oracle FAQ aggregator