Feed aggregator

Can I have only two editions using Oracle EBR for my application and still achieve zero downtime?

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, We are planning to implement Oracle EBR in our DB and plan to have only two editions created ED1 and ED2 apart from ORA$BASE. We would also ensure that all the 5000 objects(editionable) we have in ORA$BASE would be actualized into ED1 and ...
Categories: DBA Blogs

How to update tables in loop having 5L records in each table effectively in less time

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, i have a scenario where i need to update so many tables(around 80) at once and each table having a minimum of 5 Lakhs records and i used below approach. it is taking more time(don't know exactly because it is still running from more than ...
Categories: DBA Blogs

ORA-00838: Specified value of MEMORY_TARGET is too small

Tom Kyte - Fri, 2017-04-07 09:06
I have an Oracle 12c installation. The following commands were executed as the SYS user. ALTER SYSTEM SET MEMORY_MAX_TARGET=20G SCOPE=SPFILE; ALTER SYSTEM SET MEMORY_TARGET = 20G SCOPE = SPFILE; ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 15G SCOPE = SPFIL...
Categories: DBA Blogs

First Date of Week(Monday Date)

Tom Kyte - Fri, 2017-04-07 09:06
Using PO_DT field in Oracle, Trying to get the First Date of week based on value of PO_DT.
Categories: DBA Blogs

continuous scrolling output for table data

Tom Kyte - Fri, 2017-04-07 09:06
hi tom, I am a big fan of your work. We are in need of a procedure, which gives output of table data in ... tail -f format ... meaning the output is continuous, never ending procedure that shows records as it happens in table. I am looking to ha...
Categories: DBA Blogs

Default column value vs commit write batch nowait. When actual value is assigned?

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, <b>In this section I'll explain how I come up with the question..</b> Under normal operation, our application do next actions: 1. call util.do_some_action, but not wait for response 2. do some network calls to remote system 3. receiv...
Categories: DBA Blogs

swingbench datagenerator connect string

Tom Kyte - Fri, 2017-04-07 09:06
I need to use [swingbench][1] to quantify performance of a given host. However since I am pretty new to Databases as such cannot get the [datagenerator][2] program to connect to an Oracle DB instance that has been "opened" on the host. After insta...
Categories: DBA Blogs

How to trace plsql executed by a package or a procedure

Tom Kyte - Fri, 2017-04-07 09:06
Hi, In an application that we use (which uses Oracle as the database to store data), when we update and save data, it internally calls a plsql package or a procedure to invoke an UPDATE statement. We have traced the UPDATE statement using db le...
Categories: DBA Blogs

Using package global variables

Tom Kyte - Fri, 2017-04-07 09:06
We are trying to do some conversion of large tables (100 mil rows) in 11g. This is a vendor based schema so from existing structure to new structure with some calculations involved. Our developers want to use package global variables. I heard it is r...
Categories: DBA Blogs

Compare two tables having different data types and different column name

Tom Kyte - Fri, 2017-04-07 09:06
HI Tom, I have two table in migration , source table and destination table.. Column Names in source table are different w.r.to destination table and data type is also different but Fields are mapped from source table to destination table. I ne...
Categories: DBA Blogs

Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2

Amis Blog - Fri, 2017-04-07 07:41

As an integration/backend developer, when starting a project using Mobile Cloud Service, it is important to have some understanding of what this MBaaS (Mobile Backend as a Service) has to offer in terms of security features, in order to be able to configure and test MCS. In this blog I will give examples of how to configure and use the basic authentication and OAuth2 features that are provided to secure APIs. You can read the Oracle documentation (which is quite good for MCS!) on this topic here.

Introduction

Oracle Mobile Cloud Service offers platform APIs that provide specific features. You can create custom APIs by writing JavaScript code that runs on Node.js. Connectors are used to access backend systems. This blog focuses on authentication options for incoming requests.

The connectors are not directly accessible from the outside. MCS can secure custom and platform APIs; this is handled by the Mobile Backend and the custom API configuration.

Getting started

The first thing to do when you want to expose an API is assign the API to a Mobile Backend. You can do this in the Mobile Backend configuration screen, APIs tab.

You can allow anonymous access, but generally you want to know who is accessing your API, not least because MCS has a license option in which you pay for a specific number of API calls, so you want to know whom you are paying for. In order to require authentication on a per-user basis, you first have to create a user and assign it to a group. You can also do this from the Mobile Backend configuration: go to the Mobile Users Management tab to create users and groups.

After you have done this, you can assign the role to the API. You can also do this on a per endpoint basis which makes this authentication scheme very flexible.

Now we have configured our API to allow access to users who are in a specific role. We can now call our API using basic authentication or OAuth2.

Basic Authentication

In order to test our API, Postman is a suitable option. Postman is a freely available Chrome plugin (but also available standalone for several OSes) which provides many options for testing HTTP calls.

Basic authentication is a rather weak authentication mechanism. You Base64 encode the string username:password and send it as an HTTP header to the API you are calling. If someone intercepts the message, he or she can easily Base64 decode the username:password string to obtain the credentials. That is why I have blanked out that part of the Authorization field in several screenshots.

In addition to the basic authentication header, you also need to supply the Oracle-Mobile-Backend-Id HTTP header, which can be obtained from the main page of the Mobile Backend configuration.

Obtain Oracle-Mobile-Backend-Id

Call your API with Basic authentication
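
If you prefer scripting the call instead of using Postman, below is a minimal sketch in R using the httr package. The host name, API path, backend id and credentials are placeholders for your own MCS environment, not actual MCS values.

#minimal sketch with the httr package; all values below are placeholders
library(httr)

response <- GET(
  "https://your-mcs-host/mobile/custom/yourapi/yourendpoint",
  authenticate("mcs_user", "mcs_password", type = "basic"),
  add_headers("Oracle-Mobile-Backend-Id" = "your-backend-id")
)
status_code(response)
content(response)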

This mechanism is rather straightforward. The Authorization header needs to be supplied with every request, though.

OAuth2

OAuth2 works a bit differently from basic authentication: first a token is obtained from a token service, and that token is then used in subsequent requests. When using the token, no additional authentication is required.

You can obtain the token from the Mobile Backend settings page as shown above. When you make a request to this endpoint, you need to provide some information:

You can use basic authentication with the Client ID:Client secret to access the token endpoint. These can be obtained from the screen shown below.

You also need to supply the username and password of the user for whom the token is generated. After you have made a request to the token service, you obtain a token.

This token can be used in subsequent requests to your API. Instead of sending your username/password every time, you add an Authorization HTTP header of the form Bearer <token>. This is thus more secure.
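
As an illustration, here is a minimal sketch of this flow in R with the httr package. The token endpoint URL, client id, client secret and user credentials are placeholders; use the values shown on your own Mobile Backend settings page.

library(httr)

#request a token: basic authentication with Client ID:Client secret plus the
#username/password of the user the token is generated for (all placeholders)
token_response <- POST(
  "https://your-mcs-host/your-oauth-token-endpoint",
  authenticate("your-client-id", "your-client-secret", type = "basic"),
  body = list(
    grant_type = "password",
    username   = "mcs_user",
    password   = "mcs_password"
  ),
  encode = "form"
)
access_token <- content(token_response)$access_token

#use the token as a Bearer credential in the Authorization header of
#subsequent API calls; no username/password needed anymore
api_response <- GET(
  "https://your-mcs-host/mobile/custom/yourapi/yourendpoint",
  add_headers(Authorization = paste("Bearer", access_token))
)
status_code(api_response)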

Finally

I’ve not talked about security options for outgoing requests provided by the supplied connectors.

These have connector-specific options and allow identity propagation. For example, the REST connector (described in the Oracle documentation here) supports SAML tokens, CSF keys, basic authentication, OAuth2 and JWT. The SOAP connector (see here) can use WS-Security in several flavours, SAML tokens, CSF keys, basic authentication, etc. (quite a list).

The post Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2 appeared first on AMIS Oracle and Java Blog.

Oracle Looking to Buy Accenture? Stranger Things Have Happened.

Abhinav Agarwal - Fri, 2017-04-07 07:30
Image credit: pixels.com
The Register reported that Oracle may be exploring the "feasibility of buying multi-billion dollar consultancy Accenture."

To summarize the numbers involved here, Oracle had FY16 revenues of $37 billion, net income of $8.9 billion, and a market cap of $180 billion.

On the other hand, Accenture had FY16 revenues of US$34.8 billion, net income of $4.1 billion, and a market cap of $77 billion.

Some questions that come to mind:
  1. Why? Oracle buying NetSuite in 2016 made sense. Oracle buying Salesforce would make even more sense. Oracle buying a management consulting and professional services company, and that too one with more than a quarter million employees, on the face of it, makes little sense. Would it help Oracle leapfrog Amazon's AWS cloud business? Would it help Oracle go after a new market segment? The answers are not clear, at all.
  2. Who would be in charge of this combined entity? Both have similar revenues, though Accenture has a market cap that is less than half Oracle's and a workforce that is roughly three times Oracle's. The cultural meshing itself would prove to be a challenge. Mark Hurd, one of two CEOs of Oracle (the other CEO is Safra Catz, a former investment banker), has the experience running a large, heterogeneous organization. Prior to his stint at Oracle, he was credited with making the HP and Compaq merger work. At Oracle, however, he has not run software product development, which has been run by Thomas Kurian, and who reports to Larry Ellison, and not Hurd. A merger between Oracle and Accenture would place an even greater emphasis on synergies between Oracle's software division and Accenture's consulting business.
  3. Oracle would need to spend close to $100 billion to buy Accenture, if it does. How would it finance it, even assuming it spends all its $68 billion in cash to do so? Keep in mind that its largest acquisition was in the range of $10 billion. The financial engineering would be staggering. It helps that it has a former investment banker as one of two CEOs.
  4. Will Oracle make Accenture focus on the Oracle red stack of software products and applications - both on-premise and in the cloud? If yes, it would need a much smaller-sized workforce than Accenture has. That in turn would diminish the value of Accenture to Oracle, and make the likely sticker price of $100 billion look even costlier.
  5. Is Oracle looking to become the IBM of the twenty-first century? It's certainly been a public ambition of Larry Ellison. In 2009, he said he wanted to pattern Oracle after Thomas Watson Jr's IBM, "combining both hardware and software systems." If Oracle keeps Accenture as a business unit free to pursue non-Oracle deals, does it mean Oracle is keen on morphing into a modern-day avatar of IBM and IBM Global Services, offering hardware, software, and professional services - all under one red roof?
  6. Is Oracle serious about such a merger? An acquisition of this size seems more conjecture than real possibility, at least as of now. One is reminded of the time in 2003 when Microsoft explored the possibility of buying SAP. Those discussions went nowhere, and the idea was dropped. Combining two behemoths is no easy task, even for a company like Oracle, which has stitched together almost 50 acquisitions in just the last five years.
  7. If such an acquisition did go through, there would likely be few anti-trust concerns. That's a big "if".
  8. Stranger things have happened in the software industry, like HP buying Autonomy.
  9. I hope the Register piece was not an example of an early April Fool's joke.
(HT Sangram Aglave whose LinkedIn post alerted me to this article)

I first published this in LinkedIn Pulse on April 1, 2017.

© 2017, Abhinav Agarwal.

Oracle E-Business Suite 12.2 Web Services Security: Authentication and Authorization

This is the seventh posting in a blog series summarizing the new Oracle E-Business Suite 12.2 Mobile and web services functionality and recommendations for securing them.

Once traffic is accepted and passed by the URL Firewall, WebLogic initiates the standard Oracle E-Business Suite authentication and authorization procedures. Web services are authenticated and authorized no differently than end-users are.

Authorization rules for web services are relatively easy to configure because all web services are defined as functions. The Oracle E-Business Suite function security scheme and rules engine apply the same to GUI forms as to web services. In other words, the table APPLSYS.FND_FORM_FUNCTIONS defines all the forms that users use as well as all deployed web services. Menus are then built referencing these functions, and Oracle E-Business Suite user accounts (APPLSYS.FND_USER) are given responsibilities containing those menus of functions. These user accounts can belong to staff members or can be generic accounts (e.g. to support specific web services). Ensuring that only appropriate users and responsibilities can call and use specific web services is the same critical step as ensuring that only appropriate users can use specific forms.
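
As a purely illustrative sketch (in R with DBI/ROracle), the query below lists which users can reach a given web service function through their responsibilities. It only follows top-level menu entries, ignores sub-menus and menu exclusions, and the table and column names should be verified against your E-Business Suite release before relying on it.

#illustrative only: verify table/column names for your EBS release
library(DBI)

con <- dbConnect(ROracle::Oracle(), username = "apps",
                 password = "your-apps-password", dbname = "your-tns-alias")

sql <- "
  SELECT u.user_name, r.responsibility_name, f.function_name
  FROM   fnd_user u
  JOIN   fnd_user_resp_groups  g ON g.user_id = u.user_id
  JOIN   fnd_responsibility_vl r ON r.responsibility_id = g.responsibility_id
  JOIN   fnd_menu_entries      e ON e.menu_id = r.menu_id
  JOIN   fnd_form_functions    f ON f.function_id = e.function_id
  WHERE  f.function_name = 'YOUR_WEB_SERVICE_FUNCTION'"

dbGetQuery(con, sql)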

There are two authentication options for web services: local FND_USER passwords and tokens (SAML send vouchers or E-Business Suite session IDs). Whichever is used, ensure that the accounts are not inappropriately over-privileged and that the passwords and tokens are not widely known and/or shared.

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Web Services, DMZ/External, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Machine learning: Getting started with random forests in R

Amis Blog - Fri, 2017-04-07 02:08

According to Gartner, machine learning is on top of the hype cycle, at the peak of inflated expectations. There is a lot of misunderstanding about what machine learning actually is and what can be done with it.

Machine learning is not as abstract as one might think. If you want to get value out of known data and make predictions for unknown data, the most important challenge is asking the right questions and, of course, knowing what you are doing, especially if you want to optimize your prediction accuracy.

In this blog I'm exploring one example of machine learning: the random forest algorithm. I'll provide an example of how you can use this algorithm to make predictions. In order to implement a random forest, I'm using R with the randomForest library and the iris data set which is provided by the R installation.

The Random Forest

A popular method of machine learning is decision tree learning. Decision tree learning comes closest to serving as an off-the-shelf procedure for data mining (see here). You do not need to know much about your data in order to be able to apply this method. The random forest algorithm is an example of a decision tree learning algorithm.

Random forest in (very) short

How it works exactly takes some time to figure out. If you want to know the details, I recommend watching some YouTube recordings of lectures on the topic. Some of the most important features of this method:

  • A random forest is a method to do classifications based on features. This implies you need to have features and classifications.
  • A random forest generates a set of classification trees (an ensemble) based on splitting a subset of features at locations which maximize information gain. This method is thus very suitable for distributed parallel computation.
  • Information gain can be determined by how accurately the splitting point separates the classifications. Data is split on a feature at a specific value, and the classifications on the left and right of the splitting point are checked. If, for example, the splitting point separates all data of a first classification from all data of a second classification, the confidence is 100%: maximum information gain (see the short impurity sketch after this list).
  • A splitting point is a branching in the decision tree.
  • Splitting points are based on values of features (this is fast)
  • A random forest uses randomness to determine which features to look at and randomness in the data used to construct each tree. Randomness helps reduce compute time.
  • Each tree gets to see a different dataset. This is called bagging.
  • Tree classification confidences are summed and averaged (products of the confidences can also be taken). Individual trees have a high variance because they have only seen a small subset of the data. Averaging helps create a better result.
  • With correlated features, strong features can end up with low scores and the method can be biased towards variables with many categories.
  • A random forest does not perform well with unbalanced datasets, i.e. samples where there are far more occurrences of some classes than others.
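
Purely as an illustration (this is not code from the randomForest package), the following sketch evaluates a candidate splitting point with Gini impurity: the lower the weighted impurity after the split, the higher the information gain.

#Gini impurity of a set of class labels: 0 means all labels are the same
gini <- function(labels) {
  p <- table(labels) / length(labels)
  1 - sum(p^2)
}

#weighted impurity of the two sides of a split; 0 means a perfect split
split_quality <- function(feature, labels, split_point) {
  left  <- labels[feature <  split_point]
  right <- labels[feature >= split_point]
  (length(left) * gini(left) + length(right) * gini(right)) / length(labels)
}

#example on iris: splitting Petal.Length at 2.5 cleanly separates setosa
split_quality(iris$Petal.Length, iris$Species, 2.5)
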
Use case for a random forest

Use cases for a random forest include, for example, text classification such as spam detection: whether certain words are present in a text can be used as a feature, and the classification would be spam/not spam, or even more specific classes such as news, personal, etc. Another interesting use case lies in genetics: determining whether the expression of certain genes is relevant for a specific disease. This way you can take someone's DNA and determine with a certain confidence whether that person will contract the disease. Of course you can also take other features into account, such as income, education level, smoking, age, etc.
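
To make the spam example a bit more concrete, here is a toy sketch with made-up word-presence features. The data is entirely synthetic and only meant to show the shape of such a classification problem, not a realistic spam filter.

#toy spam example on synthetic data: 1 if a word occurs in the message, 0 otherwise
library(randomForest)

set.seed(42)
emails <- data.frame(
  has_free    = rbinom(200, 1, 0.5),
  has_winner  = rbinom(200, 1, 0.3),
  has_meeting = rbinom(200, 1, 0.4)
)
#synthetic labels loosely tied to the "spammy" words
emails$class <- factor(ifelse(emails$has_free + emails$has_winner >= 1, "spam", "ham"))

model <- randomForest(class ~ ., data = emails, ntree = 200)
print(model)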

Why R

I decided to start with R. Why? Mainly because it is easy. There are many libraries available and there is a lot of experience with it worldwide; a lot of information can be found online. R, however, also has some drawbacks.

Some benefits

  • It is free and easy to get started. Hard to master though.
  • A lot of libraries are available. R package management works well.
  • R has a lot of users, so there is a lot of information available online.
  • R is powerful in that, if you know what you are doing, you need little code to do it.

Some challenges

  • R loads datasets in memory
  • R is not the best at doing distributed computing but can do so. See for example here
  • The R syntax can be a challenge to learn
Getting the environment ready

I decided to install a Linux VM to play with. You can also install R and R-studio (the R IDE) on Windows or Mac. I decided to start with Ubuntu Server. I first installed the usual things like a GUI, then some handy things like a terminal emulator, Firefox and the like, and finished with installing R and R-studio.

So first download and install Ubuntu Server (next, next, finish)

sudo apt-get update
sudo apt-get install aptitude

# Install a GUI
sudo aptitude install --without-recommends ubuntu-desktop

# Install the VirtualBox Guest Additions
sudo apt-get install build-essential linux-headers-$(uname -r)
# Install the Guest Additions themselves (first mount the ISO image which is part of VirtualBox, then run the installer)

# Install the below to make Dash (Unity search) work
# http://askubuntu.com/questions/125843/dash-search-gives-no-result
sudo apt-get install unity-lens-applications unity-lens-files

# A shutdown button might come in handy
sudo apt-get install indicator-session

# Might come in handy: browser and a fancy terminal application
sudo apt-get install firefox terminator

# For the installation of R I used the following as inspiration: https://www.r-bloggers.com/how-to-install-r-on-linux-ubuntu-16-04-xenial-xerus/
echo "deb http://cran.rstudio.com/bin/linux/ubuntu xenial/" | sudo tee -a /etc/apt/sources.list
gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
sudo apt-get update
sudo apt-get install r-base r-base-dev

# For the installation of R-studio I used: https://mikewilliamson.wordpress.com/2016/11/14/installing-r-studio-on-ubuntu-16-10/

wget http://ftp.ca.debian.org/debian/pool/main/g/gstreamer0.10/libgstreamer0.10-0_0.10.36-1.5_amd64.deb
wget http://ftp.ca.debian.org/debian/pool/main/g/gst-plugins-base0.10/libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo dpkg -i libgstreamer0.10-0_0.10.36-1.5_amd64.deb
sudo dpkg -i libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo apt-mark hold libgstreamer-plugins-base0.10-0
sudo apt-mark hold libgstreamer0.10

wget https://download1.rstudio.org/rstudio-1.0.136-amd64.deb
sudo dpkg -i rstudio-1.0.136-amd64.deb
sudo apt-get -f install

Doing a random forest in R

R needs some libraries to do random forests and create nice plots. First give the following commands:

#to do random forests
install.packages("randomForest")

#to work with R markdown language
install.packages("knitr")

#to create nice plots
install.packages("ggplot2")

In order to get help on a library you can give the following command which will give you more information on the library.

library(help = "randomForest")

 Of course, the randomForest implementation does have some specifics:

  • it uses the reference implementation based on CART trees
  • it is biased in favor of continuous variables and variables with many categories

A simple program to do a random forest looks like this:

#load libraries
library(randomForest)
library(knitr)
library(ggplot2)

#random numbers after the set.seed(10) are reproducible if I do set.seed(10) again
set.seed(10)

#create a training sample of 45 rows from the iris dataset. replace = FALSE samples without replacement, so each row can appear only once; replace = TRUE would allow duplicate rows in the sample
idx_train <- sample(1:nrow(iris), 45, replace = FALSE)

#logical vector marking the rows that are not in the training sample (the test set)
tf_test <- !1:nrow(iris) %in% idx_train

#the column ncol(iris) is the last column of the iris dataset. this is not a feature column but a classification column
feature_columns <- 1:(ncol(iris)-1)

#generate a randomForest.
#use the feature columns from training set for this
#iris[idx_train, ncol(iris)] indicates the classification column
#importance=TRUE indicates the importance of features in determining the classification should be determined
#y = iris[idx_train, ncol(iris)] gives the classifications for the provided data
#ntree=1000 indicates 1000 random trees will be generated
model <- randomForest(iris[idx_train, feature_columns], y = iris[idx_train, ncol(iris)], importance = TRUE, ntree = 1000)

#print the model
#printing the model shows how the sample dataset is distributed among classes. The sum of the sample classifications is 45, which is the sample size. The OOB rate is the 'out of bag' estimate of the overall classification error.

print(model)

#we use the model to predict the class based on the feature columns of the dataset (minus the sample used to train the model).
response <- predict(model, iris[tf_test, feature_columns])

#determine the number of correct classifications
correct <- response == iris[tf_test, ncol(iris)]

#determine the percentage of correct classifications
sum(correct) / length(correct)

#print a variable importance (varImp) plot of the randomForest
varImpPlot(model)

#in this dataset the petal length and width are more important measures to determine the class than the sepal length and width.
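
As a possible follow-up to the listing above (not part of the original code), a confusion matrix gives a more detailed view of which classes are misclassified than the single accuracy number:

#cross-tabulate predicted versus actual classes for the test rows
table(predicted = response, actual = iris[tf_test, ncol(iris)])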

The post Machine learning: Getting started with random forests in R appeared first on AMIS Oracle and Java Blog.

Oracle Code : See you there!

Tim Hall - Fri, 2017-04-07 01:30

You may have seen a lot of tweets (#OracleCode) recently about the Oracle Code events around the world.

The content of the events is rather different to the typical Oracle events I go to, so it will be a good opportunity for me to learn some new stuff.

I’ll be speaking at two of the European events this year.

They are Oracle events, so there is bound to be an Oracle spin on things, but I think it’s a welcome change of tack for Oracle to acknowledge that they are not always the centre of the universe in the minds of developers. If there is an event near you, check it out and see what is happening in the development world these days.

Cheers

Tim…

Oracle Code : See you there! was first posted on April 7, 2017 at 7:30 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Workaround for ADF BC View Object Attribute Order Problem in JDeveloper 12c

Andrejus Baranovski - Thu, 2017-04-06 21:23
I'm sure almost every ADF developer has faced this issue sooner or later. When you create a VO based on an EO, JDEV gives you an alphabetically ordered list of attributes. As a result, the order of attributes in the EO and the VO becomes different. While this doesn't influence runtime functionality, it is quite annoying for application maintenance. It is hard to match attributes between VO and EO, and the developer needs to search through the list to locate the attribute he is looking for. But there is a workaround, which I will describe here.

Let's look at the problem first. Assume we have an Employees EO; the attributes are generated in the same order as the DB table columns:


Now, if you use the VO creation wizard, the list of attributes will be displayed in alphabetic order. This is frustrating:


Without any other obvious choice, a developer would select the EO attributes in the order in which they are listed:


But wait, there is a workaround. Don't select all attributes; instead select only the EO item itself. Then use the Add button to add the entire list of EO attributes:


This time the attributes will be added in the original order, as it is set in the EO.


Enjoy this small, but useful hint.

OUAF 4.3.0.4.0 Release Summary

Anthony Shorten - Thu, 2017-04-06 20:28

The next release of the Oracle Utilities Application Framework (4.3.0.4.0) is in the final stages of implementation across our product lines over the next few months. This release improves the existing Oracle Utilities Application Framework with exciting new features and enhancements to existing features for our cloud and non-cloud implementations. Here is a summary of the key features of the new Oracle Utilities Application Framework.

Main Features

CMA Improvements

The following highlights some improvements to CMA processing.

Ad-hoc Migration Requests

A new migration request BO has been provided to allow for building ‘ad-hoc’ migration requests using a list of specific objects.  It’s called the “entity list” migration request.

A special zone is included to find records to include in the migration request.  This zone allows you to choose a maintenance object that is configured for CMA and enter search criteria to get a list of objects to choose.  The zone supports linking one or more objects for the same MO en masse.


Once records are linked, a zone allows you to view the existing records and remove any if needed.

Selection

Grouping Migration Requests

Migration requests may now be grouped so that you can maintain more granular migration requests that get grouped together to orchestrate a single export of data for a ‘wholesale’ migration.  The framework supplies a new ‘group’ migration request that includes other migration requests that logically group migration plans.  Edge products or implementations may include this migration request into their own migration request.


Mass Actions During Migration Import Approval

When importing data sets, a user may now perform mass actions on migration objects to approve or reject or mark as ‘needs review’.


Groovy Library Support

Implementers may now define a Groovy library script for common functionality that may be included in other Groovy scripts.

There’s a new script type:


Scripts of this type define a Groovy Library Interface step type to list the Groovy methods defined within the script that are available for use by other scripts.


Additional script steps using the Groovy Member step type are used to define the Groovy code that the script implements.

Groovy scripts that choose to reference the Groovy Library Script can use the createLibraryScript method provided by the system to instantiate the library interface.

Search Menu Capability

A new option in the toolbar allows a user to search for a page rather than using the menu to find the desired page.


All menu items whose label matches what the user types are shown (as you type):


Additional Features

The following is a subset of additional features that are included.   Refer to the published release notes for more details.

  • URI validation / substitution. Any place where a URI is configured can now use substitution variables to support transparency across environments. The fully substituted value can also be validated against a whitelist for added security.
  • Minimizing the dashboard suppresses refresh. This allows a user to improve response when navigating throughout the system by delaying the refresh of zones in the dashboard while it is minimized.
  • New support for UI design. Input maps may now support half width sections.  Both display and input maps may support “floating” half width sections that fill in available space on the UI based on what is displayed.
  • Individual batch controls may now be secured independently.
  • Ad-hoc batch parameters are supplied to all batch related plug-in spots. Additionally, plug-in driven batch programs may now support ad-hoc parameters.
  • Elements in a schema that include the private=true attribute will no longer appear in the WSDL of any Inbound Web Service based upon that schema.

E-Business Suite Technology Stack Blog in Migration

Steven Chan - Thu, 2017-04-06 18:05

This blog is being migrated to a new blogging platform (at last!). This is our fifth migration since 2006, so I expect a bit of reorganization of content.  We're going on hiatus for a bit until the dust settles.

Heads up: all comments posted from now to the new blog's appearance will be lost. If you post a comment that's gotten lost in the transition, please re-post when the new blog is up and running.


Categories: APPS Blogs
