Feed aggregator

Easing into the Cloud for Oracle WebCenter Investments

WebCenter Team - Wed, 2016-10-19 08:17

Author: Marcus Diaz, Senior Principal Product Manager, Oracle


For customers who have invested in traditional on-premise deployments of IT applications to support their business objectives, the growing push to the cloud can often appear to be a daunting challenge. But the good news is that it is not an “all or nothing” proposition when you consider taking a hybrid approach as you step towards cloud adoption. As Figure 1 below shows, 71% of customers are taking a hybrid cloud approach, which combines a conventional on-premise IT application with some form of cloud technology.


Figure 1: Hybrid Cloud Adoption

Lift and Shift is a term used quite a bit these days by various cloud vendors when talking about the push to the cloud. While exact definitions vary, we at Oracle use this term to mean copying (or cloning) an on-premises Oracle WebCenter Content, Imaging, Portal or Sites environment into Oracle’s public cloud offerings. The targets for these environments can be either Oracle’s Infrastructure as a Service (IaaS) Compute cloud or Oracle’s Java Cloud Service (JCS).

All of the current 12c releases of the WebCenter product family (Content, Imaging, Capture, Portal & Sites) are certified to run in these Oracle cloud environments.

Oracle’s Infrastructure as a Service (IaaS) Compute cloud environment is a cloud based virtualization service for deploying your applications. You get a virtualized operating system environment with a number of pre-allocated CPU cores (Oracle calls them OCPU’s) and storage running on Oracle cloud infrastructure. As an administrator, you would do all the same things that you would have done to install the Oracle WebCenter Stack into your own hardware but instead you do it remotely using a secure connection to your Oracle cloud environment.

Oracle’s Java Cloud Service (JCS) cloud environments are similar to the IaaS/Compute cloud with one big difference: JCS comes with a pre-provisioned Oracle WebLogic Application Server. You still get a virtualized operating system environment with a number of pre-allocated CPU cores, but instead of starting the application installation process at the operating system level, you start with an already running WLS server and only need to install the middleware applications. An additional benefit of the JCS environment is that WLS application server functions such as monitoring, backup, update & scale-out are integrated into the JCS cloud administration console.

When you lift and shift your WebCenter on-premise applications to the cloud, these cloud instances can be used for testing, development, or production environments. For production environments, the high availability standard reference architectures that the WebCenter products document & support for clustering and load balancing are supported as well. Figure 2 below shows the Oracle WebCenter Content & Imaging reference high availability architecture.


Figure 2: Oracle WebCenter Content & Imaging Reference HA architecture

From a licensing perspective, it’s a case of Bring Your Own License (BYOL), meaning you can re-allocate the on-premise perpetual processor licenses that you already own and re-deploy them on Oracle’s cloud infrastructure instead of your own hardware. What you pay for is a subscription cost for the virtual cloud CPUs & storage that you use in the Oracle cloud. You can pay as you go (metered subscription) for the resources you use, or you can prepay (non-metered) for a fixed set of cloud resources.

In summary, with the availability of “lift-n-shift” support in Oracle’s compute cloud environments you’ve now got the best of both worlds – the flexibility of the Oracle cloud for scaling and off-loading your infrastructure costs while continuing to be able to use your existing WebCenter product family investments and the applications you’ve built around them.

To get started, connect with us via your Oracle WebCenter account representative or your Oracle point of contact. We will drive strategy sessions to work on a roadmap that works best for your short- and long-term needs. And, get the best of both worlds - leveraging your existing investments and benefits of the Cloud.

We know you have questions so please do reach out to us via the comments section or through your point of contact. For more information on our Content and Experience cloud portfolio, visit us at oracle.com/dx.

Data Visualization Desktop

Rittman Mead Consulting - Wed, 2016-10-19 06:17

Yesterday a new version of Data Visualization Desktop (DVD) was released. Since its first release, DVD has aimed to extend Oracle's Data Visualization portfolio by adding a desktop tool that brings data visualization capabilities directly to end users without the intervention of the IT department, in line with Gartner's bimodal IT.

The new version adds several capabilities to the existing product, like new visualization types, data sources and a wrangling option. This post will share the details of the new release's additional features.


After downloading DVD, the installation is pretty simple: just double-click on the Oracle_Data_Visualization_Desktop_V2_12_2_2_0_0.msi file, choose the installation folder and click on "Install".

Installation end

Once the installation is finished, be aware of the message on the last screen: it says that the correct version of R and a set of packages need to be installed in order to use Advanced Analytics with DVD. Those can be installed via the "Install Advanced Analytics" shortcut placed in Start Menu -> Programs -> Oracle.

This setup lets you choose the R installation location, installs R and then downloads the relevant packages from cran.us.r-project.org.

R Setup

New Visualisations

The first set of improvements in the new release concerns the out-of-the-box visualisations: a new set of graphs is now available:

  • List: Shows a list of the dimension's values together with a gradient colouring based on the measure selected

List View

  • Parallel Coordinates: Shows multiple dimensions on the same chart enhancing the ability to quickly get an insight about possible connections between them

Parallel View

  • Timeline: An effective way of showing time-related facts. Each fact is shown along a timeline, with one or more distinguishing attributes; the example shows the quantity shipped by day and city.

Timeline View

  • Network Diagrams: Chord, Circular, Network and Sankey diagrams are used to show the inter-relationships between elements

Network Views

Other visual enhancements include a multi-canvas layout that can be exported with a single click and a hierarchical or gradient colouring for the charts.

Data Sources

A lot of new data sources have been added to DVD, some of them still in beta phase. A bunch of new databases are now supported like Netezza, Amazon Aurora and PostgreSQL.

An interesting enhancement is the connection to Dropbox and Google Drive, allowing DVD to source files stored in the cloud. Finally, DVD's exposure to the Big Data world has been enhanced by the addition of connectivity to tools such as Apache Drill, Presto and Cassandra.

DVD Data Sources

Excel Editing

Excel sheets used as a data source can now be edited and the DVD project refreshed without the need to manually reload the spreadsheet.

Data Flows

There is a new component in DVD called Data Flow that allows the end user to perform some basic transformations of the data, like joining two datasets (even if they come from different sources), filtering, aggregating, adding columns based on custom formulas and storing the result on the local file system.

DVD Data Flows Options

In the example below two files coming from Hive (but the source can also be different) are joined and a subset of columns is selected and stored locally.

DVD Data Flows Options

Data Flows can be stored in DVD and re-executed upon request. The list of Data Flows is available under Data Sources -> Data Flows. In the next blog post I'll show a typical analyst use case in which Data Flow can help automate a series of data loading, cleansing and enriching steps.

Data Insights

Data Insights provides a way of quickly understanding the available dataset: by default it shows a series of graphs, one for every attribute, with the cardinality of each attribute's values. A drop-down menu allows you to base the same graphs on any measure defined in the dataset.

DVD Data Insights

BI Ask

The new DVD version also contains BI Ask, providing the ability to create queries in natural language, which are automatically interpreted and presented in suggested visualisations.

BI Ask

As you can read in this post, the new version of Data Visualization Desktop adds a series of really interesting features, enabling not only data visualisation but also data exploration and wrangling. In the next blog post we'll see a typical DVD use case and how the new Data Flow option can be used to couple data coming from various sources.

Categories: BI & Warehousing

Oracle Critical Patch Update for October 2016

Syed Jaffar - Wed, 2016-10-19 05:08
The Critical Patch Update for October 2016 was released on October 18th, 2016. Oracle strongly recommends applying the patches as soon as possible. 

Visit the URL below for more updates


Documentum story – Jobs in a high availability installation

Yann Neuhaus - Wed, 2016-10-19 04:55

When you have an installation with one Content Server (CS) you do not take care where the job will be running. It’s always on your single CS.
But how should you configure the jobs in case you have several CSs? Which jobs have to be executed and which ones not? Let's see that in this post.

When you have to run your jobs in a high availability installation you have to configure some files and objects.

Update the method_verb of the dm_agent_exec method:

API> retrieve,c,dm_method where object_name = 'agent_exec_method'
API> get,c,l,method_verb
API> set,c,l,method_verb
SET> ./dm_agent_exec -enable_ha_setup 1
API> get,c,l,method_verb
API> save,c,l
API> reinit,c


The java methods have been updated to be restartable:

update dm_method object set is_restartable=1 where method_type='java';
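To check that the flag was applied, a quick DQL query can be run against the repository; a sketch:

```sql
select object_name, is_restartable
from dm_method
where method_type = 'java';
```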


On our installation we use jms_max_wait_time_on_failures = 300 instead of the default (3000).
In server.ini (Primary Content Server) and server_HOSTNAME2_REPO01.ini (Remote Content Server), we have:
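The entry itself is a one-line setting in each ini file; a sketch (the [SERVER_STARTUP] section is the usual location, verify against your own server.ini):

```ini
[SERVER_STARTUP]
jms_max_wait_time_on_failures = 300
```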



Based on some issues we faced, for instance with the dce_clean job that ran twice when we had both JMSs projected to each CS, EMC advised us to project each JMS to its local CS only. With this configuration, in case the JMS is down on the primary CS, the job (using a java method) is started on the remote JMS via the remote CS.

Regarding which jobs have to be executed, I am describing only the ones that are used for the housekeeping.
So the question to answer is which job does what and what is “touched”, either metadata or/and content.

To verify that, check how many CS are used and where they are installed:

select object_name, r_host_name from dm_server_config
REPO1               HOSTNAME1.DOMAIN


Verify on which CS the jobs will run and “classify” them.
Check the job settings:

select object_name, target_server, is_inactive from dm_job

The following jobs work only on metadata; they can run anywhere, so the target_server has to be empty:

object_name             target_server   is_inactive
dm_ConsistencyChecker                   False
dm_DBWarning                            False
dm_FileReport                           False
dm_QueueMgt                             False
dm_StateOfDocbase                       False



The following jobs work only on content.


As we are using a NAS for the Data directory which is shared between both servers, only one of the two jobs has to run. Per default the target_server is defined. So for the one which has to run, the target_server has to be empty.

object_name                        target_server                            is_inactive
dm_ContentWarning                                                           False
dm_ContentWarningHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_DMClean                                                                  False
dm_DMCleanHOSTNAME2_REPO1          REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True

Metadata and Content

These following jobs are working on metadata and content.


Filescan scans the NAS content storage. As said above, it is shared and therefore the job only needs to be executed once: the target_server has to be empty so that the job can run on any server.

LogPurge also cleans files under $DOCUMENTUM/dba/log and its subfolders, which are obviously not shared; therefore both dm_LogPurge jobs have to run. You just have to use another start time to avoid an overlap when objects are removed from the repository.

object_name                     target_server                            is_inactive
dm_DMFilescan                                                            False
dm_DMFilescanHOSTNAME2_REPO1    REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_LogPurge                     REPO1.REPO1@HOSTNAME1.DOMAIN             False
dm_LogPurgeHOSTNAME2_REPO1      REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   False

Normally with this configuration your housekeeping jobs should be well configured.

One point you have to take care of is when you use DA to configure your jobs. Once you open the job properties, the “Designated Server” is set to one of your servers and not to “Any Running Server”, which means target_server = ‘ ‘. If you click the OK button, you will set the target server, and in case this CS is down, the job will fail because it cannot use the second CS.
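If a target server has been set by mistake, it can be cleared again so that the job may run on any available server; a hedged DQL sketch (dm_DMClean is just an example job name):

```sql
-- Clear the designated server so the job can run on any running CS
update dm_job objects
set target_server = ' '
where object_name = 'dm_DMClean';
```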


Cet article Documentum story – Jobs in a high availability installation est apparu en premier sur Blog dbi services.

Get the hostname of the executing server in BPEL

Darwin IT - Wed, 2016-10-19 04:48
This week I got involved in a question on the Oracle Forums about getting the hostname of the server executing the BPEL process. In itself this is not possible in BPEL. Also, if you have a long-running async process, the process gets dehydrated at several points (at a receive, wait, etc.). After an incoming signal, another server could process it further, so you can't be sure that one server will process it to the end.

However, using Java, you can get the hostname of the executing server quite easily. @AnatoliAtanasov suggested this question on StackOverflow. I thought it would be fun to try this out.

Although you can opt for creating an embedded java activity, I used my earlier article on SOA and Spring Contexts to have it in a separate bean. By the way, in contrast to my suggestions in the article, you don't have to create a separate spring context for every bean you use.

My java bean looks like:
package nl.darwinit.soasuite;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServerHostBeanImpl implements IServerHostBean {
    public ServerHostBeanImpl() {
    }

    public String getHostName(String hostNameDefault) {
        String hostName;
        InetAddress addr;
        try {
            addr = InetAddress.getLocalHost();
            hostName = addr.getHostName();
        } catch (UnknownHostException ex) {
            System.out.println("Hostname can not be resolved");
            hostName = hostNameDefault;
        }
        return hostName;
    }
}


The interface class I generated is:
package nl.darwinit.soasuite;

public interface IServerHostBean {
    String getHostName(String hostNameDefault);
}

Then I defined a Spring Context, getHostNameContext, with the following content
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tool http://www.springframework.org/schema/tool/spring-tool.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://xmlns.oracle.com/weblogic/weblogic-sca META-INF/weblogic-sca.xsd">
<!--Spring Bean definitions go here-->
<sca:service name="GetHostService" target="ServerHostBeanImpl" type="nl.darwinit.soasuite.IServerHostBean"/>
<bean id="ServerHostBeanImpl" class="nl.darwinit.soasuite.ServerHostBeanImpl"/>
</beans>

After wiring the context to my BPEL the composite looks like:

Then, deploying and running it, gives the following output:

Nice, isn't it?

Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer

Yann Neuhaus - Wed, 2016-10-19 04:32

Load Balancers do not pass on the client IP address by default, so the WebLogic HTTP log file (access_log) records the Load Balancer's IP address instead of the client's.
This is sometimes a problem when diagnosing issues, and the Single Sign On configuration does not provide the user name in the HTTP log either.

In most cases, the Load Balancer can provide an additional header named “X-Forwarded-For”, but it needs to be configured by the Load Balancer administrators.
If the “X-Forwarded-For” header is provided, it can be fetched using the WebLogic Server HTTP extended logging.

To enable the WebLogic Server HTTP logging to fetch the “X-Forwarded-For” Header follow the steps below for each WebLogic Server in the WebLogic Domain:

  1. Browse to the WebLogic Domain administration console and sign in as an administrator user
  2. Open the servers list and select the first managed server
  3. Select the logging TAB and the HTTP sub-tab
  4. Open the advanced folder and change the format to “extended” and the Extended Logging Format Fields to:
    "cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes"
  5. Save
  6. Browse back to the servers list and repeat the steps for each WebLogic Server from the domain placed behind the load balancer.
  7. Activate the changes.
  8. Stop and restart the complete WebLogic domain.

After this, the WebLogic Servers HTTP Logging (access_log) should display the client IP address and not the Load Balancer one.

When using the WebLogic Server extended HTTP logging, the username field is not available any more.
This feature is described in the following Oracle MOS article:
Missing Username In Extended Http Logs (Doc ID 1240135.1)

To get the authenticated username displayed, an additional custom field provided by a custom Java class needs to be used.

Here is an example of such Java class:

import weblogic.servlet.logging.CustomELFLogger;
import weblogic.servlet.logging.FormatStringBuffer;
import weblogic.servlet.logging.HttpAccountingInfo;

/* This example outputs the authenticated user name into a
   custom field called MyCustomUserNameField */

public class MyCustomUserNameField implements CustomELFLogger {

  public void logField(HttpAccountingInfo metrics,
      FormatStringBuffer buff) {
    // Append the authenticated user name, or a dash if there is none
    buff.appendValueOrDash(metrics.getRemoteUser());
  }
}

The next step is to compile and create a jar library.

Set the environment by running the WebLogic setWLSEnv.sh script.

javac MyCustomUserNameField.java

jar cvf MyCustomUserNameField.jar MyCustomUserNameField.class

Once done, copy the jar library file under the WebLogic Domain lib directory. This way, it will be made available in the class path of each WebLogic Server of this WebLogic Domain.

The WebLogic Server HTTP Extended log format can now be modified to include a custom field named “x-MyCustomUserNameField”.
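Combined with the X-Forwarded-For field configured earlier, the Extended Logging Format Fields entry would then look something like this (the field order is illustrative):

```
cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes x-MyCustomUserNameField
```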


Cet article Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer est apparu en premier sur Blog dbi services.

OBIEE, Big Data Discovery, and ODI security updates - October 2016

Rittman Mead Consulting - Wed, 2016-10-19 04:14

Oracle release their "Critical Patch Update" (CPU) notices every quarter, bundling together details of vulnerabilities and associated patches across their entire product line. October's was released yesterday, with a few entries of note in the analytics & DI space.

Each vulnerability is given a unique identifier (CVE-xxxx-xxxx) and a score out of ten. The scoring uses a common industry-standard scale (CVSS), based on how easy the vulnerability is to exploit and what is compromised (availability, data, etc). Ten is the worst, and I would crudely paraphrase it as generally meaning that someone can wander in, steal your data, change your data, and take your system offline. Lower than that, and it might be that exploitation requires extensive skills, or that the impact is much lower.

A final point to note is that the security patches that are released are not available for old versions of the software. For example, if you're on OBIEE or earlier, and it is affected by the vulnerability listed below (which I would assume it is), there is no security patch. So even if you don't want to update your version for the latest functionality, staying within support is an important thing to do and plan for. You can see the dates for OBIEE versions and when they go out of "Error Correction Support" here.

If you want more information on how Rittman Mead can help you plan, test, and carry out patching or upgrades, please do get in touch!

The vulnerabilities listed below are not a comprehensive view of an Oracle-based analytics/DI estate - things like the database itself, along with Web Logic Server, should also be checked. See the CPU itself for full details.

Big Data Discovery (BDD)
  • CVE-2015-3253
    • Affected versions: 1.1.1, 1.1.3, 1.2.0
    • Base score: 9.8
    • Action: upgrade to the latest version, 1.3.2. Note that the upgrade packages are on Oracle Software Delivery Cloud (née eDelivery)
  • CVE-2016-2107
    • Affected versions:,,,
    • Base score: 5.9
    • Action: apply bundle patch 161018 for your particular version (see MoS doc 2171485.1 for details)
BI Publisher ODI
  • CVE-2016-5602

    • Affected versions:,,,,
    • Base score: 5.7
    • The getInfo() ODI API could be used to expose passwords for data server connections.
    • More details in MoS doc 2188855.1
  • CVE-2016-5618

    • Affected versions:,,,,,
    • Base score: 3.1
    • This vulnerability documents the potential that a developer could take the master repository schema credentials and use them to grant themselves SUPERVISOR access. Even using the secure wallet, the credentials are deobfuscated on the local machine and therefore a malicious developer could still access the credentials in theory.
    • More details in MoS doc 2188871.1
Categories: BI & Warehousing

SQL group by query shenanigans

Tom Kyte - Wed, 2016-10-19 03:26
Hi Chris or Connor, Saw you guys at OOW so I thought I'd toss a basic SQL query to you. I'll use the HR.EMPLOYEES table to represent my problem, so forgive me if it's a bit contrived. I'd like to construct a query to sum the salaries grouped by ...
Categories: DBA Blogs

Grant Access on Table

Tom Kyte - Wed, 2016-10-19 03:26
I've given grant to one user on a table but user unable to access the table though I'm getting output as grant succeeded. The scenario is 1. There is one database A on remote location. 2. A is trying to access some table on database B(place on othe...
Categories: DBA Blogs

View that opens and runs once. The next time it's opened it hangs.

Tom Kyte - Wed, 2016-10-19 03:26
I have a view that when you open it in something like TOAD, MS Access, or SSMS it opens fine the first time. The next time you open it or select from it, it hangs. Creating the view I get no errors or warnings. If I open the view i...
Categories: DBA Blogs

Bulk Collection Save Exception

Tom Kyte - Wed, 2016-10-19 03:26
Dear Tom, Please help me on the below... we are having the BULK COLLECT option which will save exceptions like below: FORALL i in 1..tab.count SAVE EXCEPTIONS INSERT INTO table values(obj(i)); exception when excep_bulk_err then ...
Categories: DBA Blogs

Oracle RAC without ASM

Tom Kyte - Wed, 2016-10-19 03:26
Hi My aim is to install oracle rac 11gR2 without ASM. What's required steps to achieve this and this system (rac installation) and what's pros / cons for this kind of installation . Regards, Oussema
Categories: DBA Blogs

DBLink for Local Tables

Tom Kyte - Wed, 2016-10-19 03:26
Nice day i'm from peru so my english is not the best. The database is a: Oracle Database 11g Release - 64bit Production PL/SQL Release - Production "CORE Production" TNS for Linux: Version - Product...
Categories: DBA Blogs

Database PL/SQL developer

Tom Kyte - Wed, 2016-10-19 03:26
what are the roles and responsibilities of PL/SQL developer? what kind of knowledge required for oracle PL/SQL developer? what will be the future of PL/SQL developer?
Categories: DBA Blogs

PL/SQL Database Programming Question

Tom Kyte - Wed, 2016-10-19 03:26
I am struggling to figure out which LOOP statement to use. Here's the question: Each day, starting on Monday, the price will drop 5% from the previous day's price. Monday's sale price will be 5% less than what is stored in the database in the BB...
Categories: DBA Blogs

Oracle IaaS Workshop for EMEA Partners

Oracle Cloud Platform: Infrastructure as a Service Workshop for Partners ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Manage GitHub behind a proxy

Yann Neuhaus - Wed, 2016-10-19 02:00

I’m quite used to GitHub since I use it pretty often, but I had actually never tried to use it behind a proxy. In recent months, I was working on a project and I had to use GitHub to version control the repository that contained scripts, monitoring configurations, and so on… When setting up my local workstation (Windows) using the GUI, I faced an error showing that GitHub wasn’t able to connect to the repository, while I was able to access it using my Web Browser… This is the problem I faced some time ago and I just wanted to share this experience because even if I’m writing a lot of blogs related to Documentum, it is sometimes good to change your mind, you know… Therefore today is GitHub Day!


After some research and analysis (and you already understood it if you read the first paragraph of this blog), I thought that maybe a proxy was automatically set up in the Web Browser and that it would prevent the GitHub process from accessing the GitHub repository, and I was right! So GitHub behind a proxy, how can you manage that? Actually that's pretty simple, because everything is there; you just need to configure it. Unfortunately, I didn't find any option in the GUI that would allow you to do that, and therefore I had to use the Command Line Interface for that purpose. If there is a way to do that using the GUI, you are welcome to share!


Ok so let’s define some parameters:

  • PROXY_USER = The user’s name to be used for the Proxy Server
  • PROXY_PASSWORD = The password of the proxy_user
  • PROXY.SERVER.COM = The hostname of your Proxy Server
  • PORT = The port used by your Proxy Server in HTTP
  • PORT_S = The port used by your Proxy Server in HTTPS


With this information, you can execute the following commands to configure GitHub using the Command Line Interface (Git Shell on Windows). These two lines simply tell GitHub that it needs to use a proxy server in order to access the Internet properly:

git config --global http.proxy http://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT_S


If your Proxy Server is public (no authentication needed), then you can simplify these commands as follow:

git config --global http.proxy http://PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY.SERVER.COM:PORT_S


With this simple configuration, you should be good to go. Now you can decide, whenever you want, to remove this configuration again. That's also pretty simple, since you just have to unset it with the same kind of commands:

git config --global --unset http.proxy
git config --global --unset https.proxy


The last thing I wanted to show you is that if it is still not working, you can check what you entered previously and what is currently configured by executing the following commands:

git config --global --get http.proxy
git config --global --get https.proxy


This concludes this pretty small blog, but I really wanted to share it because I think it can help a lot of people!


Cet article Manage GitHub behind a proxy est apparu en premier sur Blog dbi services.

Microsoft Accounts Fail To Log In To Windows 10 with “User Profile Service failed the login” Error.

Jeff Moss - Wed, 2016-10-19 01:05

My kids are getting to the age where they can’t keep away from the laptop, various pads or the Smart TV to go online…time for some protection I thought.

I figured, for the Windows 10 laptops, that I’d use the Microsoft Accounts approach and use the “big brother” features there to stop the kids watching things they shouldn’t and restrict their access time.

First step was to convert my local account into a Microsoft one – simple enough and worked fine.

Next step was to create additional Microsoft accounts and then have them linked up as part of the “Family” – again, fine.

Then tell the PC to add those users – again all fine and simple to do.

All was going well up until now, but when I tried to log out of my working Microsoft account on the laptop and log in to one of the Family Microsoft accounts, it failed with the "User Profile Service failed the login" error:

(Screenshot: "User Profile Service failed the login" error dialog.)


After much googling and trying various things, the one which worked for me was to copy the directory C:\Users\Default from a working Windows 7 Ultimate machine onto the laptop with the problem (where the directory did not exist at all). The advice I found actually referred to copying from another Windows 10 machine, but I didn’t have one of those – only a Windows 7 one.

I then added the Family Microsoft accounts back in, and after logging out and logging in as one of these added accounts, everything worked fine!

I can’t be certain what the issue was, but various reading suggested an issue where the machine was upgraded from Windows 7/8 to 10 and where the local profile (C:\Users\Default) was either missing or corrupted. Copying in a working one from another machine fixed the issue in my case.


Oracle E-Business Suite 11i - October 2016 is Last Critical Patch Update

Starting with the April 2016 Critical Patch Update (CPU), Oracle E-Business Suite 11.5.10 CPU patches are only available to customers with additional-fee Tier 1 support contracts.  October 2016 is the last CPU patch for Oracle E-Business Suite 11i; as of December 2016, no more CPU patches will be available for it.  For 12.0, the last CPU patch was October 2015.

Even though there are no more security patches, many, if not most, vulnerabilities discovered and patched in Oracle E-Business Suite 12.x are also present and exploitable in 11i.  A significant number of these security bugs are SQL injection bugs which allow an attacker to execute SQL as the Oracle E-Business Suite APPS database account.  These attacks can easily compromise the entire application and database.

As there are no more security patches for 11i and 12.0, we strongly recommend that all 11i and 12.0 customers who have not yet upgraded to 12.x take immediate defensive steps to protect Oracle E-Business Suite 11i, especially those with Internet-facing modules such as iSupplier, iStore, iRecruitment, and iSupport.  A key layer of defense is Integrigy's web application firewall for Oracle E-Business Suite, AppDefend, which provides virtual patching for these security bugs along with additional protection from generic web application attacks like SQL injection and cross-site scripting (XSS), and from common Oracle E-Business Suite security misconfigurations.

Reference: AppDefend for the Oracle E-Business Suite

Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle Database Critical Patch Update October 2016: and Only

The list of Oracle Database versions supported for Critical Patch Updates (CPU) is getting shorter and shorter.  Starting with the October 2016 CPU, only and are supported.  In order to apply CPU security patches for all other Oracle versions, the database must be upgraded to or  As these are terminal database releases, the final CPU patch for is October 2020 and for is July 2021.  For those who have not yet applied 12c CPU patches, only Patch Set Updates (PSU) are available, which include both security fixes and a large number of high-priority fixes; Security Patch Updates (SPU), which include only security fixes, are not available for 12c.

The October 2016 CPU fixes 12 security bugs in 7 database components.  Only the APEX (Application Express) security bug is remotely exploitable without authentication; as with all APEX fixes, this is delivered as a separate patch that upgrades APEX.

This CPU should be considered HIGH risk due to the 5 security bugs that require only the CREATE SESSION privilege to exploit.  These bugs can be exploited by any database user and can be used to compromise the entire database.

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs
