Feed aggregator

Dataguard vs Shareplex for DR

Tom Kyte - Thu, 2017-02-09 20:46
Hello Tom, I am looking for advice on the DR setup. I am with a company where the design of the database and DR setup is done by the vendor and they are using Shareplex instead of Dataguard. I am having a hard time convincing the management to c...
Categories: DBA Blogs

Fixing blank charts on ambari home page (Hortonworks Data Platform)

Jeff Moss - Thu, 2017-02-09 15:54

I recently created a 4 node container based (Proxmox LXC) Hortonworks Data Platform 2.5 Hadoop cluster and all went well, apart from the fact that all the charts on the Ambari homepage were blank or showing “N/A”, like this:

An outline of the environment:

  • 4 node cluster of LXC containers on Proxmox host
  • Centos 7 Linux OS
  • Nodes are called bishdp0[1-4], all created from same template and identical configuration
  • All containers are on the same network
  • DNS Server also available on same network and all hosts can resolve each other via DNS
  • Hortonworks Data Platform version 2.5
  • Proxmox host sits on a corporate network and the host has iptables rules set to allow the containers to reach the internet via the corporate proxy server, e.g. for yum access
  • Other than the blank charts everything appears to be working fine

After much reading around, it turned out that I hadn’t quite set up the proxy settings correctly; specifically, I hadn’t told Ambari to ignore some hosts, namely the bishdp0[1-4] hosts on the network, when proxying. I can’t find a 2.5 HDP version of the document for setting up proxying for Ambari, but the 2.2 instructions worked.

Steps I took to fix the problem:

First, stop the services on the cluster. Log on to the node running the Ambari Server, where I have a script called ambari-stop-all-services.sh, which I created based on part of this article. Thanks slm.

Run the script:
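The script body isn’t reproduced here, but its core is a single Ambari REST call that puts every service into the INSTALLED (i.e. stopped) state. A minimal Python sketch of that call, assuming hypothetical host, cluster name and header values:

```python
import json
import urllib.request

AMBARI = "http://bishdp01:8080"   # Ambari server URL (hypothetical)
CLUSTER = "mycluster"             # cluster name as registered in Ambari (hypothetical)

def stop_all_services_request(base=AMBARI, cluster=CLUSTER):
    """Build the PUT request that moves every service to the INSTALLED
    (i.e. stopped) state - the heart of ambari-stop-all-services.sh."""
    body = {
        "RequestInfo": {"context": "Stop all services via REST"},
        "Body": {"ServiceInfo": {"state": "INSTALLED"}},
    }
    return urllib.request.Request(
        url=f"{base}/api/v1/clusters/{cluster}/services",
        data=json.dumps(body).encode(),
        method="PUT",
        headers={"X-Requested-By": "ambari"},  # required by the Ambari API
    )

req = stop_all_services_request()
# urllib.request.urlopen(req) would actually stop the cluster (add basic auth first)
```

Starting everything again is the same call with `"state": "STARTED"`.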


Now stop the Ambari agent on all the servers:

pdsh -w bishdp0[1-4] service ambari-agent stop

Now stop the Ambari Server:

service ambari-server stop

Now edit the Ambari environment script:

vi /var/lib/ambari-server/ambari-env.sh

Look for the line that begins “export AMBARI_JVM_ARGS” and ensure it has entries for the following parameters:

  • http.proxyHost
  • http.proxyPort
  • http.proxyUser
  • http.proxyPassword
  • http.nonProxyHosts

It’s the last one that was missing in my case, which meant that Ambari was trying to go to the proxy server even for these containers on the network.

After editing, the line looked like this (I’ve redacted the specifics – just replace the entries with values suited to your environment):

export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS’ -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Dhttp.proxyHost=<proxy IP> -Dhttp.proxyPort=<proxy port> -Dhttp.proxyUser=<user> -Dhttp.proxyPassword=<password> -Dhttp.nonProxyHosts=<*.domain> -Djava.security.auth.login.config=$ROOT/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false’
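The nonProxyHosts value is a “|”-separated list of hostnames in which “*” acts as a wildcard. As a rough illustration of how the JVM decides which hosts bypass the proxy, here is a small Python sketch (the domain names are hypothetical):

```python
from fnmatch import fnmatch

def bypasses_proxy(host, non_proxy_hosts):
    """Rough model of the JVM's http.nonProxyHosts matching:
    a '|'-separated list of names, with '*' as a wildcard."""
    return any(fnmatch(host, pattern) for pattern in non_proxy_hosts.split("|"))

# With -Dhttp.nonProxyHosts=*.mydomain.local the cluster nodes are reached
# directly, while external repositories still go through the proxy:
print(bypasses_proxy("bishdp01.mydomain.local", "*.mydomain.local"))  # True
print(bypasses_proxy("repo.hortonworks.com", "*.mydomain.local"))     # False
```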

Now restart everything, Ambari server first:

service ambari-server start

…then the agents on all nodes (pdsh is great – thanks Robin Moffatt for your notes!)

pdsh -w bishdp0[1-4] service ambari-agent start

And finally start the services on the cluster using the ambari-start-all-services.sh script.


After I did this, the charts started showing details:

Alliance 2017

Jim Marion - Thu, 2017-02-09 14:07

We are in the final countdown for Alliance 2017. I am really excited about this conference. It is such a great opportunity to meet up with old friends as well as make new ones. The Alliance session content and delivery is extremely high caliber. HEUG is a very engaged community. The MGM is a pretty amazing facility as well.

At GreyHeller our week starts with an amazing Monday workshop. On Monday, February 27th from 10:00 AM to 2:30 PM, Larry Grey and I will be hosting a pre-conference workshop titled Advanced PeopleTools Development Workshop with Jim Marion (session 4378). Our objective is to give you hands-on experience with all of the new PeopleTools features, including Fluid and the Event Mapping Framework. But wait, there's more... Fluid itself is a new development paradigm with a lot of flexibility. In this session you will learn how to use CSS and JavaScript to further enhance the PeopleSoft user experience. For more details and registration, visit the Monday Workshops page on the Alliance conference site.

On Tuesday morning at 8:30 AM we join our partner MODO LABS to present the session A Student for Life - Engaging prospective, new, current, and past students has never been easier. In this session you will see how MODO LABS, partnered with GreyHeller, makes it trivial to embed PeopleSoft content in a native, secure user experience, giving users access to native, on-device capabilities such as maps, notifications, etc.

On Tuesday, February 28th from 09:45 AM to 10:45 AM, our friends from the University of Massachusetts will be sharing about their experience mobilizing and modernizing the Student Center (session 4036) at their UMass Boston, Dartmouth and Lowell campuses using our PeopleMobile™ product. It really is amazing how our product transforms the PeopleSoft user experience. Definitely a "must see."

On Tuesday, February 28th from 1:15 PM to 3:15 PM, Larry and I will be leading the PeopleSoft Cloud to Ground workshop – Cloud Adoption Strategies and Best Practices (session 4381). In the ERP space, Hybrid "is the new black." There are a lot of great cloud enhancements to a traditional ERP. Anyone thinking about implementing cloud is also thinking about backend data integrations. But what about the user experience? You don't have to settle for a disjointed user experience. In this session, Larry and I will show you how your organization can integrate the UX layer into a single, common user experience.

On Thursday, March 2nd at 9:15 AM, my friend Felicia Kendall from UCF will be sharing about their highly publicized breach (including costs) and their experiences with securing PeopleSoft afterwards. This should prove to be a very valuable session. The session is titled University of Central Florida: Post-breach Mitigation & Prevention Strategy (session 4108).

While attending Alliance, be sure to wander through the demo grounds. Our booth (#301) will be right beside the Oracle booth. I'm looking forward to wandering through and visiting with my friends from Oracle, Ciber, Deloitte, Gideon Taylor, Intrasee, Smart ERP, Accenture, Presence of IT, MODO LABS, Huron, Sierra-Cedar, and many more.

See you on the floor!

Linux – Securing your important files with XFS extended attributes

Yann Neuhaus - Thu, 2017-02-09 09:19

Let’s say the tnsnames.ora is quite an important file on your system, and you want to make sure that you notice when someone changes it. Taking a look at the modification time of that file would be a good idea, wouldn’t it?

By default, the ls -l command shows only the modification time (mtime). In my case, I know that the tnsnames.ora was changed on “Feb 9 11:24”.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 1791 Feb  9 11:24 tnsnames.ora

But in reality, more time stamps are stored: the atime, the ctime and the mtime.

  • atime is the access time (only maintained if the filesystem is not mounted with the noatime option)
  • ctime is the change time, meaning the inode was changed, e.g. with the chmod command
  • mtime is the modification time, meaning the content changed

The ctime is often misinterpreted as “creation time”, but this is not the case. The creation time of a file is not recorded with XFS. There are other file systems that can do it, like ZFS, but XFS does not support “creation time”. You can use the stat command to see all time stamps in one shot.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2137            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 163094097   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:24:00.243281419 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:24:00.254281404 +0100
 Birth: -
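The same three timestamps can also be read programmatically. A quick Python sketch on a throwaway file (not the real tnsnames.ora) shows that a chmod moves the ctime while leaving the mtime alone:

```python
import os
import tempfile

# Create a throwaway demo file.
fd, path = tempfile.mkstemp()
os.write(fd, b"demo content")
os.close(fd)

st = os.stat(path)
print("atime:", st.st_atime)   # access time
print("mtime:", st.st_mtime)   # content modification time
print("ctime:", st.st_ctime)   # inode change time (not creation time!)

os.chmod(path, 0o600)          # inode change only, content untouched
st2 = os.stat(path)
print("mtime unchanged:", st2.st_mtime == st.st_mtime)
os.unlink(path)
```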

Ok. Now someone comes along and changes the tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] vi tnsnames.ora

A change was done, and the modification time of that file changed immediately.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 2136 Feb  9 11:31 tnsnames.ora

And other timestamps might have changed as well, like the atime and ctime.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:31:06.733673663 +0100
Change: 2017-02-09 11:31:06.738673656 +0100
 Birth: -

Cool, now I know that the file was changed at “Feb 9 11:31”. But how reliable is that information? With the touch command, I can easily change the modification time to any value I like, e.g. I can set it to the same date as beforehand.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -m --date="Feb  9 11:24" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 2136 Feb  9 11:24 tnsnames.ora

Now I have set the modification time to almost the same value as it was beforehand (almost, because the microseconds are different). Besides that, the access and the change time are different.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:24:00.000000000 +0100
Change: 2017-02-09 11:36:51.631671612 +0100
 Birth: -

No problem, I can make it even more precise by specifying the whole date format including microseconds and time zone.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -m --date="2017-02-09 11:24:00.243281419 +0100" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:39:41.775993054 +0100
 Birth: -

And if I want to, I can even change the access time.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -a --date="2017-02-09 11:24:00.243281419 +0100" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:24:00.243281419 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:42:22.935350329 +0100
 Birth: -
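What touch does here can of course also be done from any program. A small Python sketch (again on a throwaway file) forges the mtime the same way, and shows that the kernel still records the forgery in the ctime:

```python
import os
import tempfile
from datetime import datetime

fd, path = tempfile.mkstemp()
os.write(fd, b"TNS entries...")
os.close(fd)

st = os.stat(path)
forged = datetime(2017, 2, 9, 11, 24, 0).timestamp()

# Equivalent of: touch -m --date="Feb 9 11:24" <file>
os.utime(path, (st.st_atime, forged))

st2 = os.stat(path)
print(st2.st_mtime == forged)          # the mtime now lies
print(st2.st_ctime >= st.st_ctime)     # but the utime() call itself bumped the ctime
os.unlink(path)
```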

Only the ctime (change time) is not so easy to change, at least not with the touch command. For changing the ctime you need to invoke the file system debugger or something like that. In the end, monitoring my tnsnames.ora for changes by time stamps is not very precise. So why not use the XFS extended attribute feature to help? E.g. I could create md5 checksums, and when the checksum differs, I know that the content was changed. Let’s do it with the root user.

As root:

[root@dbidg03 admin]# getfattr -d tnsnames.ora
[root@dbidg03 admin]#

[root@dbidg03 admin]# md5sum tnsnames.ora
d135c0ebf51f68feda895dac8631a999  tnsnames.ora

[root@dbidg03 admin]# setfattr -n user.md5sum -v d135c0ebf51f68feda895dac8631a999 tnsnames.ora
[root@dbidg03 admin]#
[root@dbidg03 admin]# getfattr -d tnsnames.ora
# file: tnsnames.ora
user.md5sum="d135c0ebf51f68feda895dac8631a999"

But this is also not so secure. Even if done by root, it can easily be removed by the oracle user.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora
# file: tnsnames.ora
user.md5sum="d135c0ebf51f68feda895dac8631a999"

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] setfattr -x user.md5sum tnsnames.ora
oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora

To overcome this issue, XFS uses two disjoint attribute namespaces associated with every filesystem object: the root (or trusted) and user address spaces. The root address space is accessible only to the superuser, and then only by specifying a flag argument to the function call. Other users (like the oracle user in my case) will not see, or be able to modify, attributes in the root address space. The user address space is protected by the normal file permissions mechanism, so the owner of the file can decide who is able to see and/or modify the value of attributes on any particular file.
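From a program the two namespaces look like this. A minimal Python sketch (Linux only; the filesystem must support extended attributes, as XFS and ext4 do): setting a user.* attribute works for the file owner, while the same call with trusted.* would raise PermissionError for a non-root user:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

try:
    # user namespace: governed by the normal file permissions
    os.setxattr(path, "user.md5sum", b"d135c0ebf51f68feda895dac8631a999")
    value = os.getxattr(path, "user.md5sum")
    # trusted namespace: the identical call with "trusted.md5sum" would
    # raise PermissionError here unless we were running as root
except (OSError, AttributeError):
    value = None   # platform or filesystem without xattr support
print(value)
os.unlink(path)
```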

Ok. So let’s do it again, using the root (trusted) address space.

[root@dbidg03 admin]# setfattr -n trusted.md5sum -v "d135c0ebf51f68feda895dac8631a999" tnsnames.ora
[root@dbidg03 admin]# getfattr -n trusted.md5sum tnsnames.ora
# file: tnsnames.ora
trusted.md5sum="d135c0ebf51f68feda895dac8631a999"

However, from the oracle user's point of view, no attributes exist, even if you know the attribute you are looking for.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora
oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -n trusted.md5sum tnsnames.ora
tnsnames.ora: trusted.md5sum: No such attribute

You can take it even further by adding another root attribute, e.g. the time when you created the md5 checksum.

[root@dbidg03 admin]# setfattr -n trusted.md5sumtime -v "09.02.2018 13:00:00" tnsnames.ora
[root@dbidg03 admin]# getfattr -n trusted.md5sumtime tnsnames.ora
# file: tnsnames.ora
trusted.md5sumtime="09.02.2018 13:00:00"

[root@dbidg03 admin]# getfattr -n trusted.md5sum tnsnames.ora
# file: tnsnames.ora
trusted.md5sum="d135c0ebf51f68feda895dac8631a999"

Now you have a good chance of finding out whether the file content was changed, by simply checking if the file has a different checksum.
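The verification itself is then a one-liner: recompute the md5 and compare it with the stored attribute. A small Python sketch of the check, run against a throwaway file instead of the real tnsnames.ora:

```python
import hashlib
import os
import tempfile

def md5_of(path):
    """Checksum the file content, like the md5sum utility does."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

fd, path = tempfile.mkstemp()
os.write(fd, b"DB1 = (DESCRIPTION = ...)\n")
os.close(fd)

stored = md5_of(path)   # the value you would keep in trusted.md5sum

with open(path, "ab") as f:   # someone edits the file...
    f.write(b"EVIL = (DESCRIPTION = ...)\n")

tampered = md5_of(path) != stored
print(tampered)   # True: the change is detected regardless of any forged timestamps
os.unlink(path)
```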


XFS extended attributes are a quite powerful feature and you can use them in a lot of scenarios. Take care that you have a backup solution that supports extended attributes, otherwise you will lose all the information once you restore your data.


The post Linux – Securing your important files with XFS extended attributes appeared first on Blog dbi services.

Index bouncy scan

Jonathan Lewis - Thu, 2017-02-09 07:05

There’s a thread running on OTN at present about deleting huge volumes of duplicated data from a table (to reduce it from 1.1 billion to about 22 million rows). The thread isn’t what I’m going to talk about, though, other than quoting some numbers from it to explain what this post is about.

An overview of the requirement suggests that a file of about 2.2 million rows is loaded into the table every week with (historically) no attempt to delete duplicates. As a file is loaded into the table every row gets the same timestamp, which is the sysdate at load time. I thought it would be useful to know how many different timestamps there were in the whole table.  (From an averaging viewpoint, 1.1 billion rows at 2.2 million rows per week suggests about 500 dates/files/weeks – or about 9.5 years – but since the table relates to “customer accounts” it seems likely that the file was originally smaller and has grown over time, which means the history may be rather longer than that.)

Conveniently there is an index on the “input_user_date” column in the table so we might feel happy running a query that simply does:

select  distinct input_user_date
from    customer_accounts       -- actual table name withheld
order by
        input_user_date;

We might then refine the query to do a count(*) aggregate, or do some analytics to find any strange gaps in the timing of the weekly loads. However, all I’m really interested in is the number of dates because I’ve suggested we could de-duplicate the data by running a PL/SQL process that does a simple job for each date in turn, and I want to get an idea of how many times that job will run so that I can estimate how long the entire process might take.

The trouble with the basic query is that the table is (as you probably noticed) rather large, and so is the index. If we assume 8 bytes (which includes the length byte) for a date, 7 bytes for the rowid, 4 bytes overhead, and 100% packing we get about 420 index entries per leaf block, so with 1.1 billion entries the index is about 2.6 million leaf blocks. If the index had been built with compression (which means you’d only be recording a date once per leaf block) it would still be about 1.6 million leaf blocks. Fortunately we wouldn’t have to do much “real” sorting to report just a list of distinct values, or even the count(*) for each date, if we made Oracle use an index full scan – but it’s still a lot of work to read 1.6 million blocks (possibly using single block reads) and do even something as simple as a running count as you go. So I whipped up a quick and dirty bit of PL/SQL to do the job.

declare
        m_d1 date := to_date('01-Jan-0001');
        m_d2 date := to_date('01-Jan-0001');
        m_ct number := 0;
begin
        loop
                select min(input_user_date) into m_d2
                from   customer_accounts        -- actual table name withheld
                where  input_user_date > m_d1;

                exit when m_d2 is null;

                m_ct := m_ct + 1;
                dbms_output.put_line('Count: ' || m_ct || '  Date: ' || m_d2);
                m_d1 := m_d2;

        end loop;
end;
/

The code assumes that the input_user_date hasn’t gone back to a silly date in the past to represent a “null date” (which shouldn’t exist anyway). If you want to use code like this but have a problem with a special “low-value”, then you would probably be safest adding a prequel SQL statement that selects the min(columnX) where columnX is not null to get the starting value, instead of using a constant as I have done.

The execution path for the SQL statement should be an index-only: “index range scan (min/max)” which typically requires only 3 or 4 logical I/Os to find the relevant item for each date (which compares well with the estimated 2,200,000 / 420 = 5,238 leaf blocks we would otherwise have to scan through for each date). Here’s the path you should see:

| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |     8 |            |          |
|   2 |   FIRST ROW                  |       |     1 |     8 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)| CA_I1 |     1 |     8 |     3   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   3 - access("INPUT_USER_DATE">:B1)
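Outside the database, the logic of the loop is easy to simulate. A Python sketch over an illustrative sorted key list shows why only one cheap probe per distinct value is needed:

```python
import bisect

def distinct_via_minmax(sorted_keys):
    """Walk the distinct values the way the PL/SQL loop does: each pass asks
    for the smallest key greater than the last one found, which is the
    analogue of one INDEX RANGE SCAN (MIN/MAX) probe."""
    found = []
    last = None
    while True:
        # "select min(key) ... where key > last"
        pos = 0 if last is None else bisect.bisect_right(sorted_keys, last)
        if pos >= len(sorted_keys):
            break          # "exit when m_d2 is null"
        last = sorted_keys[pos]
        found.append(last)
    return found

# 500 "dates" with 100 duplicate rows each, like the test data set below:
keys = sorted(d for d in range(500) for _ in range(100))
print(len(distinct_via_minmax(keys)))   # 500: one probe per distinct value
```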

I did build a little data set as a proof of concept – and produced a wonderful example of how the scale and the preceding events make a difference that requires you to look very closely at what has happened. I used a table t1 in my example with a column d1, but apart from the change in names the PL/SQL block was as above. Here's the code I used to create the data and prepare for the test:

create table t1 nologging as
select
        trunc(sysdate) + trunc((rownum - 1)/100) d1,
        rpad('x',100)   padding
from    all_objects     -- any row source with at least 50,000 rows
where   rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1')
alter table t1 modify d1 not null;

create index t1_i1 on t1(d1) nologging pctfree 95

select index_name, leaf_blocks from user_indexes;

alter system flush buffer_cache;

alter session set events '10046 trace name context forever, level 8';

My data set has 500 dates with 100 rows per date, and the pctfree setting for the index gives me an average of about 8 leaf blocks per date (for a total of 4,167 leaf blocks). It’s only a small index so I’m expecting to see just 2 or 3 LIOs per date, and a total of about 500 physical reads (one per date plus a handful for reading branch blocks). Here’s the output from running tkprof against the trace file:

SELECT MIN(D1) FROM T1 WHERE D1 > :B1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    501      0.00       0.01          0          0          0           0
Fetch      501      0.08       0.18       4093       1669          0         501
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1003      0.09       0.19       4093       1669          0         501

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=3 pr=64 pw=0 time=9131 us)
         1          1          1   FIRST ROW  (cr=3 pr=64 pw=0 time=9106 us cost=3 size=8 card=1)
         1          1          1    INDEX RANGE SCAN (MIN/MAX) T1_I1 (cr=3 pr=64 pw=0 time=9089 us cost=3 size=8 card=1)(object id 252520)

I’ve done a physical read of virtually every single block in the index; but I have done only 3 buffer gets per date – doing fewer buffer gets than physical reads.

I’ve been caught by two optimisations (which turned out to be “pessimisations” in my test): I’ve flushed the buffer cache, so the Oracle runtime engine has decided to consider “warming up” the cache by reading extra blocks from any popular-looking objects that I’m accessing, and the optimizer may have given the run-time engine enough information to allow it to recognise that this index is subject to range scans and could therefore be a suitable object to use while warming up. As you can see from the following extracts from session events and session activity stats – we’ve done a load of multiblock reads through the index.

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                               1           0           0.03        .031           6
db file scattered read                              136           0          13.54        .100           1

Name                                                                     Value
----                                                                     -----
physical reads                                                           4,095
physical reads cache                                                     4,095
physical read IO requests                                                  137
physical reads cache prefetch                                            3,958
physical reads prefetch warmup                                           3,958

This isn’t likely to happen, of course, in the production system where we’ll be starting with a fully loaded cache and the leaf blocks we need are (logically) spaced apart by several thousand intervening blocks.


I can’t remember who first brought this strategy to my attention – though I’m fairly sure it was one of my Russian colleagues, who has blogged about ways to work around what is effectively a limitation of the “index skip scan”. Apologies to the originator, and if you recognise your work here please add a comment with URL below.

Working with OBIEE Data in Excel using ODBC

Rittman Mead Consulting - Thu, 2017-02-09 04:09

Look at this picture. I'm sure you've recognised the favourite data analysis tool of all time - Excel.


But what you can't see in this picture is the data source for the table and charts. And this source is OBIEE's BI Server. Direct. Without exports or plugins!

Querying OBIEE Directly from Excel? With No Plugins? What Is Going On!

The OBIEE BI Server (nqsserver / OBIS) exposes an ODBC interface (look here if you live in a world full of Java and JDBC) which is used by Presentation Services and the Administration tool. A lesser-known benefit is that we can utilise this ODBC interface for our own needs. But there is a little problem with the OBIEE 12c client installation: its size. The full OBIEE client installation (and actually the only one possible) is more than 2 gigabytes and consists of more than 31 thousand files. Not a huge problem considering HDD sizes and prices, but not so good if you have an average-sized SSD.

And there is a second point to consider. We don’t want to give a full set of developer tools to an end-user. Even if our security won’t let them break anything, why would we stuff their heads with unnecessary things? Let's keep things simple.

So what I had in mind with this research was to make a set of OBIEE ODBC libraries as small as possible, avoiding a full installation by cutting out redundant pieces. I need a small "thing" I can easily copy to any computer and then use.

Disclaimer. Everything below is a result of our investigation. It’s not a supported functionality or Oracle’s recommendation.

I will not describe the process of the investigation in full detail as it is not too challenging. It's less a detective thriller and more a tedious story. But anyway, the main points will be highlighted.

Examine Working Copy

The first thing I needed to know was what changes Oracle's installer makes during an installation. Does it copy something to the Windows folder or does everything stay in its installation folder? Does it make any registry changes (apparently it does, but what exactly)?

For this task, I took a fresh Windows installation, created a dump of the registry and of the folder structure of the Windows folder, installed the OBIEE client using the normal installation process, made the same dumps again and compared them.

There were no surprises. The OBIEE installer doesn't copy a single byte to the Windows folder (which is good news, I think) but it creates a few registry keys (as expected). Anyone who has ever tried to play around with Windows ODBC won't be surprised at all.

I deleted some keys in order to make this screenshots more clear and readable.

So now I know the names of the DLLs and their locations. A good point to start. A small free utility, Dependency Walker, helped me find the set of DLLs I need. This tool is very easy to use and very useful for finding a missing DLL: just give it a DLL to explore and it will show all DLLs used by it and mark any that are missing.

Dependency walker

And a bit of educated guessing helped to find one more folder, called locale, which stores all the language files.

So, as a result, we got a tiny ODBC-only OBIEE client. It's very small: with only the English locale it is about 20 megabytes and consists of 75 files. Compare that to the 31 thousand files of the full client.

So that was a short story of looking and finding things. Now goes some practical result.

Folder Structure

It seems that some paths are hard-coded, so we can't put the DLLs in any folder we like. It should be something\bi\bifoundation\server (C:\BI-client\bi\bifoundation\server, for example).

The List of DLLs

I tried to find the minimum viable set of libraries. The list has only 25 libraries, but it takes too much space on the screen, so I put it into a collapsible list to keep this post from getting too long. These libraries should go under the bin folder, C:\BI-client\bi\bifoundation\server\bin for example.

The list of ODBC DLLs

  • BiEndPointManagerCIntf64.dll
  • mfc100u.dll
  • msvcp100.dll
  • msvcr100.dll
  • nqcryptography64.dll
  • nqerrormsgcompiler64.dll
  • nqmasutility64.dll
  • nqperf64.dll
  • nqportable64.dll
  • nqsclusterapi64.dll
  • nqsclusterclient64.dll
  • nqsclusterutility64.dll
  • nqsodbc64.dll
  • nqsodbcdriverconndlg64.dll
  • nqssetup.dll
  • NqsSetupENU.dll
  • nqstcpclusterclient64.dll
  • NQSTLU64.4.5.dll
  • nqutilityclient64.dll
  • nqutilitycomm64.dll
  • nqutilitygeneric64.dll
  • nqutilitysslall64.dll
  • perfapi64.dll
  • samemoryallocator864.dll
  • xerces-c_2_8.dll

Or you may take the full bin folder. Its size is about 240 megabytes. You won't win the smallest ODBC client contest but will save a few minutes of your time.


The second folder you need is locale, located next to bin; C:\BI-client\bi\bifoundation\server\locale, for example. Again, if you don't insist on the smallest client in the world, you may take the whole locale folder. But there are 29 locales, and I think most of the time you will need only one or two of them. Every locale is about 1.5 megabytes and has 48 files. A good place for some optimisation, in my opinion.

Registry Key

And the last part is the registry keys. I need to tell Windows the driver's name, its path, and so on. If this were a usual part of the registry, I'd have created a file anything.reg, put code like this into it, and imported it into the registry.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers]
"Oracle BI Server"="Installed"


But luckily there is a small console utility which makes the task easier and, better still, scriptable. Microsoft provides a tool called odbcconf.exe, located in the C:\Windows\System32 folder. Its syntax is not very obvious, but not too hard either. Generally it is:

odbcconf.exe /a {action "parameters"}

In this case the call is:

odbcconf.exe /a {installdriver "Oracle BI Server|Driver=C:\BI-client\bi\bifoundation\server\bin\nqsodbc64.dll|Setup=C:\BI-client\bi\bifoundation\server\bin\nqssetup.dll|APILevel=2|SQLLevel=2|ConnectionFunctions=YYN|DriverODBCVer=03.52|Regional=Yes"}

Here installdriver is the action and the long string is the set of parameters divided by |. It may look a bit complicated, but in my opinion it leaves less space for manual work and therefore less space for error. Just one note: don't forget to start the cmd window as administrator.

Visual C++ Redistributable

If your computer is fresh and clean, you need to install the Visual C++ 2010 redistributable package. It's included in Oracle's client, in the 'Oracle_Home\bi' folder. The file name is vcredist_x64.exe.


And as a result I got an ODBC driver I can use as I want. A not obvious but pleasant bonus is that I can give it any name I like: OBIEE version, path, whatever I want.

And I can create an ODBC DSN in the normal way using the ODBC Data Source Administrator. Just like always. It doesn't matter that this is a hand-made driver; it was properly registered and is absolutely legitimate.

So just a brief intermediate summary. We can take the full 2+ gigabyte OBIEE client. Or we can spend some time to:
1. Create a folder and put into it some files from the Oracle OBIEE client;
2. Create a few registry keys;
3. Install the Visual C++ 2010 redistributable.
And we will get a working OBIEE ODBC driver whose size is slightly above 20 megabytes.


So now we have a working ODBC connection; what can it give us?

Meet the tool most beloved by end users all around the world: Excel.

At this point of the story, some may tell me "Hey, stop right there! Where did you get that SQL? And why is it so strange? That's not ANSI SQL". The evil part of me wants to simply give you a link to the official documentation, the Logical SQL Reference, and run away. But the kind part insists that, even though documentation has done no harm to anyone, that's not enough.

In a nutshell, this is the SQL that Presentation Services sends to the BI Server. When anyone builds an analysis or runs a dashboard, Presentation Services creates and sends logical queries to the BI Server. And we can use them for our own needs. Create an analysis as usual (or open an existing one), navigate to the Advanced tab, and then copy and paste the analysis' Logical SQL. You may want to refine it: remove some columns, change aliases, or add a clause or two from the evil part's documentation. But for the first step just take it and use it. That simple.

And of course, we can query our BI server using any ODBC query tool.

And all these queries go directly to the BI Server. This method bypasses Presentation Services, so OBIEE doesn't build complex HTML that we would have to parse later. We use a fast and efficient way instead.

Categories: BI & Warehousing

Partner Webcast – Announcing Oracle CASB Cloud Service, an API-based Cloud Access Security Broker

On September 18, 2016, Oracle announced that it signed an agreement to acquire Palerra, extending Oracle Identity Cloud Service with an innovative Cloud Access Security Broker (CASB). The transaction...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to change the DBID after restore the database on other server

Tom Kyte - Thu, 2017-02-09 02:26
Hi Tom, I am in the process to migrating the databases from existing server to the new server. I did restore my controlfile using RMAN catalog and after that I logged in locally "rman target /" and restored the database. My question is how can...
Categories: DBA Blogs

How Sorts (Disk) in query works

Tom Kyte - Thu, 2017-02-09 02:26
I have two queries - 1. I see sorts(Disk) in the autotrace output for a query.What is actually sort(Disk) and how it works. Is the rowsets are brought in memory in chunks, sorted and written back to temp tablespace. After which the chunks are merg...
Categories: DBA Blogs

When to replace the hash-cluster for an in-memory table

Tom Kyte - Thu, 2017-02-09 02:26
At the moment we have a database with a dual timeline: transaction timeline and validity timeline. All the valid records in the current transaction timeline are duplicated in a hash cluster for performance. Now Oracle 12 is coming along with it's in...
Categories: DBA Blogs

Does Dataguard apply ddl on user tables

Tom Kyte - Thu, 2017-02-09 02:26
Hello, If I alter an user table (add a new column) in a dataguard environment from the active node, will that be reflected on the mounted DB of the passive node? Regards, Daniel
Categories: DBA Blogs

EBS Technology Codelevel Checker Updated (Feb 2017) for EBS 12.2

Steven Chan - Thu, 2017-02-09 02:04

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify missing application tier or database bugfixes that need to be applied to your E-Business Suite Release 12.2 system. ETCC maps missing bugfixes to the default corresponding patches and displays them in a patch recommendation summary.

What's New

ETCC was recently updated to include bug fixes and patching combinations for the following:

Recommended Versions
  • January 2017 Oracle WebLogic Server PSU
  • Oracle Fusion Middleware
  • January 2017 Database PSU and Proactive Bundle Patch
  • October 2016 Database PSU and Engineered Systems Patch
Minimum Versions
  • October 2016 Oracle WebLogic Server PSU
  • Oracle Fusion Middleware
  • October 2016 Database PSU and Proactive Bundle Patch
  • July 2016 Database PSU and Engineered Systems Patch

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.

Related Articles


Categories: APPS Blogs

Steps to Recreate Central Inventory in Real Applications Clusters (Doc ID 413939.1)

Michael Dinh - Wed, 2017-02-08 21:13



$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

Oracle Interim Patch Installer version
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version    :
OUI version       :
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

This happened due to an error during install: an oraInventory mismatch.

$ cat /etc/oraInst.loc

$ cd /u01/software/database
$ export DISTRIB=`pwd`
$ ./runInstaller -silent -showProgress -waitforcompletion -force -ignorePrereq -responseFile $DISTRIB/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=/u01/app/oracle/oraInventory \

Back up oraInventory on both nodes, then attach the homes:

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u02/app/12.1.0/grid" ORACLE_HOME_NAME="OraGI12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u01/app/oracle/product/12.1.0/db_1" ORACLE_HOME_NAME="OraDB12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

SQL Server AlwaysOn – Distributed availability groups, read-only with round-robin capabilities

Yann Neuhaus - Wed, 2017-02-08 14:04


This blog post comes from a very interesting discussion with one of my friends about the read-only capabilities of secondary replicas in the context of distributed availability groups. Initially, distributed availability groups are designed to address D/R scenarios and some types of migration scenario as well. I already discussed one possible migration scenario here. However, we may also take advantage of secondary replicas as read-only copies in reporting scenarios (obviously after assessing whether the cost is worth it). In addition, if you plan to scale out with secondary replicas (even with asynchronous replication), you may consider using distributed availability groups and the cascading feature, which reduces network bandwidth overhead, especially if your cross-datacenter link is not designed to handle a heavy replication workload. Considering this last scenario, my friend's (Sarah Bessard) motivation was to assess distributed availability groups as a replacement for SQL Server replication.

As a reminder, SQL Server 2016 provides a new round-robin feature for secondary read-only replicas, and extending it to include additional replicas from another availability group seems like a good idea. But here things become more complicated: transparent redirection and round-robin sound promising, so let's see whether they still work when a distributed availability group comes into play.

Let's run a demo in my lab environment. For the moment there are two separate availability groups, each running on top of its own Windows Failover Cluster: respectively AdvGrp and AdvGrpDR.


blog 116 - 01 - distributed ag - archi

At this stage, we will focus only on my second availability group, AdvGrpDR. First, I configured read-only routes for my 4 replicas; here is the result:

SELECT
	g.name AS group_name
FROM
	sys.availability_replicas AS r
INNER JOIN
	sys.availability_groups AS g ON r.group_id = g.group_id
WHERE
	g.name = N'AdvGrpDR';

SELECT
	r.replica_server_name AS primary_replica,
	r2.replica_server_name AS read_only_secondary_replica,
	rl.routing_priority,
	g.name AS availability_group
FROM
	sys.availability_read_only_routing_lists AS rl
INNER JOIN
	sys.availability_replicas AS r ON rl.replica_id = r.replica_id
INNER JOIN
	sys.availability_replicas AS r2 ON rl.read_only_replica_id = r2.replica_id
INNER JOIN
	sys.availability_groups AS g ON g.group_id = r.group_id
WHERE
	g.name = N'AdvGrpDR'
ORDER BY
	primary_replica, availability_group, routing_priority;


blog 116 - 1 - distributed ag ro - RO config

Read-only routing URLs and preferred replicas are defined for all the replicas. I defined a round-robin configuration for replicas WIN20161SQL16\SQL16 to WIN20163SQL16\SQL16, whereas the last one is configured with a preference order (WIN20163SQL16\SQL16 first, and WIN20164SQL16\SQL16 if the previous one is not available).
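The statements that set up these routes are not reproduced in this copy of the post, but a routing list of the shape described above would look roughly like this. The grouping is my guess; only the availability group and replica names come from the post:

```sql
-- Hypothetical sketch: when WIN20161SQL16\SQL16 holds the primary role,
-- spread read-only connections round-robin across the other replicas.
-- The inner parentheses define one round-robin group.
ALTER AVAILABILITY GROUP [AdvGrpDR]
MODIFY REPLICA ON N'WIN20161SQL16\SQL16' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST =
    (('WIN20162SQL16\SQL16', 'WIN20163SQL16\SQL16', 'WIN20164SQL16\SQL16'))
));
```

A list without the nested parentheses, by contrast, is a plain preference order: the listener always tries the replicas left to right.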

After configuring the read-only routes, I decided to check whether round-robin comes into play before implementing my distributed availability group. Before running the test I also created an extended event session that captures the read-only route events, as follows:

CREATE EVENT SESSION [alwayson_ro] ON SERVER
ADD EVENT sqlserver.hadr_evaluate_readonly_routing_info,
ADD EVENT sqlserver.read_only_route_complete,
ADD EVENT sqlserver.read_only_route_fail
ADD TARGET package0.event_file ( SET filename=N'alwayson_ro' ),
ADD TARGET package0.ring_buffer;


My test included a basic command based on SQLCMD and –K READONLY special parameter as follows:

blog 116 - 2 - distributed ag ro - RO test

According to the above output, the round-robin configuration behaves as expected. We can also double-check by looking at the extended event output:

blog 116 - 3 - distributed ag ro - xe ro output

But now let’s perform the same test after implementing my distributed availability group. The script I used was as follows:

USE [master];
-- Primary cluster: create the distributed availability group.
-- Most of this script was lost in this copy of the post; only the
-- listener URLs survived, so the distributed AG name is a placeholder
-- and the remaining per-AG options are elided.
CREATE AVAILABILITY GROUP [DistAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
   'AdvGrp' WITH (
      LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022'
      -- remaining options elided
   ),
   'AdvGrpDR' WITH (
      LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022'
      -- remaining options elided
   );

USE [master];
-- Secondary cluster: join the distributed availability group
ALTER AVAILABILITY GROUP [DistAG]
   JOIN
   AVAILABILITY GROUP ON
   'AdvGrp' WITH (
      LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022'
      -- remaining options elided
   ),
   'AdvGrpDR' WITH (
      LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022'
      -- remaining options elided
   );


blog 116 - 0 - distributed ag ro - archi

Performing the previous test after applying the new configuration gives me a different result this time.

blog 116 - 4 - distributed ag ro - RO test 2

It seems that round-robin is no longer performed, although I used the same read-only route configuration. In the same way, a look at the extended event output gave no results. The listener's transparent redirection and round-robin features did not come into play this time.

Let's perform one last test, moving the AdvGrpDR availability group to another replica, to confirm that transparent redirection does not work as we might expect.




blog 116 - 5 - distributed ag ro - RO test 3

Same output as previously. The AdvGrpDR availability group has moved from the WIN20163SQL16\SQL16 replica to the WIN20164SQL16\SQL16 replica, and the connection reached the newly defined primary of the second availability group (which has the secondary role from the distributed availability group's perspective), meaning we are not redirected to one of the defined secondaries.

At this stage, it seems that we would have to implement our own load balancing component, whatever it is, in order to benefit from all the secondary replicas and read-only features of the second availability group. Maybe this is a feature Microsoft will consider as an improvement in the future.

Happy high availability moment!









Cet article SQL Server AlwaysOn – Distributed availability groups, read-only with round-robin capabilities est apparu en premier sur Blog dbi services.

Oracle Public Cloud: 2 OCPU for 1 proc. license

Yann Neuhaus - Wed, 2017-02-08 11:40

I've blogged recently about the Oracle Core Factor in the clouds. In order to optimize your Oracle licenses, you need to choose the instance type that runs faster on fewer cores. In a previous blog post, I tried to show how complex this can be, comparing the same workload (cached SLOB) on different instances of the same cloud provider (Amazon). I did that on instances with 2 virtual cores, covered by 2 Oracle Database processor licenses. Here I'm doing the same on the Oracle Public Cloud where, with the same number of licenses, you can run on 4 hyper-threaded cores.

Trial IaaS

I'm running with the 30-day trial subscription. I did several tests because at first the results were not consistent. In some runs it seemed that I was not running at full CPU. What I know is that CPU resources are guaranteed on the Oracle Public Cloud, but maybe that's not the case on the trial, or I was working during a maintenance window, or…

Well, I finally got consistent results, and I ran the following test on the IaaS (Cloud Compute Service) to do something similar to what I did on AWS, with the Bring Your Own License idea.

In the Oracle Public Cloud, you can run 2 cores per Oracle processor license. This means that with 2 processor licenses I can run an instance shape with 4 OCPUs. This shape is called 'OC5'. Here it is:

[oracle@a9f97f ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2294.938
BogoMIPS: 4589.87
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
[oracle@a9f97f ~]$ cat /proc/cpuinfo | tail -26
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
stepping : 2
microcode : 0x36
cpu MHz : 2294.938
cache size : 46080 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 8
apicid : 14
initial apicid : 14
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4589.87
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

And here are the results:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.2 0.00 5.48
DB CPU(s): 1.0 30.1 0.00 5.47
Logical read (blocks): 884,286.7 26,660,977.4
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 25.0 0.00 9.53
DB CPU(s): 2.0 25.0 0.00 9.53
Logical read (blocks): 1,598,987.2 20,034,377.0
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 9.29
DB CPU(s): 3.0 40.9 0.00 9.28
Logical read (blocks): 2,195,570.8 29,999,381.1
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 14.46
DB CPU(s): 4.0 42.8 0.00 14.45
Logical read (blocks): 2,873,420.5 30,846,373.9
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 51.7 0.00 15.16
DB CPU(s): 5.0 51.7 0.00 15.15
Logical read (blocks): 3,520,059.0 36,487,232.0
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 81.8 0.00 17.15
DB CPU(s): 6.0 81.8 0.00 17.14
Logical read (blocks): 4,155,985.6 56,787,765.6
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 65.6 0.00 17.65
DB CPU(s): 7.0 65.5 0.00 17.62
Logical read (blocks): 4,638,929.5 43,572,740.0
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 92.3 0.00 19.20
DB CPU(s): 8.0 92.1 0.00 19.16
Logical read (blocks): 5,153,440.6 59,631,848.6

This is really good. This is x2.8 more LIOPS than the maximum I had on AWS EC2. A x2 factor is expected because I have x2 vCPUs here. But the CPU is also faster. So, two conclusions here:

  • There is no technical reason behind the rejection of the core factor on Amazon EC2. It is only a marketing decision.
  • For sure, for the same Oracle Database cost, Oracle Cloud outperforms Amazon EC2 because it is cheaper (not to mention the discounts you will get if you go to Oracle Cloud)
So what?

This is not a benchmark. The LIOPS may depend a lot on your application's behaviour, and CPU is not the only resource to take care of. But for sure, the Oracle Public Cloud IaaS is fast and costs less when used for Oracle products, because of the rules on the core factor. But those rules are for information only; check your contract for the legal details.
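The Load Profile excerpts above can be reduced to a scaling curve. As a quick back-of-the-envelope (the numbers are copied verbatim from the AWR load profiles; nothing here comes from outside this post), here is how logical reads per second scale from 1 to 8 concurrent sessions on the 4-OCPU, 8-thread OC5 shape:

```python
# Logical read (blocks) per second from the AWR load profiles above,
# one value per run, for 1 to 8 concurrent sessions
liops = [884_286.7, 1_598_987.2, 2_195_570.8, 2_873_420.5,
         3_520_059.0, 4_155_985.6, 4_638_929.5, 5_153_440.6]

# Ideal scaling on 8 threads would be 8x the single-session rate;
# hyper-threading on 4 physical cores gives noticeably less
for n, v in enumerate(liops, start=1):
    print(f"{n} session(s): {v:12,.1f} LIO/s ({v / liops[0]:.2f}x)")
```

The 8-session run does roughly 5.8x the single-session throughput rather than 8x, which is the expected hyper-threading effect once all 4 physical cores are busy.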


Cet article Oracle Public Cloud: 2 OCPU for 1 proc. license est apparu en premier sur Blog dbi services.

How to Upgrade an Oracle-based Application without Downtime

Gerger Consulting - Wed, 2017-02-08 11:32
One of the most common reasons IT departments avoid database development is the belief that an application upgrade in the database causes downtime. However, nothing could be further from the truth. On the contrary, Oracle Database provides one of the most bulletproof ways to upgrade an application without any downtime: Edition-Based Redefinition (EBR).

EBR is a powerful and fascinating feature of Oracle (added in version 11.2) that enables application upgrades with zero downtime, while the application is actively used and operational.
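For a flavor of how EBR works, here is a minimal sketch. The edition and procedure names are made up, not code from the webinar:

```sql
-- Create a new edition to hold the upgraded code
CREATE EDITION rel_v2 AS CHILD OF ora$base;

-- Compile the new version of a procedure in the new edition only
ALTER SESSION SET EDITION = rel_v2;
CREATE OR REPLACE PROCEDURE my_proc AS
BEGIN
  NULL;  -- new implementation goes here
END;
/

-- Sessions still on ora$base keep running the old code; once the new
-- code is verified, make the new edition the database default
ALTER DATABASE DEFAULT EDITION = rel_v2;
```

The key point is that both versions of the code exist in the database at the same time, in different editions, so there is no window in which the application is broken.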

Attend the free webinar by Oren Nakdimon on February 16th to learn how to use EBR, see many live examples, and get tips from real-life experience in a production site using EBR extensively.

The webinar is free but space is limited.

Sign up for the free webinar.

Categories: Development

February 15: Hillside Family of Agencies―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2017-02-08 10:02

Join us for an Oracle HCM Cloud Customer call on Wednesday, February 15, 2017, at 9:00 a.m. PDT.

Carolyn Kenny, Director of Information Services at Hillside Family of Agencies, will discuss why Hillside decided to move its Oracle E-Business Suite HR and ERP on premises to the Oracle HCM and ERP Cloud.

Hillside Family of Agencies is using the following Oracle Cloud products: Core HR, Payroll, Benefits and Absence Management, and Oracle ERP Cloud. The company is implementing in phases. Phase 1 included Core HR, HR Analytics, Recruiting, Social Sourcing, and Benefits. Phase 2 includes Performance and Goal Management, Career and Succession Planning, and Learning and Development.

Register now to attend the live forum and learn more about Hillside Family of Agencies’ experience with Oracle HCM Cloud.

IntraSee: All Aboard the Cloud Train

WebCenter Team - Wed, 2017-02-08 09:27

Authored by: Paul Isherwood. CEO & Co-Founder, IntraSee 

As one era ends, another begins. As client-server eventually succumbed to the ascendency of the Internet and web-based systems, so too will on-premise solutions fade into history as the Cloud becomes the new normal. For many organizations there will be concern about making this transition. The comfort that people feel for what is known is hard to let go, especially when what is new does not have a clearly defined path to adoption.

At IntraSee we believe in clarity of thought, which means providing clear direction on what can be a confusing subject. And in that spirit, we have identified a number of offerings that will help you painlessly get to your final destination.  We’ve grouped these into use-cases we believe are highly applicable for many organizations currently on the PeopleSoft platform.

  • Use-Case 1: I am using the PeopleSoft Interaction Hub as an HR or Campus portal, how do I provide the same kind of functionality in the Oracle Cloud?
  • Use-Case 2: I am using the PeopleSoft Interaction Hub to house all my content, policies and procedures. I have thousands of HTML objects and images, plus thousands of pdf files and Word docs. How do I move them into the Oracle Cloud so they complement HCM or Student Cloud? And how do I manage them once they are there?
  • Use-Case 3: I’ve created a number of bolt-ons in PeopleTools that I know won’t be available in the HCM Cloud. Is there some way I can rebuild them using Oracle’s Cloud tools? It’s not an option for us just to drop them. 
Read more about these Use-Cases in depth in Paul's original post here.

Outsourcing Inc. Standardizes on Oracle Identity Cloud Service

Oracle Press Releases - Wed, 2017-02-08 07:00
Press Release
Outsourcing Inc. Standardizes on Oracle Identity Cloud Service Selects solution that enhances security while not compromising on ease-of-use

Redwood Shores, Calif.—Feb 8, 2017

Oracle today announced that Outsourcing Inc., a leading provider of outsourcing services for manufacturing companies, selected Oracle Identity Cloud Service, a next-generation security and identity management cloud platform designed to be an integral part of the enterprise security fabric.

Outsourcing is experiencing rapid growth as it addresses the changing needs of its customer base. Its sales for the period ending December 31, 2015 reached a record high of 80.8 billion yen, and it has grown by 36 percent year over year. Currently, the company focuses on key industries such as IT, construction, and healthcare. It has invested 43 billion yen in mergers and acquisitions and has 31 subsidiaries in Japan and 54 subsidiaries worldwide.

In order to serve its expanding global workforce, Outsourcing required a technology solution that would provide best-in-class security for employees without compromising user experience. Additionally, the company needed a solution that could work across the multiple cloud services and on-premises applications used by the group's companies in Japan and overseas, and one that would integrate with Oracle Documents Cloud Service so it could promptly operate with Oracle's SaaS applications, applications built on the Oracle Cloud Platform, and third-party cloud services.

Oracle Identity Cloud will provide Outsourcing's employees with single sign-on authentication that will allow them to access documents via the Oracle Documents Cloud tool. This will improve the user experience, streamline operational management, and enhance security. It will also build the technical foundation for user ID management and authentication in the cloud. Outsourcing also plans to develop collaboration with custom applications running on Oracle's IaaS and PaaS, and to establish a common ID and access management platform within group companies while sequentially deploying it to Oracle's SaaS applications and third-party services.

"Outsourcing needed to establish an agile, secure system environment because of its expanding business through mergers & acquisitions (M&A), diversifying target industries, and growing domestic and overseas networks,” said Kinji Manabe, General Manager in Business Management Department, Outsourcing, Inc. “Oracle has a proven record of providing the best-in-class management solutions, and we are convinced that the Oracle Identity Cloud will be the foundation for the future growth of Outsourcing."

Contact Info
Sarah Fraser
Norihito Yachita
Oracle Japan
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


This document is for informational purposes only and may not be incorporated into a contract or agreement. 

Talk to a Press Contact

Sarah Fraser

  • +1.650.743.0660

Norihito Yachita

  • +81.3.6834.4835

runcluvfy.sh -pre crsinst NTP failed PRVF-07590 PRVG-01017

Michael Dinh - Wed, 2017-02-08 06:56

12c RAC, Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?            
  ------------------------------------  ------------------------
  node02                                yes                     
  node01                                yes                     
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node02                                no                      
  node01                                yes                     
PRVF-7590 : "ntpd" is not running on node "node02"
PRVG-1017 : NTP configuration file is present on nodes "node02" on which NTP daemon or service was not running
Result: Clock synchronization check using Network Time Protocol(NTP) failed

NTP was indeed running on both nodes. The issue was that /var/run/ntpd.pid did not exist on the failed node, because ntpd had been started there without the configured options.


On the healthy node, ntpd was started with the options from /etc/sysconfig/ntpd, and the pid file exists:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 20:37:18 CST; 3 days ago
 Main PID: 22517 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

# ll /var/run/ntpd.*
-rw-r--r-- 1 root root 5 Feb  3 20:37 /var/run/ntpd.pid


On the failed node, the same OPTIONS line is present, but ntpd was started without it, so no pid file was created:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service           
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 18:10:23 CST; 3 days ago
 Main PID: 22403 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -g           

# ll /var/run/ntpd.*
ls: cannot access /var/run/ntpd.*: No such file or directory


The fix: restart ntpd on the failed node so that it picks up the configured options and re-creates /var/run/ntpd.pid, then re-run runcluvfy.sh.

