Feed aggregator

TLS 1.2 Certified with E-Business Suite 12.1

Steven Chan - 6 hours 30 min ago

I'm pleased to announce that Oracle E-Business Suite 12.1 inbound, outbound, and loopback connections are now certified with TLS 1.2, 1.1, and 1.0. If you have not already migrated from SSL to TLS, you should begin planning the migration for your environment. 

For more information on patching and configuration requirements when migrating to TLS 1.2 from TLS 1.0 or SSL, or when enabling TLS for the first time, refer to My Oracle Support Knowledge Document 376700.1.

Migrating to TLS 1.2 per the steps and configuration outlined in MOS Note 376700.1 will do the following:

  • Address recent security vulnerabilities (e.g. POODLE, FREAK, Logjam, RC4NOMORE)
  • Migrate to new OpenSSL libraries which will change the method by which you generate and import your certificate

Configuration Options

  • Configure TLS 1.2 with Backward Compatibility

    The default Oracle E-Business Suite 12.1 configuration allows for the handshake between the client and server to negotiate and use the highest version of TLS (either 1.2, 1.1, or 1.0) supported by both parties.

    For example, the outbound connection used by iProcurement is configured by default for TLS 1.2, 1.1, and 1.0. If a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.2 and a common cipher suite is found, then TLS 1.2 will be used. If the external site supports only TLS 1.1 and a common cipher suite is found, the handshake negotiation will resolve to TLS 1.1. (A quick way to smoke-test an outbound HTTPS connection is sketched after this list.)

  • Configure TLS 1.2 Only (Optional Configuration)

You may optionally configure Oracle E-Business Suite to use TLS 1.2 only for all inbound, outbound and loopback connections.

Warning: If you restrict Oracle E-Business Suite 12.1 to use only TLS 1.2, this configuration could result in the inability to connect with other sites or browsers that do not support TLS 1.2.

  • Disable the HTTP Port (Optional Configuration)

You may optionally configure the Oracle HTTP Server (OHS) delivered with the Oracle E-Business Suite application technology stack to disable the HTTP port and use the HTTPS port only.
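
As a quick smoke test of outbound HTTPS after the migration, you can probe an external site from the database. This is a minimal sketch only - UTL_HTTP is one common vehicle for outbound calls from the E-Business Suite database, the wallet path and URL are placeholders, and it assumes your wallet already holds the necessary trusted certificates:

declare
    l_response varchar2(2000);
begin
    -- point UTL_HTTP at the wallet holding your trusted certificates
    utl_http.set_wallet('file:/u01/app/oracle/wallet', null);
    -- request a page from a TLS-enabled site; an ORA-29024 or ORA-28860
    -- here usually points at certificate or protocol-version problems
    l_response := utl_http.request('https://example.com/');
    dbms_output.put_line(substr(l_response, 1, 200));
end;
/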

Where can I learn more?
There are several guides and documents that cover Oracle E-Business Suite 12.1 secure configuration and encryption. You can learn more by reading the following:

SSL or TLS 1.0 Reference Note

If you are using SSL or TLS 1.0 and need to review your current configuration or renew your certificate, you may refer to the following:

Categories: APPS Blogs

Positive Pay Implementation – Step by Step Guide

OracleApps Epicenter - 6 hours 35 min ago
Now that you know what Positive Pay is, you need to find out how to start using Positive Pay. First, we need to start by saying that EVERY bank handles Positive Pay differently. The steps/outline presented here are just a representation of what the most common implementation procedure could look like. 1. Contact your bank […]
Categories: APPS Blogs

Lost Concatenation

Jonathan Lewis - 12 hours 15 min ago

This note models one feature of a problem that came up at a client site recently from a system running 12.1.0.2 – a possible bug in the way the optimizer handles a multi-column in-list that can lead to extremely bad cardinality estimates.

The original query was a simple three table join which produced a bad plan with extremely bad cardinality estimates; there was, however, a type-mismatch in one of the predicates (of the form “varchar_col = numeric”), and when this design flaw was addressed the plan changed dramatically and produced good cardinality estimates. The analysis of the plan, 10053 trace, and 10046 trace files done in-house suggested that the problem might relate in some way to an error in the handling of SQL Plan Directives to estimate cardinalities.

This was one of my “solve it in a couple of hours over the internet” assignments and I’d been sent a sample of the original query with the 10046 and 10053 trace files, and a modified version of the query that bypassed the problem, again including the 10046 and 10053 trace files, with a request to explain the problem and produce a simple test case to pass to Oracle support.

The first thing I noticed was that there was something very strange about the execution plan. Here's the query and plan from my simplified model, showing the same anomaly:


select  /*+ no_expand */
        count(*)
from    t1, t2
where
        t2.shipment_order_id = t1.order_id
and     (t1.id, t2.v1) in ( (5000, 98), (5000, 99))
;

-------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |       |       |       |   331 (100)|          |
|   1 |  SORT AGGREGATE                       |       |     1 |    19 |       |            |          |
|*  2 |   HASH JOIN                           |       |     1 |    19 |  2056K|   331   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                  | T2    |   100K|   878K|       |   219   (3)| 00:00:01 |
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   100K|   976K|       |     2   (0)| 00:00:01 |
|   5 |     BITMAP CONVERSION TO ROWIDS       |       |       |       |       |            |          |
|   6 |      BITMAP OR                        |       |       |       |       |            |          |
|   7 |       BITMAP CONVERSION FROM ROWIDS   |       |       |       |       |            |          |
|*  8 |        INDEX RANGE SCAN               | T1_PK |       |       |       |     1   (0)| 00:00:01 |
|   9 |       BITMAP CONVERSION FROM ROWIDS   |       |       |       |       |            |          |
|* 10 |        INDEX RANGE SCAN               | T1_PK |       |       |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")
       filter((
                  (TO_NUMBER("T2"."V1")=98 AND "T1"."ID"=5000) 
               OR (TO_NUMBER("T2"."V1")=99 AND "T1"."ID"=5000)
       ))
   8 - access("T1"."ID"=5000)
  10 - access("T1"."ID"=5000)

Before going on I need to remind you that this is modelling a production problem. I had to use a hint to block a transformation that the optimizer wanted to do with my data set and statistics, I’ve got a deliberate type-mismatch in the data definitions, and there’s a simple rewrite of the SQL that would ensure that Oracle does something completely different.
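
To illustrate the kind of rewrite I mean (a sketch only, not necessarily the rewrite used at the client site): both entries in the in-list share the same t1.id value, so the pair-wise in-list can be split into simple single-column predicates (the no_expand hint is no longer relevant, so I’ve dropped it):

select  count(*)
from    t1, t2
where
        t2.shipment_order_id = t1.order_id
and     t1.id = 5000
and     t2.v1 in (98, 99)
;

Note that the implicit TO_NUMBER() on v1 is still there; the point is simply that the multi-column in-list has gone.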

The thing that caught my eye was the use of the bitmap transformation (operations 5,7,9) using exactly the same index range scan twice (operations 8,10). Furthermore, though not visible in the plan, the index in question was (as the name suggests) the primary key index on the table and it was a single column index – and “primary key = constant” should produce an “index unique scan” not a range scan.

Once you’ve added in the fact that operations 8 and 10 are the same “primary key = constant” predicates, you can also pick up on the fact that the cardinality calculation for the table access to table t1 can’t possibly produce more than one row – but it’s reporting a cardinality estimate of 100K rows (which happens to be the number of rows in the table).

As a final point, you can see that there are no “Notes” about Dynamic Statistics or SQL Directives – this particular issue is not caused by anything to do with 12c sampling. In fact, having created the model, I ran it on 11.2.0.4 and got the same strange bitmap conversion and cardinality estimate. In the case of the client, the first pass the optimizer took went through exactly the same sort of process and produced a plan which was (probably) appropriate for a query where the driving table was going to produce (in their case) an estimated 4 million rows – but not appropriate for the actual 1 row that should have been identified.

In my example, if I allowed concatenation (i.e. removed the no_expand hint) I got the following plan:


------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |       |       |       |     8 (100)|          |
|   1 |  SORT AGGREGATE                        |       |     1 |    19 |            |          |
|   2 |   CONCATENATION                        |       |       |       |            |          |
|   3 |    NESTED LOOPS                        |       |     1 |    19 |     4   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID        | T1    |     1 |    10 |     2   (0)| 00:00:01 |
|*  5 |      INDEX UNIQUE SCAN                 | T1_PK |     1 |       |     1   (0)| 00:00:01 |
|*  6 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |     9 |     2   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
|   8 |    NESTED LOOPS                        |       |     1 |    19 |     4   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID        | T1    |     1 |    10 |     2   (0)| 00:00:01 |
|* 10 |      INDEX UNIQUE SCAN                 | T1_PK |     1 |       |     1   (0)| 00:00:01 |
|* 11 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |     9 |     2   (0)| 00:00:01 |
|* 12 |      INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1"."ID"=5000)
   6 - filter(TO_NUMBER("T2"."V1")=99)
   7 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")
  10 - access("T1"."ID"=5000)
  11 - filter((TO_NUMBER("T2"."V1")=98 AND (LNNVL(TO_NUMBER("T2"."V1")=99) OR
              LNNVL("T1"."ID"=5000))))
  12 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")

This is a much more appropriate plan – and similar to the type of plan the client saw when they eliminated the type-mismatch problem (I got a completely different plan when I used character values ’98’ and ’99’ in the in-list or when I used a numeric column with numeric literals).
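
Spelling that out, here is a sketch of the character-literal variant implied above; the quoted values match the VARCHAR2 declaration of v1, so the implicit TO_NUMBER() disappears from the predicates:

select  /*+ no_expand */
        count(*)
from    t1, t2
where
        t2.shipment_order_id = t1.order_id
and     (t1.id, t2.v1) in ( (5000, '98'), (5000, '99'))
;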

Examining my 10053 trace file I found the following:

  • In the BASE STATISTICAL INFORMATION, the optimizer had picked up column statistics about the order_id column, but not about the id column in the in-list – this explained why the cardinality estimate was 100K: Oracle had “lost” the predicate.
  • In the “SINGLE TABLE ACCESS PATH”, the optimizer had acquired the statistics about the id column and calculated the cost of using the t1_pk index to access the table for a single key (AllEqUnique), then calculated the cost of doing a bitmap conversion twice (remember we have two entries in the in-list – it looks like the optimizer has “rediscovered” the predicate). But it had still kept the table cardinality of 100K.

After coming up with a bad plan thanks to this basic cardinality error, the 10053 trace file for the client’s query then went on to consider or-expansion (concatenation). Looking at this part of their trace file I could see that the BASE STATISTICAL INFORMATION now included the columns relevant to the in-list and the SINGLE TABLE ACCESS PATH cardinalities were suitable. Moreover when we got to the GENERAL PLANS the join to the second table in the join order showed a very sensible cost and cardinality – unfortunately, having been sensible up to that point, the optimizer then decided that an SQL Plan Directive should be used to generate a dynamic sampling query to check the join cardinality and the generated query again “lost” the in-list predicate, resulting in a “corrected” cardinality estimate of 6M instead of a correct cardinality estimate of 1. As usual, this massive over-estimate resulted in Oracle picking the wrong join method with a huge cost for the final join in the client’s query – so the optimizer discarded the or-expansion transformation and ran with the bad bitmap/hash join plan.

Bottom line for the client – we may have seen the same “lose the predicate” bug appearing in two different ways, or we may have seen two different “lose the predicate” bugs – either way a massive over-estimate due to “lost” predicates during cardinality calculations resulted in Oracle picking a very bad plan.

Footnote:

If you want to do further testing on the model, here’s the code to generate the data:


create table t1
nologging
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4
)
select
        rownum                                  id,
        rownum                                  order_id,
        rpad('x',100)                           padding
from
        generator, generator
where
        rownum <= 1e5
;

execute dbms_stats.gather_table_stats(user,'t1')

alter table t1 modify order_id not null;
alter table t1 add constraint t1_pk primary key(id);


create table t2
nologging
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4
)
select
        rownum                                  shipment_order_id,
        mod(rownum-1,1000)                      n1,
        cast(mod(rownum-1,1000) as varchar2(6)) v1,
        rpad('x',100)                           padding
from
        generator, generator
where
        rownum <= 1e5
;

execute dbms_stats.gather_table_stats(user,'t2')

alter table t2 modify shipment_order_id not null;
create index t2_i1 on t2(shipment_order_id);

The interesting question now is WHY does Oracle lose the predicate – unfortunately my model may be too simplistic to allow us to work that out, but it might be sufficient to make it easy for an Oracle developer to see what’s going on and how best to address it. There is one bug on MoS (23343961) that might be related in some way, but I wasn’t convinced that the description was really close enough.

Update

This issue is now recorded on MoS as Bug 24350407: WRONG CARDINALITY ESTIMATION IN PRESENCE OF BITMAP OR

Links for 2016-07-25 [del.icio.us]

Categories: DBA Blogs

Regarding Listener

Tom Kyte - Mon, 2016-07-25 16:06
Hello sir, Currently i am working in Clover infotech as a ORACLE DBA...actually i wanted to know that is there any hard limit for number of Listeners in oracle . Means how many listeners we can configure in single database and also how many con...
Categories: DBA Blogs

Recover Catalog Manager

Tom Kyte - Mon, 2016-07-25 16:06
I understand that Recover Catalog Manager has the metadata of the registered database, if the registered database has been recovered by doing resetlogs (once we reset the logs we cannot use the old backups using the control file of the database as...
Categories: DBA Blogs

Difference between procedure and stored procedure.

Tom Kyte - Mon, 2016-07-25 16:06
Hi Tom, I want to know what is the difference between procedure and stored procedure? Thanks, Deekshith.
Categories: DBA Blogs

Interval partitioning

Tom Kyte - Mon, 2016-07-25 16:06
Hi Tom, I am trying to create a partitioned table so that a date-wise partition is created on inserting a new row for release_date column. But please note that release_date column is having number data type (as per design) and people want to create...
Categories: DBA Blogs

How to use case statement inside xml table

Tom Kyte - Mon, 2016-07-25 16:06
Hi, I have a table like below create table emptable ( id number primary key, emps varchar2 ( 500 )) ; with this data in it insert into emptable values ( 1, '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><employee emp_no="...
Categories: DBA Blogs

Copy table data from one database to another with table partitions and references

Tom Kyte - Mon, 2016-07-25 16:06
Hi Guys, Need your help. I have 4-5 different tables in Oracle database(11g). I have to copy the data from all these tables from there(1st DB) to another database(2nd DB). Source database is having partitioned tables and data in one table could...
Categories: DBA Blogs

WebCenter SIG Webcast: WebCenter Content & Imaging

WebCenter Team - Mon, 2016-07-25 13:41
 Oracle WebCenter Content & Imaging Product Roadmap Review

The WebCenter SIG is hosting a webcast on July 26th at 12:00pm Central time.

Featured Speaker: Marcus Diaz, Sr. Principal Product Manager Oracle Corp.

This webcast will recap all the major enhancements in the WebCenter Content & Imaging product line for the 11.1.1.8, 11.1.1.9, 12.2.1.0 & 12.2.1.1 releases and will discuss possible candidate enhancements for the WebCenter Content 12.2.1.3 release.

Register for the webcast here!

EBS Release 12.x certified with Safari 9 on Apple OS X

Steven Chan - Mon, 2016-07-25 12:25
Oracle E-Business Suite Release 12.1.3 and 12.2.4 or higher are now certified with Apple Mac OS X with the following desktop configurations:
  • Mac OS X 10.11 ("El Capitan" version 10.11.5 or later 10.11 updates)
  • Mac OS X 10.10 ("Yosemite" version 10.10.2 or later 10.10 updates)
  • Safari version 9 (9.1.1 or later 9.x updates)
  • Oracle JRE 8 plugin (1.8.0_91 or higher)
Users should review all relevant information along with other specific patching requirements and known limitations posted in the Notes listed below.

More information on this can be found in the document:

Categories: APPS Blogs

Understanding Positive Pay

OracleApps Epicenter - Mon, 2016-07-25 10:36
Positive Pay can best be described as a fraud prevention program or tool. Technology has increasingly facilitated the ability of criminals to create counterfeit checks and false identification that can be used to engage in fraudulent check activities. As a result, companies must adopt practices to protect against check fraud. Positive pay can provide this […]
Categories: APPS Blogs

Getting started with Ansible – Installing OS packages, creating groups and users

Yann Neuhaus - Mon, 2016-07-25 08:13

It has been quite a while since the first post in this series: “Getting started with Ansible – Preparations“. Recall from the initial post that Ansible was running on the control host and this simple Ansible command:

ansible postgres-servers -a "/bin/echo 11" -f 5

… was successfully executed against the “postgres-servers” group. So far, so good. Getting Ansible up and running just for this would not be very useful, so let’s see where we might go from here.

When you start to think about what you want to automate, you should also think about how you want to organize your Ansible content. The documentation provides some guidelines which might or might not fit your needs. For the scope of this series let’s stick to what the documentation is recommending as one possible way to go. The directory layout on the control host will then be:

[ansible@ansiblecontrol ~]$ sudo mkdir /opt/ansible
[ansible@ansiblecontrol ~]$ sudo chown ansible:ansible /opt/ansible
[ansible@ansiblecontrol ~]$ touch /opt/ansible/development                  # the inventory file for the development hosts      
[ansible@ansiblecontrol ~]$ touch /opt/ansible/staging                      # the inventory file for the staging hosts
[ansible@ansiblecontrol ~]$ touch /opt/ansible/production                   # the inventory file for the production hosts
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common                 # a role valid for "common" stuff
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/tasks
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/handlers
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/templates
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/files
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/vars
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/common/meta
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver    # a role valid for the PostgreSQL stuff
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/tasks
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/handlers
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/templates
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/files
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/vars
[ansible@ansiblecontrol ~]$ mkdir /opt/ansible/roles/postgresqldbserver/meta

The concept of roles is explained in the documentation and you should definitely read that. We’ll come back to this later.

For now let’s place our two PostgreSQL hosts into the “development” inventory:

[ansible@ansiblecontrol ~]$ echo "[postgresql-servers]" >> /opt/ansible/development
[ansible@ansiblecontrol ~]$ echo "192.168.22.171" >> /opt/ansible/development
[ansible@ansiblecontrol ~]$ echo "192.168.22.172" >> /opt/ansible/development

Passing our new inventory file to Ansible we should be able to perform the same simple task as in the first post:

[ansible@ansiblecontrol ~]$ ansible -i /opt/ansible/development postgresql-servers -a "/bin/echo 11"
192.168.22.172 | SUCCESS | rc=0 >>
11

192.168.22.171 | SUCCESS | rc=0 >>
11

Ok, fine, this still works. When it comes to PostgreSQL, one of the first steps when installing from source is to install all the required operating system packages. How can we do that with Ansible?

The initial step is to tell Ansible where to look for our roles. This is done by specifying the “roles_path” configuration parameter in the ansible.cfg configuration file:

[ansible@ansiblecontrol ~]$ cat /etc/ansible/ansible.cfg | grep roles | grep -v "#"
roles_path    = /opt/ansible/roles

From here on we need to set up our role by creating an initial “site.yml” file:

[ansible@ansiblecontrol ansible]$ cat roles/postgresqldbserver/site.yml 
---
# This playbook deploys a single PostgreSQL instance from the source code

- hosts: postgresql-servers
  become: true
  become_user: root

  roles:
    - postgresqldbserver

You can see from the above that the “postgresql-servers” group is referenced. Additionally notice the “become” and the “become_user” flags. As we’re going to use yum to install the packages we need a way to become root on the target system and this is how you can instruct Ansible to do so.
Time to specify how we want to install the packages. This is quite easy as well:

[ansible@ansiblecontrol ansible]$ cat roles/postgresqldbserver/tasks/main.yml 
---
- name: Install PostgreSQL dependencies
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - openssl-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - tcl
   - tcl-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - ksh

What did we do here? We created our first task. We tell Ansible to use “yum” to install our “items” (which are the packages we want to install). You can check the documentation for more information on the yum module.

Let’s see if it works by executing our first task on both PostgreSQL nodes:

[ansible@ansiblecontrol ~]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.171]
ok: [192.168.22.172]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
changed: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
changed: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

PLAY RECAP *********************************************************************
192.168.22.171             : ok=2    changed=1    unreachable=0    failed=0   
192.168.22.172             : ok=2    changed=1    unreachable=0    failed=0   

Cool, we just installed all the dependencies on both nodes with one Ansible command. We additionally want an operating system group for our PostgreSQL deployment, so we add the following lines to the playbook:

- name: Add PostgreSQL operating system group
  group: name=postgres state=present

Execute the playbook again:

[ansible@ansiblecontrol ~]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
ok: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
ok: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

TASK [postgresqldbserver : Add PostgreSQL operating system group] **************
changed: [192.168.22.171]
changed: [192.168.22.172]

PLAY RECAP *********************************************************************
192.168.22.171             : ok=3    changed=1    unreachable=0    failed=0   
192.168.22.172             : ok=3    changed=1    unreachable=0    failed=0   

We did not change any of the packages but added the group. Let’s add the PostgreSQL operating system user by adding these lines to the playbook:

- name: Add PostgreSQL operating system user
  user: name=postgres comment="PostgreSQL binaries owner" group=postgres

Execute again:

[ansible@ansiblecontrol ~]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
ok: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
ok: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

TASK [postgresqldbserver : Add PostgreSQL operating system group] **************
changed: [192.168.22.171]
changed: [192.168.22.172]

TASK [postgresqldbserver : Add PostgreSQL operating system user] ***************
changed: [192.168.22.171]
changed: [192.168.22.172]

PLAY RECAP *********************************************************************
192.168.22.171             : ok=4    changed=2    unreachable=0    failed=0   
192.168.22.172             : ok=4    changed=2    unreachable=0    failed=0   

Really cool and simple. Just to prove it, let’s connect to one of the nodes and check that the postgres user really is there:

[root@ansiblepg2 ~] id -a postgres
uid=1001(postgres) gid=1001(postgres) groups=1001(postgres)
[root@ansiblepg2 ~] 

Perfect. In the next post we’ll install the PostgreSQL binaries.

For your reference, this is the playbook as it looks now:

[ansible@ansiblecontrol ansible]$ cat roles/postgresqldbserver/tasks/main.yml
---
- name: Install PostgreSQL dependencies
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - openssl-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - tcl
   - tcl-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - ksh

- name: Add PostgreSQL operating system group
  group: name=postgres state=present

- name: Add PostgreSQL operating system user
  user: name=postgres comment="PostgreSQL binaries owner" group=postgres

The article Getting started with Ansible – Installing OS packages, creating groups and users appeared first on Blog dbi services.

Telstra WIFI API Consumer on Pivotal Cloud Foundry

Pas Apicella - Mon, 2016-07-25 07:23
If you have heard of the Telstra WIFI API, you will know it allows you to search for WIFI hotspots within a given radius. After signing up for a Telstra dev account at https://dev.telstra.com/, you can use it to obtain the hotspots within a given radius of a given lat/long location.

The WIFI API for Telstra is described at the link below.

  https://dev.telstra.com/content/wifi-api

The following application, which I built on Pivotal Cloud Foundry, consumes this Telstra WIFI API service and, using the Google Maps API along with Spring Boot, shows you all the WIFI hotspots Telstra provides around your current location from a mobile device or a web browser. The live URL is as follows. You will need to agree to share your location and enable location services in your browser when on a mobile device for the map to be of any use. Lastly, this is only useful within Australia, of course.

http://pas-telstrawifi.cfapps.io/



Source Code as follows:

https://github.com/papicella/TelstraWIFIAPIPublic

More Information

https://dev.telstra.com/content/wifi-api
Categories: Fusion Middleware

Glens Falls Hospital, University of Pittsburgh Medical Center (UPMC) and a New York Medical Center Move to Oracle Cloud

Oracle Press Releases - Mon, 2016-07-25 07:00
Press Release
Glens Falls Hospital, University of Pittsburgh Medical Center (UPMC) and a New York Medical Center Move to Oracle Cloud Oracle’s modern, integrated cloud solutions help medical centers cost-effectively drive flexible, scalable innovation

Redwood Shores, Calif.—Jul 25, 2016

Oracle announced today that Glens Falls Hospital, University of Pittsburgh Medical Center (UPMC) and NYU Langone Medical Center have turned to Oracle Cloud to modernize their healthcare insight and administrative resources for staff and patients. These organizations join many other hospitals, healthcare providers and research centers that have moved the management of their people and business processes to the Oracle Cloud.

Today, the healthcare industry faces pressure from all sides—healthcare reform mandates, changing market conditions, increasing regulatory requirements and the constant need for new revenue streams. Therefore, it is essential for healthcare organizations to be sensitive to securing information. An added complexity is that the workforce is diverse, ranging from hourly to highly skilled workers. Mobile technology has also changed the way information is received and used. Healthcare providers are being asked to do more for less, all while juggling challenges on how to acquire and retain talent, move toward a more digital environment, cut costs and consolidate HR processes. They are turning to cloud technology to adapt to these complex needs.

With Oracle Human Capital Management (HCM) Cloud, healthcare institutions have a single, unified platform to grow with confidence and focus less on process and more on quality patient care. Oracle’s Cloud solutions provide universal reporting and analytics that allow organizations easy access to a complete perspective of its HR functions.

“As the healthcare industry transforms, healthcare institutions need to transform their HCM systems to keep pace with competition, as well as source, employ and retain the best talent. Many suffer from disparate systems that don’t work across functions,” said Chris Leone, senior vice president, Applications Development, Oracle. “Oracle HCM Cloud technology not only provides an easy experience for end users to access and navigate through a variety of platforms—including mobile—but also one that is cost-effective and integrated across all business functions, turning information into meaningful action to help improve employee experience.”

As business models change, providers need systems that can keep up with them. A few key areas in which Oracle’s HCM Cloud systems are helping include:

  • Sourcing and retaining talent with mobile applications: When improving and saving lives, the people who touch patients become incredibly important. Oracle’s Cloud technology provides tools that are modern, easy to use and personalized for each individual employee. With most hospital staff constantly on the go, mobile apps put information in employees’ hands for quick and easy information access. Mobile apps also help employees better connect with the business to take control of their jobs, their development and their careers. This leads to higher levels of productivity and engagement.
  • Cost reduction: Standardizing and automating business processes that eliminate multiple systems and duplicate tasks help reduce IT costs. This offers providers increased efficiency and new market opportunities to grow their businesses.
  • Data insight: Oracle’s Cloud solutions provide real-time business insights into data trends and analytics that directly impact the workforce and profitability. Oracle’s Cloud technology can predict potential staffing and skill gaps, as well as assess profits and losses to manage costs.

“We needed an integrated system that would allow us to provide excellent healthcare service to our patients, while still serving our employees with more convenient HR systems that include global payroll functionalities,” said Kyle Brock, vice president of Human Resources, Glens Falls Hospital. “By going with the entire Oracle HCM Cloud suite, we were able to realize significant cost savings through contract consolidation to increase team productivity and improve overall department efficiency. We were also able to shift our entire HR operations to the Cloud, while not impacting other hospital priorities.”

UPMC, the largest non-governmental employer in Pennsylvania, integrates more than 60,000 employees, 20 hospitals, 500 doctors’ offices and outpatient sites, a 2.9-million-member health insurance division, and international and commercial operations. “Healthcare is faced with increased and far more diverse requirements than even a few years ago,” said John Galley, senior vice president & chief HR officer, UPMC. “As UPMC leads the way in creating patient-centered, value-based healthcare, it is crucial that we have a modern platform in place to help us address new challenges, keep costs low, and deliver flexibility. Oracle’s Cloud-based platform and vision for healthcare made it the right choice for UPMC.”

Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

How to Find the Row That Has the Maximum Value for a Column in Oracle

Complete IT Professional - Mon, 2016-07-25 06:00
Do you need to find the data for a row which has the maximum value for a specific column in Oracle? Learn how to do it and see some alternative answers in this article. The Table Structure Let’s say you have a table called sales_volume that looks like this: CITY START_DATE SALES El Paso 27/May/15 […]
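
As a flavour of the technique, here are two common approaches, sketched against the sales_volume table described in the excerpt (column names assumed from the excerpt; both queries return every tying row if several rows share the maximum):

-- subquery approach: find the maximum, then match it
select city, start_date, sales
from   sales_volume
where  sales = (select max(sales) from sales_volume);

-- analytic approach: rank rows by sales and keep the top rank
select city, start_date, sales
from (
        select s.*,
               rank() over (order by sales desc) as sales_rank
        from   sales_volume s
     )
where  sales_rank = 1;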
Categories: Development

PeopleSoft Guest User Security

Being hospitable and welcoming to guests is usually considered good manners.  That said, being a gracious host does not mean you should be careless with your security.

With regard to PeopleSoft application security, the user GUEST is a default account created with the installation of PeopleSoft. When performing a PeopleSoft security audit, several attributes of the GUEST user are reviewed, including the following - take a look at your settings today:

For the GUEST user:

  • Change the default password
  • Ensure it does not have access to sensitive menus and/or roles, including not having access to the following (a quick audit query is sketched below):
    • The role ‘PeopleSoft User’
    • Any role that includes the permission list PTPT1000
    • The role ‘PAPP_USER’
    • Any role that includes the permission list PAPP0002
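
One way to verify these assignments is to query the PeopleSoft security tables directly. The following is a minimal sketch - it assumes the standard PSROLEUSER and PSROLECLASS security tables, so adjust for your tools release:

-- list every role granted to GUEST and the permission lists each role carries
select ru.rolename,
       rc.classid   as permission_list
from   psroleuser ru
       left join psroleclass rc
              on rc.rolename = ru.rolename
where  ru.roleuser = 'GUEST'
order  by ru.rolename, rc.classid;

Any row showing one of the roles or permission lists above indicates the GUEST user needs attention.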

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Securing Application Express when using Oracle REST Data Services (ORDS)

Joel Kallman - Sun, 2016-07-24 21:54
If you are using Oracle REST Data Services as the "PL/SQL Gateway" for Oracle Application Express, ensure that your ORDS configuration includes the following line:

<entry key="security.requestValidationFunction">wwv_flow_epg_include_modules.authorize</entry>

It is important that you do this, and let me explain why.

Fundamentally, the APEX "engine" is really nothing more than a big PL/SQL program running inside the Oracle Database.  When a browser makes a request for a page in an APEX application, that request is mapped to a PL/SQL procedure which is running inside the database.  If you examine an APEX URL in your browser, you may see something like 'f?p=...', and this is invoking a PL/SQL procedure in the database named 'F' with a parameter named 'P'.

There are a number of procedures in the APEX engine which are intended to be invoked from a URL.  But there may be other procedures in your database, possibly owned by users other than the Application Express user, which are not intended to be called from a URL.  In some cases, these other procedures could leak information or introduce some other class of security issue.  There should be a simple list of procedures which are permitted to be invoked from a URL, and all others should be blocked.  This is known as a "whitelist", and fortunately, there is a native facility in APEX which defines this whitelist.  You just need to tell ORDS about this whitelist.

When you configure ORDS with the following entry in the configuration file:

<entry key="security.requestValidationFunction">wwv_flow_epg_include_modules.authorize</entry>

You are instructing ORDS to validate the PL/SQL procedure requested in the URL using the PL/SQL function wwv_flow_epg_include_modules.authorize.  This whitelist will contain all of the necessary entry points into the APEX engine, nothing more, nothing less.
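
To illustrate the mechanics, here is a conceptual sketch of how such a request validation function works - this is not the actual APEX implementation, and the my_epg_whitelist table and function name are hypothetical:

create or replace function my_request_authorize (
    procedure_name in varchar2
) return boolean
as
    l_count pls_integer;
begin
    -- permit the request only if the procedure is explicitly whitelisted
    select count(*)
    into   l_count
    from   my_epg_whitelist          -- hypothetical table of permitted names
    where  upper(proc_name) = upper(procedure_name);

    return l_count > 0;
end my_request_authorize;
/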

If you rely upon functionality in your application which makes use of PL/SQL procedures not defined in this whitelist, this functionality will break when you specify the security.requestValidationFunction.  I often encounter customers who invoke PL/SQL procedures in their application schema to download files, but there are better (and more secure) ways to do this, which would not break when implementing this whitelist.

Like any change to infrastructure or configuration, you should thoroughly test your applications with this setting prior to introducing it into a production environment.  But if one or two things break because of this change, don't use that as an excuse to not implement this configuration change.  Identify the issues and correct them.  While there is a method in place to extend the whitelist, in practice, this should be seldom used.

If you're using ORDS as a mod_plsql replacement for your PL/SQL Web Toolkit application and not using APEX, then please avoid this configuration setting.  APEX typically won't be installed in your database, and the whitelist will be irrelevant for your application.

The function wwv_flow_epg_include_modules.authorize has been around for more than 10 years (our teammate Scott added it in 2005), and it has been a part of the embedded PL/SQL Gateway and mod_plsql default configuration for a long time.  And while it has been documented for use with ORDS, a reasonable person might ask why this isn't simply part of the default configuration of APEX & ORDS.  I did confirm with the ORDS team that this will be included in the default configuration when using the PL/SQL Gateway of ORDS, beginning in ORDS 3.0.7.
