Feed aggregator

Oracle home and connecting to the database

Tom Kyte - Wed, 2016-07-27 04:46
Sir, I'm a beginner with Oracle Database. My query is: how can we connect to a database if we do not know the ORACLE_HOME? E.g., if we have 5 databases running on a server and there are 5 different ORACLE_HOMEs (i.e., ORACLE_HOME is different for each datab...
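For what it's worth, a minimal sketch of one common approach on Linux (assuming the standard /etc/oratab format SID:ORACLE_HOME:Y|N, and a hypothetical SID DB1):

#!/usr/bin/bash
# derive ORACLE_HOME for a given SID from /etc/oratab
export ORACLE_SID=DB1
export ORACLE_HOME=$(awk -F: -v sid="$ORACLE_SID" '$1 == sid {print $2}' /etc/oratab)
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba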
Categories: DBA Blogs

Hierarchical XML

Tom Kyte - Wed, 2016-07-27 04:46
Hi Team, I have the below data in a table (id, parent_id): (1,2), (3,1), (4,1), (5,2), (6,2), (7,3), (8,3). I have tried hierarchical functions and I have got the below data: SELECT id, parent_id, LEVEL, CONNECT_...
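A minimal sketch of the kind of query the question describes (assuming a hypothetical table name t, and that root rows are those whose parent_id does not appear as an id):

select id,
       parent_id,
       level,
       sys_connect_by_path(id, '/') as path
from   t
start  with parent_id not in (select id from t)
connect by prior id = parent_id;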
Categories: DBA Blogs

SQL to calculate date week numbers

Tom Kyte - Wed, 2016-07-27 04:46
Hi, I would appreciate it if someone could point me in the right direction here. How do I write this SQL statement for the below? Here are the criteria: I'm trying to go back 52 weeks based on (today's date). So for next week this week will be tw...
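A minimal sketch of one building block for this (assuming ISO week numbering is acceptable): generate the start date of each of the last 52 weeks relative to today's date and label it with its ISO week number.

select trunc(sysdate, 'IW') - (level - 1) * 7                              as week_start,
       to_char(trunc(sysdate, 'IW') - (level - 1) * 7, 'IYYY-IW')          as iso_week
from   dual
connect by level <= 52;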
Categories: DBA Blogs

ADF BC REST 12.2.1.0 Running Live in Oracle Java Cloud

Andrejus Baranovski - Wed, 2016-07-27 04:22
It has been almost two years since my previous post about Oracle Java Cloud and ADF - End-To-End ADF Cloud Deployment Process. There has been huge improvement in Oracle Cloud, with great progress made by Oracle in these two years. Access and management of the Oracle Cloud environment is so smooth now that it is hard to tell whether you are working with your own on-premise servers or with cloud machines. This is great, I'm impressed - well done Oracle.

I have implemented ADF BC REST application with JDEV 12.2.1.0 and deployed it to Java Cloud. You can access REST service yourself from this URL: http://140.86.3.179/restapp/rest/1/Employees (username: redsam, password: welcome1).
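If you prefer the command line, a quick way to try the service (assuming it accepts HTTP basic authentication) is:

curl -u redsam:welcome1 http://140.86.3.179/restapp/rest/1/Employees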

I can monitor the deployed ADF BC REST application through Enterprise Manager in the Cloud. ADF BC REST requests and responses go through the Oracle RESTServlet; this is visible in the statistics:


No changes are required to an ADF application for it to be deployed to the Cloud. Simply make sure to provide the correct data source name, so that data can be loaded from Database Cloud. I have created a jdbc/HrDS data source in Java Cloud, pointing to Database Cloud. The same data source is used for the AM configuration; this AM will be responsible for returning data from ADF BC for REST requests:


Database Cloud offers a public IP, which can be used to access the DB from outside. If you want to reference a DB schema from a data source created in Java Cloud, instead of using the public IP you should use the DB instance name (example: RedSamuraiDB) and the service name of the pluggable DB created in Database Cloud (example: PDB1.ltredsamurai.oraclecloud.internal):
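For illustration, the JDBC URL behind such a data source would then look something like this (the 1521 listener port is an assumption):

jdbc:oracle:thin:@//RedSamuraiDB:1521/PDB1.ltredsamurai.oraclecloud.internal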


ADF applications deployed on Java Cloud will be fetching data from Database Cloud through data source.

Make sure to define a WAR deployment profile for the ADF application and specify the context root (here restapp, as seen in the service URL):


The ADF application should contain an EAR deployment profile, which references the WAR profile:


Uncheck "Auto Generate and Synchronize WebLogic JDBC Descriptors During Deployment" option in Application Properties (we don't need to deploy internal JDBC connection to Cloud):


Oracle Cloud offers a services control dashboard. Here we can monitor active services, create new ones, or stop existing ones. There is a list of options for managing the Java Cloud instance. Since my ADF BC REST application is secured with ADF Security, I could go to the WebLogic Server Console and create a test user directly in the DefaultAuthenticator:


A test user can be created in exactly the same way as on a WebLogic server running on-premise:


You can deploy the ADF app on Oracle Cloud through the EAR, using Enterprise Manager (don't forget to activate the changes and start the application). The application status can be reviewed in the deployments list:


Try to access ADF BC REST service http://140.86.3.179/restapp/rest/1/Employees and enter redsam/welcome1 for login:


You should see JSON-formatted data in the response from ADF BC REST:
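For reference, ADF BC REST responses use a standard envelope of items, count, hasMore and links; a minimal sketch of the shape (the field values here are illustrative, not taken from the live service):

{
  "items" : [
    { "EmployeeId" : 100, "FirstName" : "Steven", "LastName" : "King" }
  ],
  "count" : 1,
  "hasMore" : false,
  "links" : [ { "rel" : "self", "href" : "http://140.86.3.179/restapp/rest/1/Employees" } ]
}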


Download the sample ADF BC REST application I used for this Java Cloud deployment - ADFBCRestApp_v9.zip.

New OA Framework 12.2.4 Update 11 Now Available

Steven Chan - Wed, 2016-07-27 02:05

Web-based content in Oracle E-Business Suite 12 runs on the Oracle Application Framework (OAF or "OA Framework") user interface libraries and infrastructure.   Since the release of Oracle E-Business Suite 12.2 in 2013, we have released several updates to Oracle Application Framework to fix performance, security, and stability issues. 

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Pack. "Cumulative" means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for EBS 12.2.4 is now available:


Where is the documentation for this update?

Instructions for installing this OAF Release Update Pack are here:

Who should apply this patch?

All EBS 12.2 users should apply this patch.  Future OAF patches for EBS 12.2 will require this patch as a prerequisite. 

What's new in this update?

Fixes are included for the following critical issues:

  • MAC validations fail for the JTT URL cookie "jfn" in interoperability flows.
  • The message box bean is not accessibility compliant.
  • Proxy user sessions throw an 'insufficient privileges' error when the home page mode is flat.
  • Empty MDS customization documents are present in the database tables after upgrading to EBS Release 12.2.3; consequently, users cannot access the pages.
  • In a KFF with validation disabled, invalid values entered by the end user are not retained after navigating back from another page.

This Bundle Patch requires the R12.ATG_PF.C.Delta.4 Release Update Pack as a mandatory prerequisite.

Related Articles


Categories: APPS Blogs

CI, DevOps and ALM for Oracle SOA Suite with Oracle Developer Cloud Service

Shay Shmeltzer - Tue, 2016-07-26 18:31

We have a lot of developers who are using JDeveloper to develop applications with Oracle SOA Suite, and in this blog I wanted to show them how the combination of JDeveloper along with Oracle Developer Cloud Service can help automate their whole development and delivery lifecycle.

One unique aspect of Developer Cloud Service is that it has an instance of JDeveloper available in the build environment. This allows customers who are building Oracle SOA artifacts to leverage the OJDeploy mechanism to package their applications as part of a continuous integration cycle just like they do during development time.

With the improved DevCS integration that we added in JDeveloper 12.2.1, developers can go beyond integration with the Git server offered by DevCS: they can now interact with the DevCS task tracking system directly, as well as associate code changes with the specific tasks they are working on.

In this 10-minute video I show:

  • Creating Ant-based builds for Oracle SOA artifacts
  • Automating continuous integration builds and packaging for Oracle SOA from Developer Cloud Service
  • Managing SOA project code with Git and Developer Cloud Service
  • Tracking tasks from JDeveloper and monitoring agile development in Developer Cloud Service

By the way, for those who would rather use Maven to automate their builds - this is totally possible and supported in DevCS as well. There is a set of videos that show you how to do that here.

(If you haven't seen how quick and easy it is to create a DevCS project, create a git repo, track tasks and create agile team boards - see this video and this one too).

Note that developers get access to Developer Cloud Service with every subscription to the Oracle SOA Cloud Service. 

Here are the two Ant files used in the video:

Note that in the build.properties you'll want to change the application and project names to match the ones you are working on.

As you can see in the video, the build.xml is generated for you automatically; you just need to add the line:

<property environment="env" />

build.properties

oracle.commons=../../../../oracle_common/
install.dir=../../../..
oracle.home=${env.ORACLE_HOME_SOA_12_2_1}
oracle.jdeveloper.workspace.path=${env.WORKSPACE}/e2e-1201-composites.jws
middleware.home=${env.MIDDLEWARE_HOME_SOA_12_2_1}
workspace=${env.WORKSPACE}
oracle.jdeveloper.ant.library=${env.ORACLE_HOME_SOA_12_2_1}/jdev/lib/ant-jdeveloper.jar
oracle.jdeveloper.deploy.dir=${env.WORKSPACE}/ProcessOrder/deploy
oracle.jdeveloper.ojdeploy.path=${oracle.home}/jdev/bin/ojdeploy
javac.nowarn=off
oracle.jdeveloper.project.name=ProcessOrder
oracle.jdeveloper.deploy.outputfile=${env.WORKSPACE}/e2e-1201-composites/ProcessOrder/deploy/${profile.name}
output.dir=classes
javac.deprecation=off
oracle.jdeveloper.deploy.profile.name=*
javac.debug=on

build.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!--Ant buildfile generated by Oracle JDeveloper-->
<!--Generated Jul 25, 2016 5:11:09 PM-->
<project xmlns="antlib:org.apache.tools.ant" name="ProcessOrder" default="all" basedir=".">
<property environment="env" />
<property file="build.properties"/>
<path id="library.SOA.Designtime">
<pathelement location="${install.dir}/soa/plugins/jdeveloper/extensions/oracle.sca.modeler.jar"/>
</path>
<path id="library.SOA.Runtime">
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar"/>
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-api.jar"/>
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-core.jar"/>
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.fabric_11.1.1/edn.jar"/>
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar"/>
<pathelement location="${oracle.commons}/modules/com.oracle.webservices.fabric-common-api.jar"/>
</path>
<path id="library.BPEL.Runtime">
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.bpel_11.1.1/orabpel.jar"/>
</path>
<path id="library.Mediator.Runtime">
<pathelement location="${install.dir}/soa/soa/modules/oracle.soa.mediator_11.1.1/mediator_client.jar"/>
</path>
<path id="library.MDS.Runtime">
<pathelement location="${oracle.commons}/modules/oracle.mds/mdsrt.jar"/>
</path>
<path id="library.BC4J.Service.Runtime">
<pathelement location="${oracle.commons}/modules/oracle.adf.model/adfbcsvc.jar"/>
<pathelement location="${oracle.commons}/modules/oracle.adf.model/adfbcsvc-share.jar"/>
<pathelement location="${oracle.commons}/modules/commonj.sdo.backward.jar"/>
<pathelement location="${oracle.commons}/modules/commonj.sdo.jar"/>
<pathelement location="${oracle.commons}/modules/oracle.toplink/eclipselink.jar"/>
<pathelement location="${oracle.commons}/modules/com.oracle.webservices.fmw.wsclient-impl.jar"/>
<pathelement location="${oracle.commons}/modules/com.oracle.webservices.fmw.jrf-ws-api.jar"/>
<pathelement location="${oracle.commons}/modules/com.oracle.webservices.fmw.web-common-schemas-impl.jar"/>
</path>
<path id="classpath">
<path refid="library.SOA.Designtime"/>
<path refid="library.SOA.Runtime"/>
<path refid="library.BPEL.Runtime"/>
<path refid="library.Mediator.Runtime"/>
<path refid="library.MDS.Runtime"/>
<path refid="library.BC4J.Service.Runtime"/>
</path>
<target name="init">
<tstamp/>
<mkdir dir="${output.dir}"/>
</target>
<target name="all" description="Build the project" depends="deploy,compile,copy"/>
<target name="clean" description="Clean the project">
<delete includeemptydirs="true" quiet="true">
<fileset dir="${output.dir}" includes="**/*"/>
</delete>
</target>
<target name="deploy" description="Deploy JDeveloper profiles" depends="init">
<taskdef name="ojdeploy" classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask" uri="oraclelib:OJDeployAntTask"
classpath="${oracle.jdeveloper.ant.library}"/>
<ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask" executable="${oracle.jdeveloper.ojdeploy.path}"
ora:buildscript="${oracle.jdeveloper.deploy.dir}/ojdeploy-build.xml"
ora:statuslog="${oracle.jdeveloper.deploy.dir}/ojdeploy-statuslog.xml">
<ora:deploy>
<ora:parameter name="workspace" value="${oracle.jdeveloper.workspace.path}"/>
<ora:parameter name="project" value="${oracle.jdeveloper.project.name}"/>
<ora:parameter name="profile" value="${oracle.jdeveloper.deploy.profile.name}"/>
<ora:parameter name="nocompile" value="false"/>
<ora:parameter name="outputfile" value="${oracle.jdeveloper.deploy.outputfile}"/>
</ora:deploy>
</ora:ojdeploy>
</target>
<target name="compile" description="Compile Java source files" depends="init">
<javac destdir="${output.dir}" classpathref="classpath" debug="${javac.debug}" nowarn="${javac.nowarn}"
deprecation="${javac.deprecation}" encoding="UTF8" source="1.8" target="1.8">
<src path="SOA/SCA-INF/src"/>
</javac>
</target>
<target name="copy" description="Copy files to output directory" depends="init">
<patternset id="copy.patterns">
<include name="**/*.GIF"/>
<include name="**/*.JPEG"/>
<include name="**/*.JPG"/>
<include name="**/*.PNG"/>
<include name="**/*.cpx"/>
<include name="**/*.dcx"/>
<include name="**/*.ejx"/>
<include name="**/*.gif"/>
<include name="**/*.ini"/>
<include name="**/*.jpeg"/>
<include name="**/*.jpg"/>
<include name="**/*.png"/>
<include name="**/*.properties"/>
<include name="**/*.sva"/>
<include name="**/*.tag"/>
<include name="**/*.tld"/>
<include name="**/*.wsdl"/>
<include name="**/*.xcfg"/>
<include name="**/*.xlf"/>
<include name="**/*.xml"/>
<include name="**/*.xsd"/>
<include name="**/*.xsl"/>
<include name="**/*.exm"/>
<include name="**/*.xml"/>
<exclude name="build.xml"/>
</patternset>
<copy todir="${output.dir}">
<fileset dir="SOA/SCA-INF/src">
<patternset refid="copy.patterns"/>
</fileset>
<fileset dir=".">
<patternset refid="copy.patterns"/>
</fileset>
</copy>
</target>
</project>

Categories: Development

Oracle E-Business Suite 12.1 and 12.2 Support for TLS 1.2 Added

Oracle has released support for TLS 1.2 in Oracle E-Business Suite 12.1 and 12.2. Previously, Oracle E-Business Suite only supported SSLv3 and TLS 1.0, which are no longer approved for use with Federal systems and have not been PCI-DSS compliant since June 2014. For TLS 1.2 support, new My Oracle Support (MOS) documents are available:

Enabling TLS in Oracle E-Business Suite Release 12.2 (Doc ID 1367293.1)

Enabling TLS in Oracle E-Business Suite Release 12.1 (Doc ID 376700.1)

Oracle E-Business Suite 11.5 and 12.0 are desupported; therefore, these versions will continue to support only SSLv3 and TLS 1.0.

Integrigy recommends that all Oracle E-Business Suite implementations use an external SSL/TLS termination point, such as an F5 BIG-IP load balancer, rather than the Oracle E-Business Suite TLS implementation. This provides a more robust TLS implementation and allows for faster patching of the SSL technology stack. In addition, an external TLS termination point is usually maintained by network and/or security staff for multiple applications, off-loading this responsibility from the Oracle DBAs, who often have only limited experience with the complexity of network encryption and certificates. The one disadvantage is that the network traffic between the load balancer and the Oracle E-Business Suite application server is unencrypted; however, this traffic is normally limited to VLANs within the data center.

Encryption, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Explore the Next Era of Commerce @ Oracle Openworld 2016

Linda Fishman Hoyle - Tue, 2016-07-26 15:21

A Guest Post by Jeri Kelley, Senior Principal Product Manager, Oracle (pictured left)

The CX Commerce track at Oracle OpenWorld 2016 is where commerce professionals come together to learn, network, and explore. Don't miss out on the many great reasons to attend, including in-depth product updates, customer success stories, hands-on labs, networking events, and more.

Customer Success Stories, Product Experts and Live Demonstrations

Commerce attendees can explore best practices and share knowledge in sessions including:

  • Roadmap sessions for Oracle Commerce and Oracle Commerce Cloud
  • Thought leadership sessions featuring Oracle strategy and industry experts
  • Best practice customer panels
  • Hands-on Labs
  • Customer and partner case studies

We encourage you to expand your horizons and take part in more than just sessions focused on commerce. Explore all that CX Central @ OpenWorld has to offer.

Kick-Off with the Commerce General Session

Join Oracle Commerce strategy experts, customers, and our partner TATA Consulting Services to learn tips, techniques, and best practices on what commerce experts are thinking about and how they are designing commerce engagements for the future.

  • Re-defining Digital Commerce for 2020 [GEN 6314]

Sessions of Special Interest

  • Commerce @ Oracle: Vision and Strategy [CON6315]
  • Cloud Ready? Key Considerations for Adopting SaaS for Commerce [CON7162]
  • The next evolution of B2B eCommerce with industry expert Justin King [CON7165]
  • Electrolux Expands OmniChannel Strategy With New Multi-Site Commerce Platform [CON7197]
  • Preparing the business for a digital commerce technology transformation (with Mason Companies) [CON7168]

Commerce Demo Zone

Take a break in the CX-Commerce Demo Zone and see the latest Commerce @ Oracle product demonstrations led by members of the commerce product management and sales consulting teams. See the latest features and learn from our customer sites:

  • Oracle Commerce Cloud – Learn all about the latest release
  • Customer Showcase – Stop by and take a tour of featured customer sites powered by Oracle Commerce Cloud

All sessions and the demo zone for customer experience will be located on the 2nd floor of Moscone West in San Francisco.

Customer Events

An Oracle OpenWorld preview would not be complete without a mention of customer appreciation events:

  • Commerce Customer Dinner @ The Waterfront Restaurant – By invitation only and a chance to network with Oracle Commerce product management and commerce peers
  • Oracle Appreciation Event to be announced

Commerce at a Glance

Visit Commerce—CX Central @ OpenWorld for full details on speakers, conference sessions, exhibits, and entertainment.

For more information

Registration is now open. Visit Oracle OpenWorld 2016 for information on CX Sales sessions and other details.

See what attendees are already saying about Oracle OpenWorld Commerce Track and more by joining the conversation on social media: Twitter, #oow16, Facebook, LinkedIn, blog.

We look forward to seeing everyone in San Francisco September 18 – 22, 2016.

TLS 1.2 Certified with E-Business Suite 12.1

Steven Chan - Tue, 2016-07-26 10:31

I'm pleased to announce that Oracle E-Business Suite 12.1 inbound, outbound, and loopback connections are now certified with TLS 1.2, 1.1, and 1.0. If you have not already migrated from SSL to TLS, you should begin planning the migration for your environment. 

For more information on patching and configuration requirements when migrating to TLS 1.2 from TLS 1.0 or SSL or enabling TLS for the first time, refer to the following My Oracle Support Knowledge Document:

Migrating to TLS 1.2 per the steps and configuration outlined in MOS Note 376700.1 will do the following:

  • Address recent security vulnerabilities (e.g. POODLE, FREAK, LOGJAM, RC4NOMORE)
  • Migrate to new OpenSSL libraries which will change the method by which you generate and import your certificate

Configuration Options

  • Configure TLS 1.2 with Backward Compatibility

    The default Oracle E-Business Suite 12.1 configuration allows for the handshake between the client and server to negotiate and use the highest version of TLS (either 1.2, 1.1, or 1.0) supported by both parties.

    For example, if the outbound connection used by iProcurement is by default configured for TLS 1.2, 1.1 and 1.0 and if a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.2 and a common cipher suite is found, then TLS 1.2 will be used. If a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.1 and a common cipher suite is found, then the handshake negotiation will resolve to use TLS 1.1.

  • Configure TLS 1.2 Only (Optional Configuration)

You may optionally configure Oracle E-Business Suite to use TLS 1.2 only for all inbound, outbound, and loopback connections. (A quick way to verify which protocol a server will negotiate is shown in the sketch after this list.)

Warning: If you restrict Oracle E-Business Suite 12.1 to use only TLS 1.2, this configuration could result in the inability to connect with other sites or browsers that do not support TLS 1.2.
  • Disable the HTTP Port (Optional Configuration)

You may optionally configure the Oracle HTTP Server (OHS) delivered with the Oracle E-Business Suite application technology stack to disable the HTTP port and use the HTTPS port only.
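A quick way to verify which protocol a server will accept is openssl s_client (a sketch, assuming a hypothetical EBS host ebs.example.com on port 443):

# handshake completes only if the server accepts TLS 1.2
openssl s_client -connect ebs.example.com:443 -tls1_2 < /dev/null
# after a TLS 1.2 only configuration, forcing TLS 1.0 should fail
openssl s_client -connect ebs.example.com:443 -tls1 < /dev/null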

Where can I learn more?
There are several guides and documents that cover Oracle E-Business Suite 12.1 secure configuration and encryption. You can learn more by reading the following:

SSL or TLS 1.0 Reference Note

If you are using SSL or TLS 1.0 and need to review your current configuration or renew your certificate, you may refer to the following:

Related Articles

Categories: APPS Blogs

Positive Pay Implementation – Step by Step Guide

OracleApps Epicenter - Tue, 2016-07-26 10:27
Now that you know what Positive Pay is, you need to find out how to start using Positive Pay. First, we need to start by saying that EVERY bank handles Positive Pay differently. The steps/outline presented here are just a representation of what the most common implementation procedure could look like. 1. Contact your bank […]
Categories: APPS Blogs

Getting started with Ansible – Creating the PostgreSQL instance

Yann Neuhaus - Tue, 2016-07-26 08:20

In the last three posts we did the initial Ansible setup, installed the operating system packages, created the PostgreSQL group and user and downloaded, compiled and installed the PostgreSQL binaries from source. In this post we’ll look at how we can use Ansible to create our first PostgreSQL instance.

As a reminder this is our current playbook:

[ansible@ansiblecontrol ansible]$ pwd
/opt/ansible
[ansible@ansiblecontrol ansible]$ cat roles/postgresqldbserver/tasks/main.yml 
---
- name: Install PostgreSQL dependencies
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - openssl-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - tcl
   - tcl-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - ksh

- name: Add PostgreSQL operating system group
  group: name=postgressomegroup state=present

- name: Add PostgreSQL operating system user
  user: name=postgres comment="PostgreSQL binaries owner" group=postgres

- name: Download the PostgreSQL 9.5.3 sources
  get_url: url=https://ftp.postgresql.org/pub/source/v9.5.3/postgresql-9.5.3.tar.bz2 dest=/var/tmp mode=0755

- name: Copy PostgreSQL install script to targets
  copy: src=install_pg953.sh dest=/var/tmp/install_pg953.sh owner=root group=root mode="u=rwx"

- name: Compile and install PostgreSQL
  shell: /var/tmp/install_pg953.sh >> /var/tmp/install_pg_log
  args:
    executable: /usr/bin/bash

What we need now is an additional task that creates the PostgreSQL cluster on disk. As with the installation from source, we'll need to do some scripting for this. Here is a very basic script for doing this:

#!/usr/bin/bash

# Adjust these variables to match your environment
PGUSER="postgres"
PGGROUP="postgres"
DATADIRECTORY="/u02/pgdata/PG1"
XLOGLOCATION="/u03/pgdata/PG1"
PGHOME="/u01/app/postgres/product/95/db_3/"
POSTGRESDBPASSWORD="postgres"

# Create the data and xlog directories and hand them over to the postgres user
mkdir -p ${DATADIRECTORY}
mkdir -p ${XLOGLOCATION}
chown ${PGUSER}:${PGGROUP} ${DATADIRECTORY}
chown ${PGUSER}:${PGGROUP} ${XLOGLOCATION}

# initdb reads the superuser password from a temporary file (--pwfile);
# -X places the xlog directory outside the data directory, -k enables data checksums
su - ${PGUSER} -c "echo ${POSTGRESDBPASSWORD} > /var/tmp/tmp_pwd"
su - ${PGUSER} -c "${PGHOME}/bin/initdb -D ${DATADIRECTORY} --pwfile=/var/tmp/tmp_pwd -X ${XLOGLOCATION} -k"

rm -f /var/tmp/tmp_pwd

# Start the instance
su - ${PGUSER} -c "${PGHOME}/bin/pg_ctl -D ${DATADIRECTORY} start"

As with the PostgreSQL installation script we’ll put this into the “files” directory of our “postgresqldbserver” role:

[ansible@ansiblecontrol ansible]$ ls roles/postgresqldbserver/files/
create_pg953_cluster.sh  install_pg953.sh

We’ll use the same logic again and add two new tasks to our existing playbook:

- name: Copy the PostgreSQL cluster creation script to the targets
  copy: src=create_pg953_cluster.sh dest=/var/tmp/create_pg953_cluster.sh owner=root group=root mode="u=rwx"

- name: Create and start the PostgreSQL instance
  shell: /var/tmp/create_pg953_cluster.sh >> /var/tmp/create_pg_cluster.log
  args:
    executable: /usr/bin/bash

Once the playbook is executed again:

[ansible@ansiblecontrol ansible]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.171]
ok: [192.168.22.172]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
ok: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
ok: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

TASK [postgresqldbserver : Add PostgreSQL operating system group] **************
ok: [192.168.22.171]
ok: [192.168.22.172]

TASK [postgresqldbserver : Add PostgreSQL operating system user] ***************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Download the PostgreSQL 9.5.3 sources] **************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Copy PostgreSQL install script to targets] **********
changed: [192.168.22.171]
changed: [192.168.22.172]

TASK [postgresqldbserver : Compile and install PostgreSQL] *********************
changed: [192.168.22.172]
changed: [192.168.22.171]

TASK [postgresqldbserver : Copy the PostgreSQL cluster creation script to the targets] ***
ok: [192.168.22.171]
changed: [192.168.22.172]

TASK [postgresqldbserver : Create and start the PostgreSQL instance] ***********
changed: [192.168.22.172]
changed: [192.168.22.171]

PLAY RECAP *********************************************************************
192.168.22.171             : ok=9    changed=3    unreachable=0    failed=0   
192.168.22.172             : ok=9    changed=4    unreachable=0    failed=0   

… we should have a running PostgreSQL instance on both nodes. Let's check:

[root@ansiblepg1 tmp]# ps -ef | grep postgres
postgres 17284     1  0 08:47 ?        00:00:00 /u01/app/postgres/product/95/db_3/bin/postgres -D /u02/pgdata/PG1
postgres 17288 17284  0 08:47 ?        00:00:00 postgres: checkpointer process   
postgres 17289 17284  0 08:47 ?        00:00:00 postgres: writer process   
postgres 17290 17284  0 08:47 ?        00:00:00 postgres: wal writer process   
postgres 17291 17284  0 08:47 ?        00:00:00 postgres: autovacuum launcher process   
postgres 17292 17284  0 08:47 ?        00:00:00 postgres: stats collector process   
root     17294 10223  0 08:47 pts/1    00:00:00 grep --color=auto postgres
[root@ansiblepg2 ~]# ps -ef | grep postgres
postgres 16951     1  0 08:47 ?        00:00:00 /u01/app/postgres/product/95/db_3/bin/postgres -D /u02/pgdata/PG1
postgres 16955 16951  0 08:47 ?        00:00:00 postgres: checkpointer process   
postgres 16956 16951  0 08:47 ?        00:00:00 postgres: writer process   
postgres 16957 16951  0 08:47 ?        00:00:00 postgres: wal writer process   
postgres 16958 16951  0 08:47 ?        00:00:00 postgres: autovacuum launcher process   
postgres 16959 16951  0 08:47 ?        00:00:00 postgres: stats collector process   
root     16985 16964  0 08:48 pts/0    00:00:00 grep --color=auto postgres

Isn’t that cool? Sure, there is a lot of hard coding in here, which needs to be replaced by variables. That is a topic for another post.
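As a teaser, a minimal sketch of where this is heading (a hypothetical roles/postgresqldbserver/vars/main.yml, not yet part of the playbook):

---
# values that are currently hard coded in the scripts
pg_user: postgres
pg_group: postgres
pg_datadir: /u02/pgdata/PG1
pg_xlogdir: /u03/pgdata/PG1

Tasks and templates could then reference these as {{ pg_datadir }} and so on.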

For your reference here is the complete playbook:

[ansible@ansiblecontrol ansible]$ cat /opt/ansible/roles/postgresqldbserver/tasks/main.yml 
---
- name: Install PostgreSQL dependencies
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - openssl-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - tcl
   - tcl-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - ksh

- name: Add PostgreSQL operating system group
  group: name=postgressomegroup state=present

- name: Add PostgreSQL operating system user
  user: name=postgres comment="PostgreSQL binaries owner" group=postgres

- name: Download the PostgreSQL 9.5.3 sources
  get_url: url=https://ftp.postgresql.org/pub/source/v9.5.3/postgresql-9.5.3.tar.bz2 dest=/var/tmp mode=0755

- name: Copy PostgreSQL install script to targets
  copy: src=install_pg953.sh dest=/var/tmp/install_pg953.sh owner=root group=root mode="u=rwx"

- name: Compile and install PostgreSQL
  shell: /var/tmp/install_pg953.sh >> /var/tmp/install_pg_log
  args:
    executable: /usr/bin/bash

- name: Copy the PostgreSQL cluster creation script to the targets
  copy: src=create_pg953_cluster.sh dest=/var/tmp/create_pg953_cluster.sh owner=root group=root mode="u=rwx"

- name: Create and start the PostgreSQL instance
  shell: /var/tmp/create_pg953_cluster.sh >> /var/tmp/create_pg_cluster.log
  args:
    executable: /usr/bin/bash
 

The post Getting started with Ansible – Creating the PostgreSQL instance appeared first on Blog dbi services.

Lost Concatenation

Jonathan Lewis - Tue, 2016-07-26 04:46

This note models one feature of a problem that came up at a client site recently from a system running 12.1.0.2 – a possible bug in the way the optimizer handles a multi-column in-list that can lead to extremely bad cardinality estimates.

The original query was a simple three table join which produced a bad plan with extremely bad cardinality estimates; there was, however, a type-mismatch in one of the predicates (of the form “varchar_col = numeric”), and when this design flaw was addressed the plan changed dramatically and produced good cardinality estimates. The analysis of the plan, 10053 trace, and 10046 trace files done in-house suggested that the problem might relate in some way to an error in the handling of SQL Plan Directives to estimate cardinalities.

This was one of my “solve it in a couple of hours over the internet” assignments and I’d been sent a sample of the original query with the 10046 and 10053 trace files, and a modified version of the query that bypassed the problem, again including the 10046 and 10053 trace files, with a request to explain the problem and produce a simple test case to pass to Oracle support.

The first thing I noticed was that there was something very strange about the execution plan. Here’s the query and plan in from my simplified model, showing the same anomaly:


select  /*+ no_expand */
        count(*)
from    t1, t2
where
        t2.shipment_order_id = t1.order_id
and     (t1.id, t2.v1) in ( (5000, 98), (5000, 99))
;

-------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |       |       |       |   331 (100)|          |
|   1 |  SORT AGGREGATE                       |       |     1 |    19 |       |            |          |
|*  2 |   HASH JOIN                           |       |     1 |    19 |  2056K|   331   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                  | T2    |   100K|   878K|       |   219   (3)| 00:00:01 |
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   100K|   976K|       |     2   (0)| 00:00:01 |
|   5 |     BITMAP CONVERSION TO ROWIDS       |       |       |       |       |            |          |
|   6 |      BITMAP OR                        |       |       |       |       |            |          |
|   7 |       BITMAP CONVERSION FROM ROWIDS   |       |       |       |       |            |          |
|*  8 |        INDEX RANGE SCAN               | T1_PK |       |       |       |     1   (0)| 00:00:01 |
|   9 |       BITMAP CONVERSION FROM ROWIDS   |       |       |       |       |            |          |
|* 10 |        INDEX RANGE SCAN               | T1_PK |       |       |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")
       filter((
                  (TO_NUMBER("T2"."V1")=98 AND "T1"."ID"=5000) 
               OR (TO_NUMBER("T2"."V1")=99 AND "T1"."ID"=5000)
       ))
   8 - access("T1"."ID"=5000)
  10 - access("T1"."ID"=5000)

Before going on I need to remind you that this is modelling a production problem: I had to use a hint to block a transformation that the optimizer wanted to do with my data set and statistics, I've got a deliberate type-mismatch in the data definitions, and there's a simple rewrite of the SQL that would ensure that Oracle does something completely different.

The thing that caught my eye was the use of the bitmap transformation (operations 5,7,9) using exactly the same index range scan twice (operations 8,10). Furthermore, though not visible in the plan, the index in question was (as the name suggests) the primary key index on the table and it was a single column index – and “primary key = constant” should produce an “index unique scan” not a range scan.

Once you’ve added in the fact that operations 8 and 10 are the same “primary key = constant” predicates, you can also pick up on the fact that the cardinality calculation for the table access to table t1 can’t possibly produce more than one row – but it’s reporting a cardinality estimate of 100K rows (which happens to be the number of rows in the table.)

As a final point, you can see that there are no “Notes” about Dynamic Statistics or SQL Directives – this particular issue is not caused by anything to do with 12c sampling. In fact, having created the model, I ran it on 11.2.0.4 and got the same strange bitmap conversion and cardinality estimate. In the case of the client, the first pass the optimizer took went through exactly the same sort of process and produced a plan which was (probably) appropriate for a query where the driving table was going to produce (in their case) an estimated 4 million rows – but not appropriate for the actual 1 row that should have been identified.

In my example, if I allowed concatenation (i.e. removed the no_expand hint) I got the following plan:


------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |       |       |       |     8 (100)|          |
|   1 |  SORT AGGREGATE                        |       |     1 |    19 |            |          |
|   2 |   CONCATENATION                        |       |       |       |            |          |
|   3 |    NESTED LOOPS                        |       |     1 |    19 |     4   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID        | T1    |     1 |    10 |     2   (0)| 00:00:01 |
|*  5 |      INDEX UNIQUE SCAN                 | T1_PK |     1 |       |     1   (0)| 00:00:01 |
|*  6 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |     9 |     2   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
|   8 |    NESTED LOOPS                        |       |     1 |    19 |     4   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID        | T1    |     1 |    10 |     2   (0)| 00:00:01 |
|* 10 |      INDEX UNIQUE SCAN                 | T1_PK |     1 |       |     1   (0)| 00:00:01 |
|* 11 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |     9 |     2   (0)| 00:00:01 |
|* 12 |      INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1"."ID"=5000)
   6 - filter(TO_NUMBER("T2"."V1")=99)
   7 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")
  10 - access("T1"."ID"=5000)
  11 - filter((TO_NUMBER("T2"."V1")=98 AND (LNNVL(TO_NUMBER("T2"."V1")=99) OR
              LNNVL("T1"."ID"=5000))))
  12 - access("T2"."SHIPMENT_ORDER_ID"="T1"."ORDER_ID")

This is a much more appropriate plan – and similar to the type of plan the client saw when they eliminated the type-mismatch problem. (I got a completely different plan when I used character values ’98’ and ’99’ in the in-list, or when I used a numeric column with numeric literals.)
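For reference, the character-literal version of the in-list (matching the varchar2 type of v1) is simply:

and     (t1.id, t2.v1) in ( (5000, '98'), (5000, '99'))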

Examining my 10053 trace file I found the following:

  • In the BASE STATISTICAL INFORMATION, the optimizer had picked up column statistics about the order_id column, but not about the id column in the in-list – this explained why the cardinality estimate was 100K: Oracle had “lost” the predicate.
  • In the “SINGLE TABLE ACCESS PATH”, the optimizer had acquired the statistics about the id column and calculated the cost of using the t1_pk index to access the table for a single key (AllEqUnique), then calculated the cost of doing a bitmap conversion twice (remember we have two entries in the in-list – it looks like the optimizer has “rediscovered” the predicate). But it had still kept the table cardinality of 4M.

After coming up with a bad plan thanks to this basic cardinality error, the 10053 trace file for the client’s query then went on to consider or-expansion (concatenation). Looking at this part of their trace file I could see that the BASE STATISTICAL INFORMATION now included the columns relevant to the in-list and the SINGLE TABLE ACCESS PATH cardinalities were suitable. Moreover when we got to the GENERAL PLANS the join to the second table in the join order showed a very sensible cost and cardinality – unfortunately, having been sensible up to that point, the optimizer then decided that an SQL Plan Directive should be used to generate a dynamic sampling query to check the join cardinality and the generated query again “lost” the in-list predicate, resulting in a “corrected” cardinality estimate of 6M instead of a correct cardinality estimate of 1. As usual, this massive over-estimate resulted in Oracle picking the wrong join method with a huge cost for the final join in the client’s query – so the optimizer discarded the or-expansion transformation and ran with the bad bitmap/hash join plan.

Bottom line for the client – we may have seen the same “lose the predicate” bug appearing in two different ways, or we may have seen two different “lose the predicate” bugs – either way a massive over-estimate due to “lost” predicates during cardinality calculations resulted in Oracle picking a very bad plan.

Footnote:

If you want to do further testing on the model, here’s the code to generate the data:


create table t1
nologging
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4
)
select
        rownum                                  id,
        rownum                                  order_id,
        rpad('x',100)                           padding
from
        generator, generator
where
        rownum <= 1e5
;

execute dbms_stats.gather_table_stats(user,'t1')

alter table t1 modify order_id not null;
alter table t1 add constraint t1_pk primary key(id);


create table t2
nologging
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4
)
select
        rownum                                  shipment_order_id,
        mod(rownum-1,1000)                      n1,
        cast(mod(rownum-1,1000) as varchar2(6)) v1,
        rpad('x',100)                           padding
from
        generator, generator
where
        rownum <= 1e5
;

execute dbms_stats.gather_table_stats(user,'t2')

alter table t2 modify shipment_order_id not null;
create index t2_i1 on t2(shipment_order_id);

The interesting question now is WHY does Oracle lose the predicate – unfortunately my model may be too simplistic to allow us to work that out, but it might be sufficient to make it easy for an Oracle developer to see what’s going on and how best to address it. There is one bug on MoS (23343961) that might be related in some way, but I wasn’t convinced that the description was really close enough.

Update

This issue is now recorded on MoS as: Bug 24350407 : WRONG CARDINALITY ESTIMATION IN PRESENCE OF BITMAP OR

 


Links for 2016-07-25 [del.icio.us]

Categories: DBA Blogs

Getting started with Ansible – Download the PostgreSQL sources, compile and install

Yann Neuhaus - Mon, 2016-07-25 23:21

In the last post in this series we looked at how you can instruct Ansible to install packages on the operating system using the yum module and how you can create groups and users by using the group and the user modules. In this post we’ll look at how you can download the PostgreSQL sources, compile and then finally install the binaries by extending our existing playbook.

If you remember the last post our current Ansible playbook looks like this:

[ansible@ansiblecontrol ansible]$ pwd
/opt/ansible
[ansible@ansiblecontrol ansible]$ cat roles/postgresqldbserver/tasks/main.yml 
---
- name: Install PostgreSQL dependencies
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - openssl-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - tcl
   - tcl-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - ksh

- name: Add PostgreSQL operating system group
  group: name=postgressomegroup state=present

- name: Add PostgreSQL operating system user
  user: name=postgres comment="PostgreSQL binaries owner" group=postgres

Basically we have three tasks:

  • Install the required operating system packages
  • Create the PostgreSQL operating system group
  • Create the PostgreSQL operating system user

The next task we want Ansible to perform is downloading the PostgreSQL sources. In this example we'll download from the official PostgreSQL servers, but if you do not have connectivity to the outside world this can be a local server in your company's network as well. The module we'll use for that is the get_url module. All we need to do is to add the following lines to our playbook:

- name: Download the PostgreSQL 9.5.3 sources
  get_url: url=https://ftp.postgresql.org/pub/source/v9.5.3/postgresql-9.5.3.tar.bz2 dest=/var/tmp mode=0755

Once we execute the playbook again:

[ansible@ansiblecontrol ~]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
ok: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
ok: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

TASK [postgresqldbserver : Add PostgreSQL operating system group] **************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Add PostgreSQL operating system user] ***************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Download the PostgreSQL 9.5.3 sources] **************
changed: [192.168.22.171]
changed: [192.168.22.172]

PLAY RECAP *********************************************************************
192.168.22.171             : ok=5    changed=1    unreachable=0    failed=0   
192.168.22.172             : ok=5    changed=1    unreachable=0    failed=0   

The sources are available on the PostgreSQL hosts:

[root@ansiblepg1 ~] ls /var/tmp/post*
/var/tmp/postgresql-9.5.3.tar.bz2

Now we need to do some scripting, as there is no pre-defined module for installing PostgreSQL from source. Here is a very basic script to do that:

#!/usr/bin/bash
# Adjust these variables to match your environment
PGSOURCE="/var/tmp/postgresql-9.5.3.tar.bz2"
PGUSER="postgres"
PGGROUP="postgres"
PGHOME="/u01/app/postgres/product/95/db_3"
SEGSIZE=2
BLOCKSIZE=8
# Create the installation base directory and hand it over to the postgres user
mkdir -p /u01/app
chown ${PGUSER}:${PGGROUP} /u01/app
# Extract the sources, then configure, compile and install as the postgres user
su - ${PGUSER} -c "cd /var/tmp/; tar -axf ${PGSOURCE}"
su - ${PGUSER} -c "cd /var/tmp/postgresql-9.5.3; ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=\" dbi services build\""
su - ${PGUSER} -c "cd /var/tmp/postgresql-9.5.3; make world"
su - ${PGUSER} -c "cd /var/tmp/postgresql-9.5.3; make install"
su - ${PGUSER} -c "cd /var/tmp/postgresql-9.5.3/contrib; make install"
# Clean up the sources and the extracted directory
rm -rf /var/tmp/postgresql*

Place a file with this content under the files directory of our role:

roles/postgresqldbserver/files/install_pg953.sh

There is another Ansible module, called copy, which we can now use to copy the file from our roles directory to the target servers. All we need to do is to add the following lines to our playbook:

- name: Copy PostgreSQL install script to targets
  copy: src=install_pg953.sh dest=/var/tmp/install_pg953.sh owner=postgres group=postgres mode="u=rwx"

Once we execute the playbook, the file is distributed to all targets (here the check on the first node):

[root@ansiblepg1 ~] ls /var/tmp/install*
/var/tmp/install_pg953.sh

The only thing left to do to get PostgreSQL installed on the target systems is to execute this file. How can we do that? Very easily: by using the shell module. Add these lines to the playbook:

- name: Compile and install PostgreSQL
  shell: /var/tmp/install_pg953.sh >> /var/tmp/install_pg_log
  args:
    executable: /usr/bin/bash

… re-execute the playbook:

[ansible@ansiblecontrol ansible]$ ansible-playbook -i /opt/ansible/development /opt/ansible/roles/postgresqldbserver/site.yml

PLAY [postgresql-servers] ******************************************************

TASK [setup] *******************************************************************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Install PostgreSQL dependencies] ********************
ok: [192.168.22.171] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])
ok: [192.168.22.172] => (item=[u'gcc', u'openldap-devel', u'python-devel', u'readline-devel', u'openssl-devel', u'redhat-lsb', u'bison', u'flex', u'perl-ExtUtils-Embed', u'zlib-devel', u'crypto-utils', u'openssl-devel', u'pam-devel', u'libxml2-devel', u'libxslt-devel', u'tcl', u'tcl-devel', u'openssh-clients', u'bzip2', u'net-tools', u'wget', u'screen', u'ksh'])

TASK [postgresqldbserver : Add PostgreSQL operating system group] **************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Add PostgreSQL operating system user] ***************
ok: [192.168.22.172]
ok: [192.168.22.171]

TASK [postgresqldbserver : Download the PostgreSQL 9.5.3 sources] **************
changed: [192.168.22.172]
changed: [192.168.22.171]

TASK [postgresqldbserver : Copy PostgreSQL install script to targets] **********
ok: [192.168.22.171]
ok: [192.168.22.172]

TASK [postgresqldbserver : Compile and install PostgreSQL] *********************
changed: [192.168.22.171]
changed: [192.168.22.172]

PLAY RECAP *********************************************************************
192.168.22.171             : ok=7    changed=2    unreachable=0    failed=0   
192.168.22.172             : ok=7    changed=2    unreachable=0    failed=0   

… and we are done. Just to prove it:

[root@ansiblepg1 ~] /u01/app/postgres/product/95/db_3/bin/psql --version
psql (PostgreSQL) 9.5.3 dbi services build
[root@ansiblepg2 ~] /u01/app/postgres/product/95/db_3/bin/psql --version
psql (PostgreSQL) 9.5.3 dbi services build

PostgreSQL is installed, and we are ready to create a new database cluster (which is the topic of the next post).

Btw: The two steps of copying the script to the targets and then executing it can be combined into one step by using the script module.
Btw2: Of course you could do the same for Oracle, MySQL, MongoDB, Cassandra, …
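A minimal sketch of that script-module variant (hypothetical; any output redirection would then have to move into the script itself):

- name: Compile and install PostgreSQL in one step
  script: install_pg953.sh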

 

The post Getting started with Ansible – Download the PostgreSQL sources, compile and install appeared first on Blog dbi services.

Regarding Listener

Tom Kyte - Mon, 2016-07-25 16:06
Hello sir, I am currently working at Clover Infotech as an Oracle DBA. Actually, I wanted to know whether there is any hard limit on the number of listeners in Oracle. I mean, how many listeners can we configure for a single database, and also how many con...
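For context, each additional listener is simply another entry in listener.ora with its own name and port; a hypothetical sketch with two listeners on one host (host name and ports are illustrative):

LISTENER1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
  )

LISTENER2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1522))
  )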
Categories: DBA Blogs

Recover Catalog Manager

Tom Kyte - Mon, 2016-07-25 16:06
I understand that the Recovery Catalog Manager has the metadata of the registered database. If the registered database has been recovered by doing resetlogs (once we reset the logs we cannot use the old backups using the control file of the database as...
Categories: DBA Blogs

Difference between procedure and stored procedure.

Tom Kyte - Mon, 2016-07-25 16:06
Hi Tom, I want to know: what is the difference between a procedure and a stored procedure? Thanks, Deekshith.
Categories: DBA Blogs

Interval partitioning

Tom Kyte - Mon, 2016-07-25 16:06
Hi Tom, I am trying to create a partitioned table so that a date-wise partition is created on inserting a new row for the release_date column. But please note that the release_date column has a NUMBER data type (as per the design) and people want to create...
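A minimal sketch of what numeric interval partitioning looks like (assuming, hypothetically, that release_date stores days as YYYYMMDD numbers; with interval (1), Oracle creates a partition on demand for each distinct day value actually inserted):

create table releases (
  release_date number(8) not null
)
partition by range (release_date)
interval (1)
( partition p_initial values less than (20160101) );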
Categories: DBA Blogs

How to use case statement inside xml table

Tom Kyte - Mon, 2016-07-25 16:06
Hi, I have a table like below: create table emptable ( id number primary key, emps varchar2(500) ); with this data in it: insert into emptable values ( 1, '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><employee emp_no="...
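A minimal sketch of combining CASE with XMLTABLE (the /employee element and emp_no attribute come from the question; the banding logic is purely illustrative):

select t.id,
       x.emp_no,
       case when x.emp_no < 100 then 'LOW' else 'HIGH' end as emp_band
from   emptable t,
       xmltable('/employee'
                passing xmltype(t.emps)
                columns emp_no number path '@emp_no') x;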
Categories: DBA Blogs
