Feed aggregator

ORA-01031 in view DML with dual reference

Tom Kyte - Mon, 2019-01-07 12:26
At a customer I work for there is a design standard that each DB-view should start with a so-called 'comments' block. This is to ensure that comments are stored in the data dictionary in the DB. create or replace view xxxx as with comments as ( ...
Categories: DBA Blogs

What are tables with MDXT in the name and can I delete them?

Tom Kyte - Mon, 2019-01-07 12:26
Hi, I have a datawarehouse with 5000 tables in it. <code> select count(*) from tab where TABTYPE = 'TABLE' </code> In this datawarehouse there are also tables with MDXT or MDRT or BIN in the tablename, more than 4600!! 400 actually use 4...
Categories: DBA Blogs

Replicating Data to another database in the same server

Tom Kyte - Mon, 2019-01-07 12:26
Hello, Ask Tom Team. Happy New Year for all of you. I have two single instance databases (A and B) in the same server. I need to replicate from A to B (some columns of some tables) so that customers can view the data through a web applicat...
Categories: DBA Blogs

REMAP_TABLESPACE not working during impdp

Tom Kyte - Mon, 2019-01-07 12:26
Hello, I have a Pluggable database OTB1 with multiple schemas with different tablespaces. Tablespaces names are in lowercase. (SCHEMAS AND TABLESPACES NAMES ARE SAME, BUT ONE IN LOWER CASE AND OTHER IN UPPER CASE) <code>SQL> select username fro...
Categories: DBA Blogs

Creating partition on json column in Oracle Database

Tom Kyte - Mon, 2019-01-07 12:26
Hi, Is there a way to create partition on oracle table based on values or keys from JSON column?
Categories: DBA Blogs

Create sequences inside procedure

Tom Kyte - Mon, 2019-01-07 12:26
Hi, Is it possible to create Sequences within a procedure using execute immediate dynamic SQL, just as we can create other objects like table/index etc ? If not what is the workaround for creating sequences within a procedure by executing the ...
Categories: DBA Blogs

Global Footwear Retailer CCC Group Taps Oracle to Boost Profitability

Oracle Press Releases - Mon, 2019-01-07 07:00
Press Release
Global Footwear Retailer CCC Group Taps Oracle to Boost Profitability Fastest Growing Footwear Retailer in Europe Implements Oracle Retail to Improve Gross Margin by 4.2 Percent

Redwood Shores, Calif.—Jan 7, 2019

Polish Footwear Retailer CCC Group is propelling its digital transformation with retail technology from Oracle. CCC Group is one of the fastest growing omnichannel footwear retailers in Europe, selling over 50 million pairs of shoes annually in 1200 stores and across 23 countries online. With a business that is quickly scaling, CCC recognized the need to optimize margins and minimize unnecessary markdowns on an inventory spanning 45,000 products. Using Oracle Retail Merchandising and Planning and Optimization solutions, the company is able to maximize profitability that will fuel expansion.

Using Oracle Retail Clearance Optimization, CCC Group quickly experienced a 4.2 percent increase in gross margin and a 7.1 percent increase in inventory sell-through. The technology provides CCC Group with a new business process that provides its analysts with fact-driven markdown recommendations and exception reporting based on inventory style, color and location.

With the support of Oracle PartnerNetwork Gold Level Partner Pronos, CCC Group began implementing additional solutions including Oracle Retail Merchandise Financial Planning and Assortment Planning followed by allocation, replenishment and merchandising. Combined with Oracle Retail Clearance Optimization, CCC Group can now reconcile the impact of discounts on inventory and automatically gain approval on pricing with Oracle Retail Merchandise Financial Planning.

“Oracle Retail Clearance Optimization provides us with the flexibility to adjust to our business needs and markets while providing a stable, intelligent and scalable platform. We have shifted from a 100 percent manual process to a solution that runs millions of combinations to come up with the best recommendations for our experts,” said Piotr Pawłowski, Chief Information Officer, The CCC Group. “Pronos has provided great local, in-language implementation experience with global expertise. We look forward to continuing our digital transformation with the completion of our Oracle Retail Merchandise Operations Management implementation.”

“In our recent consumer research, we discovered that 78 percent of EMEA consumers indicated that competitive pricing and promotions were essential to their shopping journey,” said Mike Webster, senior vice president and general manager, Oracle Retail. “With this strategic investment in Oracle technology, CCC Group is able to maximize revenue opportunities while implementing strategic promotions for customers that drive brand differentiation and long-term loyalty.”

CCC Group is continuing its transformation by implementing Oracle Retail Merchandise Operations Management, supported by Accenture, to establish a single view of inventory that drives greater efficiency across omnichannel retail operations. These investments will empower planners, merchants, pricing experts and operations analysts to make more strategic decisions with the adoption of industry best practices and modern technology.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Using Ansible to bring up a three node Patroni cluster in minutes

Yann Neuhaus - Mon, 2019-01-07 01:48

Automation is key today; nobody wants to do the same tasks over and over again. Cloud without automation is not even possible. There are several tools around that help with automation and one of the most popular is Ansible. We already have several posts about Ansible on our blog platform but this one will be rather long. Setting up highly available PostgreSQL architectures is our daily business and we too try to automate as much as possible. We do not automate only to save time; even more importantly, we automate to avoid human errors. What we will share with this post is how you could use Ansible to bring up a three node Patroni cluster from scratch.

Disclaimer: Please see what we show here as a kind of template. You might need to adjust several bits to fit your environment; other bits can surely be solved more elegantly by using advanced features of Ansible. Anyway, using this template you should be able to bring up one PostgreSQL master instance, two replicas, Patroni and HAProxy in minutes on CentOS 7. This should work the same for Red Hat 7, but if you want to do the same on Debian-based systems or SUSE you will certainly need to adjust some of the Ansible tasks. This post does not explain how Ansible works nor does it explain what Patroni or HAProxy are.

The starting point is a CentOS minimal installation with just the postgres user and group created and sudo permissions for postgres. That’s it:

[postgres@patroni1 ~]$ id -a
uid=1000(postgres) gid=1000(postgres) groups=1000(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[postgres@patroni1 ~]$ lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.5.1804 (Core) 
Release:	7.5.1804
Codename:	Core
[postgres@patroni1 ~]$ sudo cat /etc/sudoers | grep postgres
postgres	ALL=(ALL)	NOPASSWD: ALL
[postgres@patroni1 ~]$ 
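
In case your machines do not yet have such a user, a minimal sketch for creating it looks like this (run as root on each node; the UID/GID of 1000 simply matches the output above and is an assumption of this sketch, not a requirement):

groupadd -g 1000 postgres
useradd -g postgres -u 1000 -m postgres
# passwordless sudo, matching the sudoers entry shown above
echo "postgres ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/postgres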

This is the Ansible directory structure on my workstation:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ tree
.
├── _commands.sh
├── _init_dirs.sh
├── patroni
└── roles
    ├── common
    │   ├── files
    │   │   └── PostgreSQL-DMK-17-09.4.zip
    │   ├── handlers
    │   ├── meta
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   │   └── compile.sh.j2
    │   └── vars
    └── patroni
        ├── files
        │   ├── etcd.service
        │   └── patroni.service
        ├── handlers
        ├── meta
        ├── site.retry
        ├── site.yml
        ├── tasks
        │   └── main.yml
        ├── templates
        │   ├── etcd.conf.j2
        │   ├── haproxy.cfg.j2
        │   ├── hosts.j2
        │   ├── keepalived.conf.j2
        │   └── patroni.yml.j2
        └── vars

You can use the _init_dirs.sh script to create that but it is pretty much the Ansible default anyway:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat _init_dirs.sh 
#!/bin/bash
touch patroni
mkdir roles/
mkdir roles/common
mkdir roles/common/tasks
mkdir roles/common/handlers
mkdir roles/common/templates
mkdir roles/common/files
mkdir roles/common/vars
mkdir roles/common/meta
mkdir roles/patroni
mkdir roles/patroni/tasks
mkdir roles/patroni/handlers
mkdir roles/patroni/templates
mkdir roles/patroni/files
mkdir roles/patroni/vars
mkdir roles/patroni/meta

What you always need with Ansible is the inventory and in our case it looks like this:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat patroni 
[patroni-servers]
192.168.22.240 keepalived_role=MASTER keepalived_priority=102 ansible_hostname=patroni1 ansible_hostname_fqdn=patroni1.it.dbi-services.com
192.168.22.241 keepalived_role=SLAVE keepalived_priority=101 ansible_hostname=patroni2 ansible_hostname_fqdn=patroni2.it.dbi-services.com
192.168.22.242 keepalived_role=SLAVE keepalived_priority=100 ansible_hostname=patroni3 ansible_hostname_fqdn=patroni3.it.dbi-services.com

[patroni-servers:vars]
postgresql_version=11.1
postgresql_major_version=11
dmk_postgresql_version=11/db_1
etcd_version=3.3.10
postgres_user=postgres
postgres_group=postgres
dmk_version=17-09.4
cluster_name=PG1
blank=' '
virtual_ip=192.168.22.245

As you can see there are three machines and several variables defined. The *dmk* stuff is for our management kit; just ignore/delete that for your environment.
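
Before doing anything else it is worth verifying that Ansible can actually reach all three machines. A quick ad-hoc ping (run from the ansible directory shown above, assuming passwordless SSH is already set up for the postgres user) should return "pong" from every host:

ansible -i patroni patroni-servers -m ping -u postgres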

We have two roles, one common and one for Patroni. The common role is responsible for doing the common stuff and can be used for single instance PostgreSQL deployments as well, so let's start with this one:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/common/tasks/main.yml 
- name: Install all dependencies for PostgreSQL 
  yum: name={{item}} state=present
  with_items:
   - gcc
   - openldap-devel
   - python-devel
   - readline-devel
   - redhat-lsb
   - bison
   - flex
   - perl-ExtUtils-Embed
   - zlib-devel
   - crypto-utils
   - openssl-devel
   - pam-devel
   - libxml2-devel
   - libxslt-devel
   - openssh-clients
   - bzip2
   - net-tools
   - wget
   - screen
   - unzip
   - sysstat
   - xorg-x11-xauth
   - systemd-devel
   - bash-completion

- name: Remove iwl packages
  yum: name={{item}} state=removed
  with_items:
   - iwl*

- name: upgrade all packages
  yum:
    name: '*'
    state: latest

- file:
    path: /u01/app/{{ postgres_user }}/local
    state: directory
    mode: 0700
    recurse: yes
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u01
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    mode: 0700

- file:
    path: /u01/app
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- file:
    path: /u01/app/{{ postgres_user }}
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- file:
    path: /u02/pgdata/
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u02/pgdata/{{ postgresql_major_version }}
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /u99/pgdata/
    state: directory
    mode: 0700
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- file:
    path: /etc/pgtab
    state: touch
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: check if PostgreSQL source code exists
  stat: 
    path: /home/{{ postgres_user }}/source.tar.bz2
  register: source_available

- name: Download the PostgreSQL source code if it is not already there
  get_url:
    url: https://ftp.postgresql.org/pub/source/v{{ postgresql_version }}/postgresql-{{ postgresql_version }}.tar.bz2
    dest: /home/{{ postgres_user }}/source.tar.bz2
    mode: 0775
  when: source_available.stat.exists == false

- name: Check if PostgreSQL is already installed
  stat:
    path: /u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}/bin/postgres
  register: postgresql_is_installed

- name: extract the sources when PostgreSQL is not already installed
  shell: cd /home/{{ postgres_user }}; tar -axf source.tar.bz2
  become: yes
  become_user: "{{ postgres_user }}"
  when: postgresql_is_installed.stat.exists == false

- template:
    src: compile.sh.j2
    dest: /home/{{ postgres_user }}/postgresql-{{ postgresql_version }}/compile.sh
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- name: Install PostgreSQL from source code
  shell: cd /home/{{ postgres_user }}/postgresql-{{ postgresql_version }}; ./compile.sh
  become: yes
  become_user: "{{ postgres_user }}"
  when: postgresql_is_installed.stat.exists == false

- name: check if DMK for PostgreSQL source code exists
  stat:
    path: /u01/app/{{ postgres_user }}/local/PostgreSQL-DMK-{{ dmk_version }}.zip
  register: dmk_source_available

- name: check if DMK for PostgreSQL is extracted
  stat:
    path: /u01/app/{{ postgres_user }}/local/dmk/bin/dmk.bash
  register: dmk_extracted

- name: Copy DMK source distribution
  copy:
    src: PostgreSQL-DMK-{{ dmk_version }}.zip
    dest: /u01/app/{{ postgres_user }}/local/PostgreSQL-DMK-{{ dmk_version }}.zip
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700
  when: dmk_source_available.stat.exists == false

- name: extract DMK
  shell: cd /u01/app/{{ postgres_user }}/local; unzip PostgreSQL-DMK-{{ dmk_version }}.zip
  become: yes
  become_user: "{{ postgres_user }}"
  when: dmk_extracted.stat.exists == false

- name: check if DMK is installed
  stat:
    path: /home/{{ postgres_user }}/.DMK_HOME
  register: dmk_installed

- lineinfile:
    path: /etc/pgtab
    line: 'pg{{ postgresql_version }}:/u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}:dummy:9999:D'
    create: no
  when: dmk_installed.stat.exists == false

- name: Execute DMK for the first time
  shell: /u01/app/{{ postgres_user }}/local/dmk/bin/dmk.bash; cat /u01/app/{{ postgres_user }}/local/dmk/templates/profile/dmk.postgres.profile >> /home/{{ postgres_user }}/.bash_profile
  become: yes
  become_user: "{{ postgres_user }}"
  when: dmk_installed.stat.exists == false

This should be more or less self-explanatory, so we will only summarize what it does:

  • Install required packages for compiling PostgreSQL from source
  • Remove the iwl* packages
  • Update all packages to the latest release
  • Create the directory structure
  • Download the PostgreSQL source code, compile and install
  • Install our DMK

As said, this role can be included in any other PostgreSQL setup as it only does basic stuff. There is one template used here, which is compile.sh.j2:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/common/templates/compile.sh.j2 
PGHOME=/u01/app/postgres/product/{{ dmk_postgresql_version }}
SEGSIZE=2
BLOCKSIZE=8

./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-systemd
make -j 2 all
make install
cd contrib
make -j 2 install

This one is our standard way of bringing PostgreSQL onto the system and the only parameter is the PostgreSQL version we use for the directory name. No magic, simple stuff and that’s it for the common role.
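
Once the playbook has run you can quickly confirm that the binaries ended up where expected; the path below simply combines the postgres_user and dmk_postgresql_version variables from the inventory:

/u01/app/postgres/product/11/db_1/bin/postgres --version
postgres (PostgreSQL) 11.1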

Coming to the Patroni role, here it is:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/tasks/main.yml 
---

- name: check if epel rpm already is there
  stat:
    path: /root/epel-release-latest-7.noarch.rpm
  register: epel_rpm_available

- name: Download the EPEL rpm
  get_url:
    url: http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    dest: /root/epel-release-latest-7.noarch.rpm
    mode: 0440
  when: epel_rpm_available.stat.exists == false

- name: check if epel repository is already installed
  stat:
    path: /etc/yum.repos.d/epel.repo
  register: epel_installed


- name: Install the EPEL rpm
  shell: yum localinstall -y /root/epel-release-latest-7.noarch.rpm
  args: 
    warn: false
  when: epel_installed.stat.exists == false

- name: Install all dependencies for Patroni
  yum: name={{item}} state=present
  with_items:
   - python-pip
   - PyYAML
   - bind-utils
   - keepalived
   - haproxy

# create the hosts file
- template:
    src: hosts.j2
    dest: /etc/hosts
    owner: root
    group: root
    mode: 0644

- name: Create the file to load the watchdog module
  file:
    path: /etc/modules-load.d/softdog.conf
    state: touch

- name: Add the watchdog module
  shell: modprobe softdog

- name: Change ownership of the watchdog device
  shell: chown postgres /dev/watchdog
  args:
    warn: false

- name: check if etcd sources already exist
  stat:
    path: /home/{{ postgres_user }}/etcd-v{{ etcd_version }}-linux-amd64.tar.gz
  register: etcd_source_available

- name: Download etcd
  get_url:
    url: https://github.com/etcd-io/etcd/releases/download/v{{ etcd_version }}/etcd-v{{ etcd_version }}-linux-amd64.tar.gz
    dest: /home/{{ postgres_user }}/etcd-v{{ etcd_version }}-linux-amd64.tar.gz
    mode: 0755
  when: etcd_source_available.stat.exists == false

- name: check if etcd is available in DMK
  stat:
    path: /u01/app/{{ postgres_user }}/local/dmk/bin/etcd
  register: etcd_copied_to_dmk

- name: extract etcd
  shell: cd /home/{{ postgres_user }}/; tar -axf etcd-v{{ etcd_version }}-linux-amd64.tar.gz
  become: yes
  become_user: "{{ postgres_user }}"
  when: etcd_copied_to_dmk.stat.exists == false

- name: copy etcd to DMK
  shell: cp /home/{{ postgres_user }}/etcd-v{{ etcd_version }}-linux-amd64/etcd* /u01/app/{{ postgres_user }}/local/dmk/bin/
  become: yes
  become_user: "{{ postgres_user }}"
  when: etcd_copied_to_dmk.stat.exists == false

- template:
    src: etcd.conf.j2
    dest: /u01/app/{{ postgres_user }}/local/dmk/etc/etcd.conf
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0700

- name: Copy the etcd systemd service file
  copy:
    src: etcd.service
    dest: /etc/systemd/system/etcd.service
    owner: root
    group: root
    mode: 0755

- file:
    path: /u02/pgdata/etcd
    state: directory
    mode: 0700
    recurse: yes
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"

- name: force systemd to reread configs
  systemd:
    daemon_reload: yes

- name: Enable the systemd etcd service
  systemd:
    name: etcd
    enabled: yes

- name: Start the systemd etcd service
  shell: systemctl start etcd.service

- name: check if patroni is already installed
  stat:
    path: /home/{{ postgres_user }}/.local/bin/patroni
  register: patroni_is_installed

- name: install and upgrade pip
  shell: pip install --upgrade pip
  when: patroni_is_installed.stat.exists == false
 
- name: install and upgrade setuptools
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --upgrade --user setuptools
  when: patroni_is_installed.stat.exists == false

- name: install psycopg2-binary
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --user psycopg2-binary
  when: patroni_is_installed.stat.exists == false

- name: install patroni
  become: yes
  become_user: "{{ postgres_user }}"
  shell: pip install --user patroni[etcd]
  when: patroni_is_installed.stat.exists == false

- file:
    src: /home/{{ postgres_user }}/.local/bin/patroni
    dest: /u01/app/{{ postgres_user }}/local/dmk/bin/patroni
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    state: link

- file:
    src: /home/{{ postgres_user }}/.local/bin/patronictl
    dest: /u01/app/{{ postgres_user }}/local/dmk/bin/patronictl
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
    state: link

- template:
    src: patroni.yml.j2
    dest: /u01/app/{{ postgres_user }}/local/dmk/etc/patroni.yml
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: Copy the patroni systemd service file
  copy:
    src: patroni.service
    dest: /etc/systemd/system/patroni.service
    owner: root
    group: root
    mode: 0755

- name: force systemd to reread configs 
  systemd:
    daemon_reload: yes

- name: Enable the systemd patroni service
  systemd:
    name: patroni
    enabled: yes

# add the instance to /etc/pgtab so DMK is aware of it
- lineinfile:
    path: /etc/pgtab
    line: '{{ cluster_name }}:/u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}:/u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}:5432:N'

- template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600

- name: Enable the systemd haproxy service
  systemd:
    name: haproxy
    enabled: yes

# we need to set this so haproxy can be started
- name: Set selinux boolean for haproxy
  shell: setsebool -P haproxy_connect_any=1

- template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
    owner: "{{ postgres_user }}"
    group: "{{ postgres_group }}"
    mode: 0600
  with_items:
    - { role: "{{ hostvars[inventory_hostname].keepalived_role }}" , priority: "{{ hostvars[inventory_hostname].keepalived_priority }}" }

What it does:

  • Install the Extra Packages for Enterprise Linux (EPEL)
  • Install the dependencies for Patroni, HAProxy
  • Create the /etc/hosts file
  • Enable the watchdog service
  • Download and install etcd
  • Integrate etcd into systemd
  • Install Patroni, create the configuration files and integrate it into systemd
  • Install and configure HAProxy

This role uses several templates. The first one is used to create /etc/hosts:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/hosts.j2 
#jinja2: trim_blocks:False
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

{%- for h in ansible_play_hosts %}
{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }} {{ hostvars[h]['ansible_hostname'] }}
{% endfor %}

The second one is used to create the etcd configuration:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/etcd.conf.j2 
name: {{ ansible_hostname }}
data-dir: /u02/pgdata/etcd
initial-advertise-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2380
listen-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2380
listen-client-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2379,http://localhost:2379
advertise-client-urls: http://{{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:2379
initial-cluster:{{ blank }} {%- for h in ansible_play_hosts %}
{{ hostvars[h]['ansible_hostname'] }}=http://{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}:2380{% if not loop.last %},{% endif %}
{% endfor %}
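
Once the etcd service is up on all three nodes you can check cluster health with the etcdctl binary the role copied into DMK (etcd 3.3 still defaults to the v2 API, where this command is available):

/u01/app/postgres/local/dmk/bin/etcdctl cluster-health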

The third one creates the Patroni configuration:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/patroni.yml.j2 
scope: {{ cluster_name }}
#namespace: /service/
name: {{ ansible_hostname }}

restapi:
  listen: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:8008
  connect_address: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  host: 127.0.0.1:2379

bootstrap:
  # this section will be written into Etcd:///config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        hot_standby: "on"
        wal_keep_segments: 8
        max_replication_slots: 10
        wal_log_hints: "on"
        listen_addresses: '*'
        port: 5432
        logging_collector: 'on'
        log_truncate_on_rotation: 'on'
        log_filename: 'postgresql-%a.log'
        log_rotation_age: '1440'
        log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
        log_directory: 'pg_log'
        log_min_messages: 'WARNING'
        log_autovacuum_min_duration: '60s'
        log_min_error_statement: 'NOTICE'
        log_min_duration_statement: '30s'
        log_checkpoints: 'on'
        log_statement: 'ddl'
        log_lock_waits: 'on'
        log_temp_files: '0'
        log_timezone: 'Europe/Zurich'
        log_connections: 'on'
        log_disconnections: 'on'
        log_duration: 'on'
        client_min_messages: 'WARNING'
        wal_level: 'replica'
        hot_standby_feedback: 'on'
        shared_buffers: '128MB'
        work_mem: '8MB'
        effective_cache_size: '512MB'
        maintenance_work_mem: '64MB'
        wal_compression: 'off'
        max_wal_senders: '20'
        shared_preload_libraries: 'pg_stat_statements'
        autovacuum_max_workers: '6'
        autovacuum_vacuum_scale_factor: '0.1'
        autovacuum_vacuum_threshold: '50'
        archive_mode: 'on'
        archive_command: '/bin/true'
        wal_log_hints: 'on'
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication replicator 192.168.22.0/24 md5
  - host all all 192.168.22.0/24 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing the new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb
    replicator:
      password: postgres
      options:
        - superuser

postgresql:
  listen: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:5432
  connect_address: {{ hostvars[inventory_hostname]['ansible_enp0s8']['ipv4']['address'] }}:5432
  data_dir: /u02/pgdata/{{ postgresql_major_version }}/{{ cluster_name }}/
  bin_dir: /u01/app/{{ postgres_user }}/product/{{ dmk_postgresql_version }}/bin
#  config_dir:
  pgpass: /u01/app/{{ postgres_user }}/local/dmk/etc/pgpass0
  authentication:
    replication:
      username: replicator
      password: postgres
    superuser:
      username: postgres
      password: postgres
  parameters:
    unix_socket_directories: '/tmp'

watchdog:
  mode: automatic # Allowed values: off, automatic, required
  device: /dev/watchdog
  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

Then we have the configuration template for HAProxy:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/haproxy.cfg.j2 
#jinja2: trim_blocks:False
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /
    # stats auth haproxy:haproxy
    # stats refresh 10s

listen {{ cluster_name }}
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    {%- for h in ansible_play_hosts %}
    server postgresql_{{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}_5432 {{ hostvars[h]['ansible_enp0s8']['ipv4']['address'] }}:5432 maxconn 100 check port 8008
    {% endfor %}
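
The health check on port 8008 works because Patroni's REST API answers HTTP 200 on the leader and 503 on replicas, so HAProxy only ever routes to the current master. You can verify this by hand once the cluster is up (IP addresses are the ones from the inventory; here 192.168.22.240 happens to be the leader):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.22.240:8008/
200
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.22.241:8008/
503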

Finally the template for keepalived:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/templates/keepalived.conf.j2 
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface enp0s8
    state {{ item.role }} 
    virtual_router_id 51
    priority {{ item.priority }}
    virtual_ipaddress {
      {{ virtual_ip }}
  }
  track_script {
    chk_haproxy
  }
}
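
Once keepalived is enabled and started (the role only writes the configuration file), the node with the MASTER role should carry the virtual IP from the inventory, which is easy to verify:

sudo systemctl enable keepalived
sudo systemctl start keepalived
ip addr show enp0s8 | grep 192.168.22.245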

What is left are the systemd service files. The one for etcd:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/files/etcd.service 
#
# systemd integration for etcd 
# Put this file under /etc/systemd/system/etcd.service
#     then: systemctl daemon-reload
#     then: systemctl list-unit-files | grep etcd
#     then: systemctl enable etcd.service
#

[Unit]
Description=dbi services etcd service
After=network.target

[Service]
User=postgres
Type=notify
ExecStart=/u01/app/postgres/local/dmk/bin/etcd --config-file /u01/app/postgres/local/dmk/etc/etcd.conf
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

For Patroni:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/files/patroni.service 
#
# systemd integration for patroni 
# Put this file under /etc/systemd/system/patroni.service
#     then: systemctl daemon-reload
#     then: systemctl list-unit-files | grep patroni
#     then: systemctl enable patroni.service
#

[Unit]
Description=dbi services patroni service
After=etcd.service syslog.target network.target

[Service]
User=postgres
Group=postgres
Type=simple
ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
ExecStart=/u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
Restart=no
TimeoutSec=30

[Install]
WantedBy=multi-user.target

The last bit is the site definition which combines all of the above.

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ cat roles/patroni/site.yml 
---
# This playbook deploys a three node patroni PostgreSQL cluster with HAProxy

- hosts: patroni-servers
  become: true
  become_user: root

  roles:
    - common
    - patroni

Once all of that is in place the playbook can be executed:

dwe@dwe:~/Documents/dbiProjects/dbi_dmk_postgres/dmk/ansible$ ansible-playbook -i ../patroni patroni/site.yml -u postgres
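
If you want to catch YAML or Jinja2 mistakes before touching the machines, a syntax check is cheap (a --check dry run is of limited value here because most of the tasks are shell commands, which are skipped in check mode):

ansible-playbook -i ../patroni patroni/site.yml -u postgres --syntax-check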

This runs for a couple of minutes, as upgrading all the operating system packages and compiling PostgreSQL in particular take some time. Once it has completed you only need to reboot the systems and your cluster is ready:

postgres@patroni1:/home/postgres/ [pg11.1] patronictl list 
+---------+----------+----------------+--------+---------+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | Lag in MB |
+---------+----------+----------------+--------+---------+-----------+
|   PG1   | patroni1 | 192.168.22.240 | Leader | running |       0.0 |
|   PG1   | patroni2 | 192.168.22.241 |        | running |       0.0 |
|   PG1   | patroni3 | 192.168.22.242 |        | running |       0.0 |
+---------+----------+----------------+--------+---------+-----------+
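
From here you could, for example, test a controlled switchover; this is only a sketch, as patronictl will interactively ask for the candidate and a confirmation (the config path is the template destination from the role above):

patronictl -c /u01/app/postgres/local/dmk/etc/patroni.yml switchover PG1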

HAProxy is running as well on all three nodes, and you can check that by pointing your browser to any of the hosts on port 7000.
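
The stats page is plain HTTP, so curl works just as well, and you can confirm that connections through the HAProxy port 5000 always land on the leader (psql prompts for the superuser password defined in patroni.yml; pg_is_in_recovery() returns "f" on the master):

curl -s http://192.168.22.240:7000/ | grep -o 'postgresql_[0-9.]*_5432' | sort -u
psql -h 192.168.22.240 -p 5000 -U postgres -c "select pg_is_in_recovery()"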

Hope that helps.

The article Using Ansible to bring up a three node Patroni cluster in minutes appeared first on Blog dbi services.

Welcome to 2019 for the Oracle Utilities products

Anthony Shorten - Sun, 2019-01-06 17:33

Welcome to 2019 to all my blog readers. 2019 is looking like a stellar year for the Oracle Utilities products, with exciting new features and new versions being introduced across the year. I look forward to writing articles about new features as well as articles outlining advanced techniques for existing features. I will endeavor to write an article every week or so, which is a challenge as we get very busy over certain periods, but I will try to keep up as much as practical.

It is going to be an exciting year for the Oracle Utilities Application Framework and Utilities Testing Accelerator with exciting new and updated features. Also remember this year we have Edge conferences in the USA, Australia and England which I will be attending, so feel free to come and chat to me there as well.

[Blog] Unique Features of Dell Boomi Platform

Online Apps DBA - Sat, 2019-01-05 05:16

Up for Learning More About Dell Boomi, CrowdSourcing and Some Unique Features? Visit: https://k21academy.com/dellboomi16 and Consider our New Exciting Blog About: ✔ What is Crowdsourcing ✔ Unique Features of Dell Boomi Platform […]

The post [Blog] Unique Features of Dell Boomi Platform appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partner Webcast – Conditional and Adaptive Access on Oracle Identity Cloud Service

Cloud services are an essential part of modern business, increasing both opportunities and risks. Oracle Cloud is built around multiple layers of security defense throughout the technology stack,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

[Solved] Issue while applying EBS 12.2.8 patch ‘26787767’

Online Apps DBA - Sat, 2019-01-05 01:34

Hitting issues while applying Oracle E-Business Suite 12.2.8 patch ‘26787767’ and want to know how to troubleshoot it with the complete solution? Visit: https://k21academy.com/appsdba43 and learn in our New Blog Covering: ✔ Issue while applying Oracle E-Business Suite 12.2.8 patch ‘26787767’ ✔ Error in Logs ✔ Cause and Fix of the Issue […]

The post [Solved] Issue while applying EBS 12.2.8 patch ‘26787767’ appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

ODA: The Patch version must be 12.2.1.4.0

Yann Neuhaus - Fri, 2019-01-04 09:43

When trying to patch your virtualized ODA with patch 12.1.2.12.0 and immediately after that with 12.2.1.4.0, you might be tempted to load the patches like this:


[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_1of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_2of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_1of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_2of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 Oracle_Soft_Repo]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p28216780_122140_Linux-x86-64_3of3.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.

After that, a verification attempt shows the following strange error:


[root@XX1 tmp]# oakcli update -patch 12.1.2.12.0 --verify
ERROR : The Patch version must be 12.2.1.4.0

The reason is that only the newest patch version is saved in a flag file, so less recent versions are not known:


[root@XX1 ~]# cat /opt/oracle/oak/pkgrepos/System/VERSION
version=12.2.1.4.0
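
Instead of looking at the flag file you can also ask oakcli itself which component versions it currently knows about (output omitted here, as it depends on your ODA model and on what is unpacked):

[root@XX1 ~]# oakcli show version -detail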

To install patch 12.1.2.12.0, both patches have to be removed from the repository first:


[root@XX1 ~]# oakcli manage cleanrepo --ver 12.2.1.4.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.2.1.4.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0306/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.4.1.0.11/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.7.7/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.2.0.1.180417/Patches/27674384
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.180417/Patches/27338029
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.180417/Patches/27338049
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.9/Patches/6.9.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.2.1.4.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.2.0.1.180417/Base

[root@XX1 ~]# oakcli manage cleanrepo --ver 12.1.2.12.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.12.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.5.2.8/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.5.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.170814/Patches/26609783
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.170814/Patches/26609445
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.8/Patches/6.8.4
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.12.0/Base

After that, patch 12.1.2.12.0 has to be reloaded into the repository:


[root@XX1 ~]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_1of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.
[root@XX1 ~]# oakcli unpack -package /cloudfs/Oracle_Soft_Repo/p26433712_1212120_Linux-x86-64_2of2.zip
Unpacking will take some time, Please wait...
Successfully unpacked the files to repository.

Now the repository is aware of patch 12.1.2.12.0:


[root@XX1 ~]# cat /opt/oracle/oak/pkgrepos/System/VERSION
version=12.1.2.12.0

[root@XX1 ~]# oakcli update -patch 12.1.2.12.0 --verify

INFO: 2018-11-29 14:14:34: Reading the metadata file now...

Component Name   Installed Version                    Proposed Patch Version
---------------  -----------------------------------  -----------------------------------
Controller_INT   4.650.00-7176                        Up-to-date
Controller_EXT   13.00.00.00                          Up-to-date
Expander         0018                                 Up-to-date
SSD_SHARED {
  [ c1d20,c1d21,c1d22,c1d23 ]  A29A                   Up-to-date
  [ c1d16,c1d17,c1d18,c1d19 ]  A29A                   Up-to-date
}
HDD_LOCAL        A7E0                                 Up-to-date
HDD_SHARED {
  [ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,
    c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,c1d15 ]  PAG1   Up-to-date
  [ c1d7,c1d14 ]  PD51                                PAG1
}
ILOM             3.2.9.23 r116695                     Up-to-date
BIOS             30110000                             Up-to-date
IPMI             1.8.12.4                             Up-to-date
HMP              2.3.5.2.8                            Up-to-date
OAK              12.1.2.11.0                          12.1.2.12.0
OL               6.8                                  Up-to-date
OVM              3.4.3                                Up-to-date
GI_HOME          12.1.0.2.170418(25171037,25942196)   12.1.0.2.170814(26609783,26609945)
DB_HOME          12.1.0.2.170418(25171037,25942196)   12.1.0.2.170814(26609783,26609945)

After installation of patch 12.1.2.12.0, patch 12.2.1.4.0 has to be loaded into the repository; after that it can be installed.

Conclusion: only one patch can be loaded into the repository at a time; before loading a newer patch, the previous one has to be installed.

The article ODA: The Patch version must be 12.2.1.4.0 appeared first on Blog dbi services.

Creating a local kubectl config file for the proxy to your Kubernetes API server

Pas Apicella - Fri, 2019-01-04 03:04
On my Mac the kubectl config file exists in a painful location, as follows

  $HOME/.kube/config

When using the command "kubectl proxy" and invoking the UI requires you to browse to the CONFIG file which finder doesn't expose easily. One way around this is as follows

1. Save a copy of that config file in your current directory as follows

papicella@papicella:~/temp$ cat ~/.kube/config > kubeconfig
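
As a side note, macOS can open the hidden directory in Finder directly, from where you can drag the config file straight into the browser's file picker:

papicella@papicella:~/temp$ open ~/.kube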

2. Invoke "kubectl proxy" to start a UI server to your K8's cluster

papicella@papicella:~/temp$ kubectl proxy
Starting to serve on 127.0.0.1:8001

3. Navigate to the UI using an URL as follows

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview



4. At this point we can browse to the easily accessible temp directory, select the file "kubeconfig" we created at step #1 and then click the "Sign In" button


Categories: Fusion Middleware

Installing and Configuring Oracle 18cXE on CentOS

The Anti-Kyte - Thu, 2019-01-03 11:50

After seven years, the much anticipated Oracle 18c Express Edition (XE) has finally seen the light of day.
Reaction to the new version can be summed up as…

It’s the Dog’s Wotsits !

Gerald Venzl, the person we have to thank for this new and vastly improved version of XE, has already published an installation guide.

At this point you may well ask what – apart from gratuitous puppy pics and cheesy-snack-based puns – is the difference between that post and this.

Well, if you’re a long-time user of 11gXE and you’re looking to upgrade then you will find 18cXE a rather different proposition.
The introduction of Multitenant databases aside, 18cXE differs greatly from its predecessor in terms of its functional scope.
Whereas 11gXE was – broadly speaking – functionally equivalent to Oracle Standard Edition, the approach for 18cXE has been to shoe-horn in as many Enterprise Edition features as possible.
No doubt, this will leave you anxious to play with the new version. However, there are some “home comforts” that were present in the old version that you’ll need to configure yourself this time around.
What I’m going to go through is :

  • Installing 18cXE on a Red Hat compatible distro (CentOS7)
  • Connecting to the database and exploring the containers
  • Checking the TNS Listener
  • Manual and Automatic Startup and Shutdown of the database and listener
  • Setting and persisting the Oracle environment variables
  • Accessing Enterprise Manager Express
  • Installing the HR demo application in a Pluggable Database (PDB)
  • Configuring the firewall to allow remote access to Oracle

The steps documented here have been performed on a vanilla installation of CentOS7. As such, they should work pretty much unaltered for other Red Hat based distros based on or similar to Red Hat Enterprise Linux (RHEL) version 7.

Before all of that though…

Resource Limits for 18cXE

As with previous versions of XE, there are limitations on the system resources that will be used. These include :

  • 2 CPU cores ( up from 1 in 11gXE)
  • 2 GB Ram ( 1 GB in 11g)
  • 12GB of User Data ( 11GB in 11g)
  • A maximum of 3 PDBs

In addition, you can only install one instance of XE per host. However, it does seem to be possible to install XE alongside other Oracle Editions on the same host.

One final point to note – the amount of space taken up by the initial installation is not insignificant. The space usage in the $ORACLE_BASE is :

sudo du -h -d1
5.3G	./product
76K	./oraInventory
0	./checkpoints
12M	./diag
20M	./cfgtoollogs
4.9M	./admin
0	./audit
3.4G	./oradata
8.7G	.

This is worth bearing in mind when sizing your environment.
Additionally, if you’re tight on space, you may also consider removing the rpm files once the installation is complete as this frees up 2.4GB (although not in $ORACLE_BASE).
Speaking of rpm files…

Downloading and installation

Head over to the Downloads page and download the Oracle Database 18c Express Edition for Linux x64 version.

If you’re running a Red Hat compatible distro that’s not Oracle Linux, you’ll also need the Oracle Database Preinstall RPM for RHEL and CentOS. I’m running on CentOS7 so I’ll get the Release 7 version of this file.

At this point, we should now have two rpm files :

-rw-rw-r--. 1 mike mike      18244 Dec 25 17:37 oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
-rw-rw-r--. 1 mike mike 2574155124 Dec 25 17:37 oracle-database-xe-18c-1.0-1.x86_64.rpm

Next, we need to become root for a bit. If you’re worried that all this power might go to your head, fear not, I’ll let you know when we can become mere mortals again. For now though :

sudo -s
[sudo] password for mike: 
#

Now we can install the RPMs. The preinstall first (note that you need to have an internet connection available when running this)…

yum localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm 

…This results in :

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Marking oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 will be installed
--> Processing Dependency: compat-libcap1 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.vooservers.com
 * extras: mirror.sov.uk.goscomb.net
 * updates: mirrors.vooservers.com
base                                                                                     | 3.6 kB  00:00:00     
extras                                                                                   | 3.4 kB  00:00:00     
updates                                                                                  | 3.4 kB  00:00:00     
--> Processing Dependency: compat-libstdc++-33 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: ksh for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Running transaction check
---> Package compat-libcap1.x86_64 0:1.10-7.el7 will be installed
---> Package compat-libstdc++-33.x86_64 0:3.2.3-72.el7 will be installed
---> Package ksh.x86_64 0:20120801-139.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch   Version           Repository                                        Size
================================================================================================================
Installing:
 oracle-database-preinstall-18c x86_64 1.0-1.el7         /oracle-database-preinstall-18c-1.0-1.el7.x86_64  55 k
Installing for dependencies:
 compat-libcap1                 x86_64 1.10-7.el7        base                                              19 k
 compat-libstdc++-33            x86_64 3.2.3-72.el7      base                                             191 k
 ksh                            x86_64 20120801-139.el7  base                                             885 k
 libaio-devel                   x86_64 0.3.109-13.el7    base                                              13 k

Transaction Summary
================================================================================================================
Install  1 Package (+4 Dependent packages)

Total size: 1.1 M
Total download size: 1.1 M
Installed size: 4.0 M
Is this ok [y/d/N]: 

Enter ‘y’ and…

Downloading packages:
(1/4): compat-libcap1-1.10-7.el7.x86_64.rpm                                              |  19 kB  00:00:00     
(2/4): compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm                                       | 191 kB  00:00:00     
(3/4): libaio-devel-0.3.109-13.el7.x86_64.rpm                                            |  13 kB  00:00:00     
(4/4): ksh-20120801-139.el7.x86_64.rpm                                                   | 885 kB  00:00:00     
----------------------------------------------------------------------------------------------------------------
Total                                                                           1.8 MB/s | 1.1 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      1/5 
  Installing : libaio-devel-0.3.109-13.el7.x86_64                                                           2/5 
  Installing : compat-libcap1-1.10-7.el7.x86_64                                                             3/5 
  Installing : ksh-20120801-139.el7.x86_64                                                                  4/5 
  Installing : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              5/5 
  Verifying  : ksh-20120801-139.el7.x86_64                                                                  1/5 
  Verifying  : compat-libcap1-1.10-7.el7.x86_64                                                             2/5 
  Verifying  : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              3/5 
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                                                           4/5 
  Verifying  : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      5/5 

Installed:
  oracle-database-preinstall-18c.x86_64 0:1.0-1.el7                                                             

Dependency Installed:
  compat-libcap1.x86_64 0:1.10-7.el7   compat-libstdc++-33.x86_64 0:3.2.3-72.el7 ksh.x86_64 0:20120801-139.el7
  libaio-devel.x86_64 0:0.3.109-13.el7

Complete!

Now for the main event…

yum localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm

…which results in ( after quite a while) …

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-xe-18c-1.0-1.x86_64.rpm: oracle-database-xe-18c-1.0-1.x86_64
Marking oracle-database-xe-18c-1.0-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch           Version       Repository                                    Size
================================================================================================================
Installing:
 oracle-database-xe-18c         x86_64         1.0-1         /oracle-database-xe-18c-1.0-1.x86_64         5.2 G

Transaction Summary
================================================================================================================
Install  1 Package

Total size: 5.2 G
Installed size: 5.2 G
Is this ok [y/d/N]: 

Once again, enter ‘y’…then go and get a coffee (or other beverage if you prefer), this next bit takes a while…

Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1                                                                         

Complete!

Finally, we need to run the configuration.
NOTE : I quit the previous session and began a new one as root before running this.

sudo -s
/etc/init.d/oracle-xe-18c configure

…once again this takes a while to complete, which is fair enough because there’s a bit going on :

Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
*********
Enter SYSTEM user password: 
*********
Enter PDBADMIN User Password: 
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: frea.virtualbox:1522/XEPDB1
     Multitenant container database: frea.virtualbox:1522
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

At this point we can stop being root.

Connecting to the database

First up, we need to make sure that the appropriate environment variables are set. So run the following, entering XE when prompted for the ORACLE_SID…

 . oraenv
ORACLE_SID = [mike] ? XE
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID mike.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE

Now we should be able to connect to the database via sqlplus :

sqlplus system

We can now confirm that the database is up :

select instance_name, version, status
from v$instance;

INSTANCE_NAME    VERSION	      STATUS
---------------- ----------------- ------------
XE		    18.0.0.0.0	      OPEN

One significant new feature of 18c XE compared with its predecessor is the capability to use the database as a container (CDB) for zero or more Pluggable Databases (PDBs).
In the case of XE, you can have up to three PDBs, and we can see that one has already been created as part of the installation (there's a sketch of creating another one below) :

select con_id, name
from v$containers;

    CON_ID NAME
---------- ------------------------------
	 1 CDB$ROOT
	 2 PDB$SEED
	 3 XEPDB1

In this case :

  • CDB$ROOT is the Container Database
  • PDB$SEED is a read-only template for creating PDBs
  • XEPDB1 is a PDB
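
Incidentally, if you want to try creating another PDB from the seed, a minimal sketch looks like this (the PDB name and password are just examples, and it assumes db_create_file_dest is set so Oracle knows where to put the datafiles – otherwise you'd need a file_name_convert clause) :

create pluggable database xepdb2
  admin user pdbadmin identified by Password123#;

alter pluggable database xepdb2 open;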

In the CDB, we can see details of the PDB seed database and the PDB itself :

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 2 PDB$SEED			  READ ONLY
	 3 XEPDB1			  READ WRITE

However, if we switch to the PDB…

alter session set container = XEPDB1;

…the same query returns information only about the current PDB…

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 3 XEPDB1			  READ WRITE

If you want to check which PDB you are in you can use :

select sys_context('userenv', 'con_name') from dual;

In the CDB this should return :

CDB$ROOT

in our PDB however, we should get :

XEPDB1

Checking the Listener

For ongoing administration operations from the OS, you’ll need to add your user to a couple of groups. In my case, my user is “mike” :

sudo usermod -a -G dba mike
sudo usermod -a -G oinstall mike

Once you’ve added these groups to your user, you need to log off and log on again for them to take effect.
You should now be able to check the status of the Net Listener by means of the lsnrctl utility.

Having first run oraenv as before to set your environment…

lsnrctl status

When the listener is up, you should get something like :

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:38:31

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
Start Date                26-DEC-2018 19:24:54
Uptime                    0 days 1 hr. 13 min. 37 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /opt/oracle/product/18c/dbhomeXE/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/frea/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=frea.virtualbox)(PORT=1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/product/18c/dbhomeXE/admin/XE/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "7de2a3259d9c3747e0530f84f25ce87c" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "xepdb1" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

If however, it’s not running, you’ll get :


LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:40:30

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory

This brings us onto…

Starting and Stopping Oracle

The first time you restart the server after the installation, you will find that neither the database nor the TNS Listener is running.

To start them up from the command line you can run :

sudo /etc/init.d/oracle-xe-18c start

To shut them down, it’s :

sudo /etc/init.d/oracle-xe-18c stop

If, like me, you are configuring your server for the sole or main purpose of running Oracle, then you may want the database and listener to start when the server does.

To do this, switch to root…

sudo -s

…and set the oracle-xe-18c service to start on boot…

systemctl daemon-reload
systemctl enable oracle-xe-18c

The output will probably be something like :

oracle-xe-18c.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig oracle-xe-18c on
[root@frea mike]# systemctl status oracle-xe-18c
 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)

If you then reboot the server, you should be able to confirm that the service is up by running…

systemctl status -l oracle-xe-18c

…which should return something like …

 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: active (exited) since Fri 2018-12-28 13:20:23 GMT; 1min 48s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 3475 ExecStart=/etc/rc.d/init.d/oracle-xe-18c start (code=exited, status=0/SUCCESS)
    Tasks: 0

Dec 28 13:19:59 frea.virtualbox systemd[1]: Starting SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services....
Dec 28 13:19:59 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Net Listener.
Dec 28 13:19:59 frea.virtualbox su[3510]: (to oracle) root on none
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Oracle Net Listener started.
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Database instance XE.
Dec 28 13:20:02 frea.virtualbox su[3864]: (to oracle) root on none
Dec 28 13:20:23 frea.virtualbox oracle-xe-18c[3475]: Oracle Database instance XE started.
Dec 28 13:20:23 frea.virtualbox systemd[1]: Started SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services..

Setting Oracle Environment Variables

You’ll have noticed that, up until now, we’ve had to set some environment variables every time we want to interact with the database from the server command line.
To do this, we need to run :

. oraenv

When you run this script, you will always be asked for the ORACLE_SID value :

. oraenv
ORACLE_SID = [mike] ? XE
The Oracle base has been set to /opt/oracle

We can see that the oraenv script affects four environment variables :

echo $ORACLE_SID
XE
echo $ORACLE_HOME
/opt/oracle/product/18c/dbhomeXE
echo $ORACLE_BASE
/opt/oracle
echo $PATH
...snip.../opt/oracle/product/18c/dbhomeXE/bin

You have some options as to how you can manage these environment variables.
One option is to set up some environment variables to prevent oraenv prompting for the SID every time it’s run.
Another is to set the environment variables automatically for all sessions.

Stopping oraenv prompting for input

To do this we need to ensure that the ORACLE_SID environment variable is set prior to invoking the script and also that the ORAENV_ASK variable is set to NO.
We can see the result of this with the following quick test :

export ORACLE_SID=XE
export ORAENV_ASK=NO
. oraenv
The Oracle base has been set to /opt/oracle

To set these environment variables automatically, we can simply define them in the /etc/profile.d/sh.local script :

sudo nano /etc/profile.d/sh.local

Add the two variable assignments :

export ORACLE_SID=XE
export ORAENV_ASK=NO

You will need to logout and login again for this change to take effect.

Setting the environment variables automatically

If you want to dispense with the need to call the oraenv script altogether, you can simply add a script with a .sh extension to the /etc/profile.d directory…

sudo nano /etc/profile.d/set_oraenv.sh

…which should contain…

export ORACLE_SID=XE
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_BASE=/opt/oracle
export PATH=$PATH:/opt/oracle/product/18c/dbhomeXE/bin

Next time you fire up a shell (assuming you’re using bash – the default on CentOS), these environment variables will be set.

Enterprise Manager Express

Whilst 11gXE came with a database home page which consisted of an Apex application to administer XE, 18c does not come with Apex.
The good news is that, in common with all other 18c editions, it comes with Enterprise Manager Express – a purpose-built admin tool.
Accessing it should be fairly simple. You just open a web browser (Firefox being the default on CentOS) and point it at the address specified in the output from our configuration run earlier. In our case this is :

https://localhost:5500/em

However, you may be ever so slightly disappointed…

Fortunately for us, someone was good enough to document the solution for this particular problem.

In short, we need to follow the link to the Adobe download site and select the .tar.gz option for the Flash Download :

After this, we should now have the following :

ls -l *.tar.gz
-rwxrwx---. 1 mike mike 9045426 Dec 22 15:02 flash_player_npapi_linux.x86_64.tar.gz

Next, we extract the libflashplayer.so file from the archive…

tar -xf flash_player_npapi_linux.x86_64.tar.gz *libflashplayer.so
ls -l libflashplayer.so
-rw-rw-r--. 1 mike mike 16607528 Nov 29 23:06 libflashplayer.so

…and copy it to the location that Firefox expects it to be…

sudo cp libflashplayer.so /usr/lib64/mozilla/plugins/.

…before finally setting the file ownership and permissions…

cd /usr/lib64/mozilla/plugins
sudo chmod 755 libflashplayer.so
sudo chgrp root libflashplayer.so
sudo chown root libflashplayer.so

Our file should now look like this :

ls -l libflashplayer.so
-rwxr-xr-x. 1 root root 16607528 Dec 31 17:05 libflashplayer.so

If we go to the EM page now :

Activate the plugin and login as sys (as sysdba) :

Eventually, you should see the Enterprise Manager Home Page :

I’ll leave you to explore for a bit.

Installing the HR demo application

Unlike its predecessor, 18cXE does not come with the HR demo application pre-installed. However, it does include the scripts that enable us to perform this installation ourselves.

As this is an application as opposed to a system-wide utility, we’re going to install it in the PDB rather than the main CDB.

We’ll need to switch to the oracle OS user so that we have permissions to write to the log file that we’re going to specify. Then we connect to the database…

sudo su oracle
sqlplus system

Once connected :

alter session set container = XEPDB1;
select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
XEPDB1

Now we’ve confirmed that we’re in the PDB, simply run :

@$ORACLE_HOME/demo/schema/human_resources/hr_main.sql

This script will prompt for :

  1. the password for the HR user – enter an appropriate password and remember it as you will need it to access the new HR schema
  2. the default tablespace to use for the HR user – enter USERS
  3. the temporary tablespace to use for the HR user – enter TEMP
  4. the path of the log file written by this installation script – enter $ORACLE_HOME/demo/schema/log

NOTE – the script does not obfuscate the password you enter but echoes it to the screen. In any case, you may consider that changing it shortly after installation is a wise move.
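
Incidentally, as the prompts are just SQL*Plus substitution variables (&1 to &4), you can also pass the values as positional parameters – a sketch, and note that this approach leaves the password on show too, this time in your shell history :

@?/demo/schema/human_resources/hr_main.sql MyHrPassword USERS TEMP $ORACLE_HOME/demo/schema/log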

For an interactive run, the output will look something like this :


specify password for HR as parameter 1:
Enter value for 1: mysupersecretpassword

specify default tablespeace for HR as parameter 2:
Enter value for 2: USERS

specify temporary tablespace for HR as parameter 3:
Enter value for 3: TEMP

specify log path as parameter 4:
Enter value for 4: $ORACLE_HOME/demo/schema/log


PL/SQL procedure successfully completed.


User created.


User altered.


User altered.


Grant succeeded.


Grant succeeded.


Session altered.

...snip...
Comment created.


Comment created.


Comment created.


Commit complete.


PL/SQL procedure successfully completed.

We should now see that we have a “local” user called HR :

select account_status, default_tablespace, temporary_tablespace, common
from dba_users
where username = 'HR';

ACCOUNT_STATUS                   DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE           COM
-------------------------------- ------------------------------ ------------------------------ ---
OPEN                             USERS                          TEMP                           NO

As the account is not locked, we can connect to it from SQL*Plus. Note that we’ll have to use the connect string for the PDB (as specified in the installation feedback earlier) as the schema does not exist in the CDB :

sqlplus hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1

Alternatively we could use a method which doesn’t record the password in the bash history…

sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Jan 2 16:55:31 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> conn hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1
Connected.
SQL> 

If you want to see the difference, just try both connection methods in the same Terminal session and then run :

history |grep sqlplus

It’s probably worth remembering this if you are running on a shared environment.

Anyway, we can now see that the HR schema has the following objects :
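
For example, connected as HR, we can summarise them with :

select object_type, count(*)
from user_objects
group by object_type
order by object_type;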

Accessing the database from remote machines

Up to this point we’ve been working on the database server itself. This is fine if you’re running your Red Hat-based system as your desktop (although in that case it’s more likely to be Fedora than CentOS), but if you want to be able to access it remotely, you’ll need to configure the firewall to allow remote access to specific ports.

Our objectives here are :

  1. to allow access to the database from a client machine via TNS
  2. to allow access to the Enterprise Manager Express site

For CentOS 7 the default firewall is firewalld :

systemctl status firewalld

 firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-01-01 14:53:08 GMT; 4min 30s ago
     Docs: man:firewalld(1)
 Main PID: 2842 (firewalld)
    Tasks: 2
   CGroup: /system.slice/firewalld.service
           └─2842 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jan 01 14:53:07 frea.virtualbox systemd[1]: Starting firewalld - dynamic fir....
Jan 01 14:53:08 frea.virtualbox systemd[1]: Started firewalld - dynamic fire....
Hint: Some lines were ellipsized, use -l to show in full.

On my client machine, I’ve added the following entries to the $ORACLE_HOME/network/admin/tnsnames.ora file :

XE18 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
  
xepdb1  =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = xepdb1)
    )
  )  

XE18 will allow me to connect to the CDB and xepdb1 will let me connect to the PDB.

At the moment, when we try to connect to the database from a client machine we hit…

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:10:34 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
ERROR:
ORA-12543: TNS:destination host unreachable

Back on the server, we can remedy this by issuing the following command to open the port that the TNS Listener is listening on (in my case, 1522) :

sudo firewall-cmd --permanent --add-port=1522/tcp
success

…and verify with :

sudo firewall-cmd --list-ports
1522/tcp
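
One thing to watch : --permanent on its own only updates the saved configuration, so if the port doesn’t appear in the running firewall, apply the saved rules with :

sudo firewall-cmd --reload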

This then allows the remote connection :

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:12:44 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon Dec 31 2018 23:22:40 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production

SQL> 

As for EM Express, we need to do the same for the port it’s running on (5500 in this case) :

sudo firewall-cmd --permanent --add-port=5500/tcp

However, we also need to connect to the database as SYSTEM and allow remote access by running:

exec dbms_xdb_config.SetListenerLocalAccess(false);

Once this is done we should now be able to access the EM Express home page remotely …

References

As ever, I’ve found a number of articles that have proved useful in writing this post. I’d like to make a point of saying thank you to the people who have essentially provided free consultancy to me (and you, if you’ve followed this guide). So thanks to…

Copying Block Volume Across Region in Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Thu, 2019-01-03 05:53

Why is it important and useful to migrate our application or database from one region of Oracle Cloud Infrastructure to another? Want to Know WHY? Visit: https://k21academy.com/oci25 and Consider our new Exciting Blog Covering new feature of Oracle Cloud Infrastructure like: ✔ Copying Block Volume Across Region ✔ Learn Steps To Copy a Backup Across […]

The post Copying Block Volume Across Region in Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Controlling Your Cloud - A Look At The Oracle Cloud Infrastructure Java SDK

OTN TechBlog - Wed, 2019-01-02 17:09

A few weeks ago our cloud evangelism team got the opportunity to spend some time on site with some amazing developers from one of Oracle's clients in Santa Clara, CA for a 3-day cloud hackfest.  During the event, one of the developers mentioned that a challenge his team faced was handling file uploads for potentially extremely large files.  I've faced this problem before as a developer and it's certainly challenging.  The web just wasn't really built for large file transfers (though, things have gotten much better in the past few years as we'll discuss later on).  We didn't end up with an opportunity to fully address the issue during the hackfest, but I promised the developer that I would follow-up with a solution after digging deeper into the Oracle Cloud Infrastructure APIs once I got back home.  So yesterday I got down to digging into the process and engineered a pretty solid demo for that developer on how to achieve large file uploads to OCI Object Storage, but before I show that solution I wanted to give a basic introduction to working with your Oracle Cloud via the available SDK so that things are easier to follow once we get into some more advanced interactions. 

Oracle offers several other SDKs (Python, Ruby and Go), but since I typically write my code in Groovy I went with the Java SDK.  Oracle provides a full REST API for working with your cloud, but the SDK gives you a nice native solution that abstracts away some of the painful bits of signing your requests and making the HTTP calls, all in a package that can be bundled within your application. The Java SDK supports the following OCI services:

  • Audit
  • Container Engine for Kubernetes
  • Core Services (Networking, Compute, Block Volume)
  • Database
  • DNS
  • Email Delivery
  • File Storage
  • IAM
  • Load Balancing
  • Object Storage
  • Search
  • Key Management

Let's take a look at the Java SDK in action, specifically how it can be used to interact with the Object Storage service.  The SDK is open source and available on GitHub.  I created a very simple web app for this demo.  Unfortunately, the SDK is not yet available via Maven (see here), so step one was to download the SDK and include it as a dependency in my application.  I use Gradle, so I dropped the JARs into a "libs" directory in the root of my app and declared the following dependencies block to make sure that Gradle picked up the local JARs (the key being the "implementation" method):
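
A minimal sketch of such a dependencies block (the libs path is an assumption - adjust it to wherever you dropped the JARs):

dependencies {
    // pick up the OCI SDK JAR and its bundled third-party dependencies from libs/
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}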

The next step is to create some system properties that we'll need for authentication and some of our service calls.  To do this, you'll need to set up some config files locally and generate some key pairs, which can be mildly annoying at first, but once you're set up you're good to go in the future and you get the added bonus of being set up for the OCI CLI if you want to use it later on.  Once I had the config file and keys generated, I put my props in a file in the app root called 'gradle.properties'.  Using this properties file and the key naming convention shown below, Gradle makes the variables available within your build script as system properties.
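
Something along these lines (the property names here are made up for illustration; the "systemProp." prefix is the Gradle convention that exposes them to the build as system properties):

# gradle.properties
systemProp.ociConfigPath=/home/mike/.oci/config
systemProp.ociProfile=DEFAULT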

Note that having the variables as system properties in your build script does not make them available within your application, but to do that you can simply pass them in via your 'run' task:
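
A sketch of that, using the application plugin's 'run' task and the hypothetical property names from above:

run {
    // forward the build-time system properties to the application's JVM
    systemProperty 'ociConfigPath', System.getProperty('ociConfigPath')
    systemProperty 'ociProfile', System.getProperty('ociProfile')
}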

Next, I created a class to manage the provider and service clients.  This class only has a single client right now, but adding additional clients for other services in the future would be trivial.

I then created an 'ObjectService' for working with the Object Storage API.  The constructor accepts an instance of the OciClientManager that we looked at above, and sets some class variables for some things that are common to many of the SDK methods (namespace, bucket name, compartment ID, etc):
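
This isn't the exact class from the demo app, but a minimal sketch of the pattern looks something like this (the class and method names are made up; the provider and client calls are standard SDK usage):

import java.io.IOException;

import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.objectstorage.ObjectStorageClient;

public class OciClientManager {

    private final ConfigFileAuthenticationDetailsProvider provider;

    public OciClientManager(String configPath, String profile) throws IOException {
        // reads the tenancy/user OCIDs and the key file location from the OCI config file
        this.provider = new ConfigFileAuthenticationDetailsProvider(configPath, profile);
    }

    public ObjectStorageClient objectStorage() {
        // one client per service; adding clients for other services later would be trivial
        return ObjectStorageClient.builder().build(provider);
    }
}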

At this point, we're ready to interact with the SDK.  As a developer, it definitely feels like an intuitive API and follows a standard "request/response" model that other cloud providers use in their APIs as well.  I found myself often simply guessing what the next method or property might be called and often being right (or close enough for intellisense to guide me to the right place).  That's pretty much my benchmark for a great API - if it's intuitive and doesn't get in my way with bloated authentication schemes and such then I'm going to love working with it.  Don't get me wrong, strong authentication and security are assuredly important, but the purpose of an SDK is to hide the complexity and expose a method to use the API in a straightforward manner.  All that said, let's look at using the Object Storage client.  

We'll go rapid fire here and show how to use the client to do the following actions (with a sample result shown after each code block):

  1. List Buckets
  2. Get A Bucket
  3. List Objects In A Bucket
  4. Get An Object

List Buckets:

Get Bucket:

List Objects:

Get Object:

The 'Get Object' response also contains an InputStream with the object's content, which can be written to a file.
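
To give a flavour of those four calls, here's a hedged sketch covering each of them (it assumes 'client', 'namespace', 'compartmentId', 'bucketName' and 'objectName' are set up as in the service class above):

import com.oracle.bmc.objectstorage.requests.*;
import com.oracle.bmc.objectstorage.responses.*;

// 1. list the buckets in the compartment
ListBucketsResponse buckets = client.listBuckets(ListBucketsRequest.builder()
        .namespaceName(namespace)
        .compartmentId(compartmentId)
        .build());
buckets.getItems().forEach(b -> System.out.println(b.getName()));

// 2. get a single bucket's metadata
GetBucketResponse bucket = client.getBucket(GetBucketRequest.builder()
        .namespaceName(namespace)
        .bucketName(bucketName)
        .build());

// 3. list the objects in the bucket
ListObjectsResponse objects = client.listObjects(ListObjectsRequest.builder()
        .namespaceName(namespace)
        .bucketName(bucketName)
        .build());
objects.getListObjects().getObjects().forEach(o -> System.out.println(o.getName()));

// 4. get an object - the response wraps an InputStream with the content
GetObjectResponse object = client.getObject(GetObjectRequest.builder()
        .namespaceName(namespace)
        .bucketName(bucketName)
        .objectName(objectName)
        .build());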

As you can see, the Object Storage API is predictable and consistent.  In another post, we'll finally tackle the more complex issue of handling large file uploads via the SDK.

Controlling Your Cloud - Uploading Large Files To Oracle Object Storage

OTN TechBlog - Wed, 2019-01-02 16:42

In my last post, we took an introductory look at working with the Oracle Cloud Infrastructure (OCI) API with the OCI Java SDK.  I mentioned that my initial motivation for digging into the SDK was to handle large file uploads to OCI Object Storage, and in this post, we'll do just that.  

As I mentioned, HTTP (Hypertext Transfer Protocol) wasn't originally meant to handle large file transfers.  Rather, file transfers were typically (and often still are) handled via FTP (File Transfer Protocol).  But web developers deal with globally distributed clients, and FTP requires server setup, custom desktop clients, different firewall rules and authentication, which ultimately means large files end up getting transferred over HTTP/S.  BitTorrent can be a better solution if the circumstances allow, but distributing files that way isn't often an option for web developers.  Thankfully, many advances in HTTP over the past several years have made large file transfer much easier to deal with, the main advance being chunked transfer encoding (known as "chunked" or "multipart" file upload).  You can read more about Oracle's support for multipart uploading, but to explain it in the simplest possible way: a file is broken up into several pieces ("chunks"), uploaded (at the same time, if necessary), and reassembled into the original file once all of the pieces have been uploaded.

The process to utilize the Java SDK for multipart uploading involves, at a minimum, three steps.  Here's the JavaDocs for the SDK in case you're playing along at home and want more info.

  1. Initiate the multipart upload
  2. Upload the individual file parts
  3. Commit the upload

The SDK provides methods for all of the steps above, as well as a few extras such as listing existing multipart uploads.  Individual parts can be up to 50 GiB.  The ObjectClient calls (see the previous post) needed to complete the three steps above break down as follows:

1.  Call ObjectClient.createMultipartUpload, passing an instance of a CreateMultipartUploadRequest (which contains an instance of CreateMultipartUploadDetails)

To break down step 1, you're just telling the API "Hey, I want to upload a file.  The object name is "foo.jpg" and its content type is "image/jpeg".  Can you give me an identifier so I can associate different pieces of that file later on?"  And the API will return that to you in the form of a CreateMultipartUploadResponse.  Here's the code:
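
A sketch of what that call might look like (assuming the namespace and bucket name variables from the service class):

CreateMultipartUploadDetails details = CreateMultipartUploadDetails.builder()
        .object(objectName)          // e.g. "foo.jpg"
        .contentType(contentType)    // e.g. "image/jpeg"
        .build();

CreateMultipartUploadResponse createResponse = client.createMultipartUpload(
        CreateMultipartUploadRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .createMultipartUploadDetails(details)
                .build());

String uploadId = createResponse.getMultipartUpload().getUploadId();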

So to create the upload, I make a call to /oci/upload-create and pass the objectName and contentType param.  I'm invoking it via Postman, but this could just as easily be a fetch() call in the browser:

So now we've got an upload identifier for further work (see "uploadId", #2 in the image above).  On to step 2 of the process:

2.  Call ObjectClient.uploadPart(), passing an instance of UploadPartRequest (including the uploadId, the objectName, a sequential part number, and the file chunk), and receiving an UploadPartResponse.  The response will contain an "ETag" which we'll need to save, along with the part number, to complete the upload later on.

Here's what the code looks like for step 2:
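
Again, a sketch rather than the exact demo code (partNumber, partInputStream and partSizeInBytes are placeholders for the chunk being uploaded):

UploadPartResponse partResponse = client.uploadPart(UploadPartRequest.builder()
        .namespaceName(namespace)
        .bucketName(bucketName)
        .objectName(objectName)
        .uploadId(uploadId)
        .uploadPartNum(partNumber)          // 1, 2, 3... sequential
        .uploadPartBody(partInputStream)    // this chunk's bytes
        .contentLength(partSizeInBytes)
        .build());

String eTag = partResponse.getETag();       // save along with the part number for the commit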

And here's an invocation of step 2 in Postman, which was completed once for each part of the file that I chose to upload.  I'll save the ETag values along with each part number for use in the completion step.

Finally, step 3 is to complete the upload.

3.  Call ObjectClient.commitMultipartUpload(), passing an instance of CommitMultipartUploadRequest (which contains the object name, uploadId and an instance of CommitMultipartUploadDetails - which itself contains an array of CommitMultipartUploadPartDetails).

Sounds a bit complicated, but it's really not.  The code tells the story here:
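
A hedged sketch of the commit (the part numbers and saved ETags come from step 2):

List<CommitMultipartUploadPartDetails> parts = new ArrayList<>();
parts.add(CommitMultipartUploadPartDetails.builder().partNum(1).etag(eTag1).build());
parts.add(CommitMultipartUploadPartDetails.builder().partNum(2).etag(eTag2).build());

client.commitMultipartUpload(CommitMultipartUploadRequest.builder()
        .namespaceName(namespace)
        .bucketName(bucketName)
        .objectName(objectName)
        .uploadId(uploadId)
        .commitMultipartUploadDetails(CommitMultipartUploadDetails.builder()
                .partsToCommit(parts)
                .build())
        .build());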

When invoked, we get a simple result confirming the completion of the multipart upload commit!  If we head over to our bucket in Object Storage, we can see the file details for the uploaded and reassembled file:

And if we visit the URL via a presigned URL (or directly, if the bucket is public), we can see the image.  In this case, a picture of my dog Moses:

As I've hopefully illustrated, the Oracle SDK for multipart upload is pretty straightforward to use once it's broken down into the steps required.  There are a number of frontend libraries to assist you with multipart upload once you have the proper backend service in place (in my case, the file was simply broken up using the "split" command on my MacBook).  

Elasticsearch 6.1.2 Will Soon Be Available

PeopleSoft Technology Blog - Wed, 2019-01-02 16:18

Elasticsearch 6.1.2 is the minimum version planned for PeopleTools with release 8.57 (more information on that coming soon).  In addition, 6.1.2 is also planned for availability on PeopleTools 8.55 and 8.56.  Upgrading to Elasticsearch 6.1.2 will require a full indexing of all search definitions.  Customers will be able to use the Live Cutover feature in the PeopleSoft Search Framework to migrate to Elasticsearch 6.1.2 with no downtime.

If you are using Elasticsearch 2.3.2, and want to continue on it, that version will be supported for one year following our support of version 6.1.2.  At that point, Oracle will only support version 6.1.2. Note that support for PeopleTools 8.55 ends before October 2019, so Elasticsearch 2.3.2 and 6.1.2 support for 8.55 ends with the last CPU for 8.55.  At that point no fixes to the PeopleSoft Search Framework or Elasticsearch will be available for 8.55.

Look for more announcements on this important transition including new features for Search that will be available with PeopleTools 8.57.  These will be posted on the Search concepts page on peoplesoftinfo.com.
