Feed aggregator

Documentum – MigrationUtil – 2 – Change Docbase Name

Yann Neuhaus - Tue, 2019-02-05 02:01

You are attending the second episode of the MigrationUtil series; today we will change the Docbase Name. If you missed the first one, you can find it here. I did this change on Documentum CS 16.4 with an Oracle database, on the same docbase I already used to change the docbase ID.
My goal is to do both changes on the same docbase because that’s what I will need in the future.

So, we will be interested in the docbase RepoTemplate, to change its name to repository1.

1. Migration preparation

I will not repeat the overview of the MigrationUtil, as I already gave it in the previous blog.
1.a Update the config.xml file
Below is the updated version of the config.xml file used to change the Docbase Name:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/product/16.4/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">RepoTemplate</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>
...
<entry key="ChangeDocbaseName">yes</entry>
<entry key="NewDocbaseName.1">repository1</entry>
...

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

2. Before the migration (optional)

– Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : RepoTemplate
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

– Create a document in the docbase:
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create document in the repository using idql

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
090f449880001125
(1 row affected)

Note the r_object_id.

3. Execute the migration

3.a Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

3.b Update the database name in the server.ini file
This is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and builds the JDBC URL with a "/" instead of a ":" (service-name syntax, jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB, instead of SID syntax, jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB), so the connection fails with ORA-12514. The best workaround I found is to update the database_conn value in the server.ini file and put the service name there instead of the database name.
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/RepoTemplate/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = RepoTemplate
server_config_name = RepoTemplate
database_conn = dctmdb.local
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/RepoTemplate/dbpasswd.txt
service = RepoTemplate
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin
...

Don’t worry, we will roll back this change before starting the docbase ;)

3.c Execute the MigrationUtil script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...
Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...

Changing Docbase Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Finished changing Docbase Name...

Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

No error was encountered here, but that doesn’t mean everything is OK… Please check the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Start: 2019-02-01 19:32:10.631
Changing Docbase Name
=====================

DocbaseName: RepoTemplate
New DocbaseName: repository1
Retrieving server.ini path for docbase: RepoTemplate
Found path: /app/dctm/product/16.4/dba/config/RepoTemplate/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_docbase_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3c0f449880000103'
select r_object_id,docbase_name from dm_docbaseid_map_s where docbase_name = 'RepoTemplate'
update dm_docbaseid_map_s set docbase_name = 'repository1' where r_object_id = '440f449880000100'
select r_object_id,file_system_path from dm_location_s where file_system_path like '%RepoTemplate%'
update dm_location_s set file_system_path = '/app/dctm/product/16.4/data/repository1/content_storage_01' where r_object_id = '3a0f44988000013f'
...
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800003e0'
...
select i_stamp from dmi_vstamp_s where i_application = 'dmi_dd_attr_info'
...
Successfully updated database values...
...
Backed up '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_startup script.
Renamed '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_start_repository1'
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_shutdown script.
Renamed '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_shutdown_repository1'
WARNING...File /app/dctm/product/16.4/dba/config/RepoTemplate/rkm_config.ini doesn't exist. RKM is not configured
Finished processing File changes...

Processing Directory Changes...
Renamed '/app/dctm/product/16.4/data/RepoTemplate' to '/app/dctm/product/16.4/data/repository1'
Renamed '/app/dctm/product/16.4/dba/config/RepoTemplate' to '/app/dctm/product/16.4/dba/config/repository1'
Renamed '/app/dctm/product/16.4/dba/auth/RepoTemplate' to '/app/dctm/product/16.4/dba/auth/repository1'
Renamed '/app/dctm/product/16.4/share/temp/replicate/RepoTemplate' to '/app/dctm/product/16.4/share/temp/replicate/repository1'
Renamed '/app/dctm/product/16.4/share/temp/ldif/RepoTemplate' to '/app/dctm/product/16.4/share/temp/ldif/repository1'
Renamed '/app/dctm/product/16.4/server_uninstall/delete_db/RepoTemplate' to '/app/dctm/product/16.4/server_uninstall/delete_db/repository1'
Finished processing Directory Changes...
...
Processing Services File Changes...
Backed up '/etc/services' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/services_docbase_RepoTemplate.backup'
ERROR...Couldn't update file: /etc/services (Permission denied)
ERROR...Please update services file '/etc/services' manually with root account
Finished changing docbase name 'RepoTemplate'

Finished changing docbase name....
End: 2019-02-01 19:32:23.791

Here the error is justified… Let’s change the service name manually.

3.d Change the service
As root, change the service name:

[root@vmtestdctm01 ~]$ vi /etc/services
...
repository1				49402/tcp               # DCTM repository native connection
repository1_s       	49403/tcp               # DCTM repository secure connection

3.e Change back the Database name in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

3.f Start the Docbroker and the Docbase

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

3.g Check the docbase log

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-01T19:43:15.677455	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 16594, session 010f449880000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:15.677967	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16595, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:16.680391	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16606, session 010f44988000000b) is started sucessfully." 

You might say that the log name is still RepoTemplate.log ;) Yes! That is because, in my case, the docbase name and the server config name were the same before I changed the docbase name:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/repository1/dbpasswd.txt
service = repository1
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin

Be patient, in the next episode we will see how we can change the server name :)

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

It’s not very nice to keep the old description of the docbase… Use the idql request below to change it:

Update dm_docbase_config object set title='Renamed Repository' where object_name='repository1';

Check after change:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
...
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Renamed Repository
...

Check the document created before the migration:
r_object_id : 090f449880001125

API> dump,c,090f449880001125
...
USER ATTRIBUTES

  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...
5. Conclusion

Well, the tool works, but as you saw, we needed a workaround to make the change. That is not great; I hope it will be fixed in future versions.
In the next episode I will change the server config name, see you there ;)

The article Documentum – MigrationUtil – 2 – Change Docbase Name appeared first on Blog dbi services.

In-transit encryption for boot and block volumes - New Feature - Jan 2019 - Oracle Cloud Infrastructure

Senthil Rajendran - Tue, 2019-02-05 01:15
New Feature : In-transit encryption for boot and block volumes
Services : Block Volume
Release Month : Jan 2019

Data is often considered less secure when in motion. It could be moving between two servers, two data centers, two services, between cloud and on-premises, or between two cloud providers. Wherever data is moving, protection methods should be implemented for critical in-transit data. While organizations care more about data at rest, protecting sensitive data in transit should also be given high importance, as attackers keep finding new methods to steal data.

Encryption is the best way to protect data in transit. This is done by encrypting the data before sending it, authenticating the endpoints, and decrypting the data once it is received.

The OCI Block Volume service encrypts all block volumes and their backups at rest using the AES (Advanced Encryption Standard) algorithm with 256-bit keys. Data moving between the instance and the block volume is transferred over an internal, highly secure network. With this feature announcement, that transfer can also be encrypted for paravirtualized volume attachments on virtual machines.

Optionally, you can use encryption keys managed by the Key Management service for volume encryption. If no such key is specified, an Oracle-provided encryption key is used; this applies to both data at rest and data in transit.

As shown above, when you specify a key for the block volume at creation time, the same key will be used for in-transit encryption as well.


[FREE Live Masterclass] Cloud Security Using Oracle IDCS: Career Path & What to Learn

Online Apps DBA - Tue, 2019-02-05 00:00

[FREE Live Masterclass] Oracle Identity Cloud Service allows both on-premises and cloud resources to be secured from a single set of controls… Sounds interesting? So get ready and plan your journey towards Oracle Identity Cloud Service and become an IDCS expert. Visit: https://k21academy.com/idcs02 & engage yourself in our Free Masterclass on Cloud Security Using […]

The post [FREE Live Masterclass] Cloud Security Using Oracle IDCS: Career Path & What to Learn appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Python cx_Oracle 7.1's Connection Fix-up Callback Improves Application Scalability

Christopher Jones - Mon, 2019-02-04 18:09

cx_Oracle 7.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

Another great release of cx_Oracle is available from PyPI, this time with a focus on session pooling. There were also a number of incremental improvements and fixes, all detailed in the release notes.

Session Pooling

When applications use a lot of connections for short periods, Oracle recommends using a session pool for efficiency. The session pool is a pool of connections to Oracle Database. (For all practical purposes, a 'session' is the same as a 'connection'). Many applications set some kind of state in connections (e.g. using ALTER SESSION commands to set the date format, or a time zone) before executing the 'real' application SQL. Pooled connections will retain this state after they have been released back to the pool with conn.close() or pool.release(), and the next user of the connection will see the same state. However, because the number of connections in a pool can vary over time, or connections in the pool can be recreated, there is no guarantee a subsequent pool.acquire() call will return a database connection that has any particular state. In previous versions of cx_Oracle, any ALTER SESSION commands had to be run after each and every pool.acquire() call. This added load and reduced system efficiency.
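
To make the problem concrete, here is a minimal sketch of the pre-7.1 pattern, where the session state has to be re-applied after every acquire; the credentials and connect string are placeholders:

import cx_Oracle

# Placeholder credentials and connect string
pool = cx_Oracle.SessionPool("appuser", "apppassword", "dbhost.example.com/orclpdb1",
                             min=2, max=10, increment=1, threaded=True)

conn = pool.acquire()
cursor = conn.cursor()
# Without a session callback, state like this must be re-set after every acquire()
cursor.execute("alter session set nls_date_format = 'YYYY-MM-DD'")
cursor.execute("select sysdate from dual")
print(cursor.fetchone())
conn.close()   # releases the connection back to the pool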

In cx_Oracle 7.1, a new cx_Oracle.SessionPool() option 'sessionCallback' reduces configuration overhead, as featured in the three scenarios shown below. Further details on session callbacks can be found in my post about the equivalent feature set in node-oracledb.

Scenario 1: All Connections Should Have the Same State

When all connections in a pool should have exactly the same state, you can set sessionCallback to a Python function:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("alter session ....")

pool = cx_Oracle.SessionPool(un, pw, connectstring,
                             sessionCallback=InitSession, threaded=True)
. . .

The function InitSession will be called whenever a pool.acquire() call selects a newly created database connection in the pool that has not been used before. It will not be called if the connection in the pool was previously used by the application. It is called before pool.acquire() returns. The big advantage is that it saves the cost of altering session state if a previous user of the connection has already set it. Also the current caller of pool.acquire() can always assume the correct state is set.

If you need to execute more than one SQL statement in the callback, use a PL/SQL block to reduce round-trips between Python and the database:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("""
        begin
            execute immediate
                'alter session set nls_date_format = ''YYYY-MM-DD''
                                   nls_language = AMERICAN';
            -- other SQL statements could be put here
        end;""")

The requestedTag parameter is shown in the next section.

Scenario 2: Connections Need Different State

When callers of pool.acquire() need different session states, for example if they need different time zones, then session tagging can be used in conjunction with sessionCallback. See SessionCallback.py for a runnable example.

A tag is a semi-arbitrary string that you assign to connections before you release them to the pool. Typically a tag represents the session state you have set in the connection. Note that when cx_Oracle is using Oracle Client 12.2 (or later) libraries then tags are multi-property and must be in the form of one or more "name=value" pairs, separated by a semi-colon. You can choose the property names and values.

Subsequent pool.acquire() calls may request a connection be returned that has a particular tag already set, for example:

conn = pool.acquire(tag="NLS_DATE_FORMAT=SIMPLE")

This will do one of:

  • Select an existing connection in the pool that has the requested tag. In this case, the sessionCallback function is NOT called.

  • Select a new, previously unused connection in the pool (which will have no tag) and call the sessionCallback function.

  • Select a previously used connection in the pool that has a different tag. The existing session state and tag are cleared, and the sessionCallback function is called.

An optional matchanytag parameter can be used:

conn = pool.acquire(tag="TIME_ZONE=MST", matchanytag=True)

In this case, a connection that has a different tag may be selected from the pool (if a match can't be found) and the sessionCallback function will be invoked.

When the callback is executed, it can compare the requested tag with the tag that the connection currently has. It can then set the desired connection state and update the connection's tag to represent that state. The connection is then returned to the application by the pool.acquire() call:

def InitSession(conn, requestedTag):
    # Display the requested and actual tags
    print("InitSession(): requested tag=%r, actual tag=%r" % (requestedTag, conn.tag))
    # Compare the requested and actual tags and set some state . . .
    cursor = conn.cursor()
    cursor.execute("alter session ....")
    # Assign the requested tag to the connection so that when the connection
    # is closed, it will automatically be retagged
    conn.tag = requestedTag

The sessionCallback function is always called before pool.acquire() returns.

The underlying Oracle Session Pool tries to optimally select a connection from the pool. Overall, a pool.acquire() call will try to return a connection which has the requested tag string or tag properties, therefore avoiding invoking the sessionCallback function.
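
Putting Scenario 2 together, here is a minimal sketch of acquiring a tagged connection, setting and retagging the state in the callback, and releasing the connection back to the pool; the credentials, connect string and the choice of NLS_DATE_FORMAT as the tagged state are assumptions for illustration:

import cx_Oracle

def InitSession(conn, requestedTag):
    # Set whatever state the requested tag implies; here just a date format
    cursor = conn.cursor()
    cursor.execute("alter session set nls_date_format = 'YYYY-MM-DD'")
    # Retag the connection so the state is known the next time it is acquired
    conn.tag = requestedTag

pool = cx_Oracle.SessionPool("appuser", "apppassword", "dbhost.example.com/orclpdb1",
                             min=2, max=10, increment=1,
                             sessionCallback=InitSession, threaded=True)

conn = pool.acquire(tag="NLS_DATE_FORMAT=SIMPLE")   # callback runs only when needed
cursor = conn.cursor()
cursor.execute("select to_char(sysdate) from dual")
print(cursor.fetchone())
conn.close()   # the connection keeps its tag for the next acquire()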

Scenario 3: Using Database Resident Connection Pooling (DRCP)

When using Oracle client libraries 12.2 (or later) the sessionCallback can alternatively be a PL/SQL procedure. Instead of setting sessionCallback to a Python function, set it to a string containing the name of a PL/SQL procedure, for example:

pool = cx_Oracle.SessionPool(un, pw, connectstring, sessionCallback="myPlsqlCallback", threaded=True)

The procedure has the declaration:

PROCEDURE myPlsqlCallback ( requestedTag IN VARCHAR2, actualTag IN VARCHAR2 );

For an example PL/SQL callback, see SessionCallbackPLSQL.py.

The PL/SQL procedure is called only when the properties in the requested connection tag do not match the properties in the actual tag of the connection that was selected from the pool. The callback can then change the state before pool.acquire() returns to the application.

When DRCP connections are being used, invoking the PL/SQL callback procedure does not need round-trips between Python and the database. In comparison, a complex (or badly coded) Python callback function could require lots of round-trips, depending on how many ALTER SESSION or other SQL statements it executes.

A PL/SQL callback can also be used without DRCP; in this case invoking the callback requires just one round-trip.

Summary

cx_Oracle 7.1 is a solid release which should particularly please session pool users.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Facebook group: https://www.facebook.com/groups/418337538611212/

Questions: github.com/oracle/python-cx_Oracle/issues

Microsoft Azure: Regions and Availability Zones

Dietrich Schroff - Mon, 2019-02-04 13:07
Before running a VM or any other service inside Microsoft Azure, let's take a look at where the servers can be placed:
This looks much the same as the regions at AWS.
Here is a better view of Europe:

Each region has several availability zones (the same wording as AWS):
The definitions given by Microsoft Azure:
Regions: A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
With more global regions than any other cloud provider, Azure gives customers the flexibility to deploy applications where they need to. Azure is generally available in 42 regions around the world, with plans announced for 12 additional regions.

Availability Zones: Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking.
Availability Zones allow customers to run mission-critical applications with high availability and low-latency replication.

EBS 12.2.8 VM Virtual Appliance Now Available

Steven Chan - Mon, 2019-02-04 10:50

We are pleased to announce that the Oracle E-Business Suite Release 12.2.8 VM Virtual Appliance is now available from the Oracle Software Delivery Cloud.

You can use this appliance to create an Oracle E-Business Suite 12.2.8 Vision demonstration instance on a single, unified virtual machine containing both the database tier and the application tier.

Use with Oracle VM Manager or Oracle VM VirtualBox

This virtual appliance can be imported into Oracle VM Manager to deploy an Oracle E-Business Suite Linux 64-bit environment on compatible server-class machines running Oracle VM Server. It can also be imported into Oracle VM VirtualBox to create a virtual machine on a desktop PC or laptop.

Note: This virtual appliance is for on-premises use only. If you're interested in running Oracle E-Business Suite on Oracle Cloud, see Getting Started with Oracle E-Business Suite on Oracle Cloud (My Oracle Support Knowledge Document 2066260.1) for more information.

EBS Technology Stack Components

The Oracle E-Business Suite 12.2.8 VM virtual appliance delivers the full software stack, including the Oracle Linux 6.10 (64-bit) operating system, Oracle E-Business Suite, and additional required technology components.

The embedded technology components and their versions are listed in the table below:

Technology Component          Version (and associated MOS Doc ID if applicable)
RDBMS Oracle Home             12.1.0.2
Application Code Level        • Oracle E-Business Suite 12.2.8 Release Update Pack (Doc ID 2393248.1)
                              • R12.AD.C.Delta.10 and R12.TXK.C.Delta.10 (Doc ID 1617461.1)
                              • Consolidated technology patches for database and application tier techstack components (Doc ID 1594274.1)
                              • Oracle E-Business Suite Data Removal Tool (DRT) patches (Doc ID 2388237.1)
Oracle Forms and Reports      10.1.2.3
WebLogic Server               10.3.6
Web Tier                      11.1.1.9
JDK                           JDK 1.7.0_201
Java Plugin                   J2SE 1.7
Critical Patch Update (CPU)   October 2018 (Doc ID 2445688.1)

REST Services Availability

With Oracle VM Virtual Appliance for Oracle E-Business Suite Release 12.2.8, the required setup tasks for enabling Oracle E-Business Suite REST services provided through Oracle E-Business Suite Integrated SOA Gateway are already preconfigured. This means that Oracle E-Business Suite integration interface definitions published in Oracle Integration Repository, a component in Oracle E-Business Suite Integrated SOA Gateway, are available for REST service deployment.

Note that PL/SQL APIs, Java Bean Services, Application Module Services, Open Interface Tables and Views and Concurrent Programs are all REST-enabled in this VM virtual appliance.

Categories: APPS Blogs

Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory

Oracle Press Releases - Mon, 2019-02-04 07:00
Press Release
Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory Built-in artificial intelligence and intuitive dashboards help retailers prevent overstocking and boost customer satisfaction

Redwood Shores, Calif.—Feb 4, 2019

Retailers can now improve inventory management through a single view of demand throughout their entire product lifecycle with the next generation Oracle Retail Demand Forecasting Cloud Service. With built-in machine learning, artificial intelligence and decision science, the offering enables retailers to gain pervasive value across retail processes, allowing for optimal planning strategies, decreased operational costs, and enhanced customer satisfaction. In addition, modern, intuitive dashboards improve operational agility and workflows, adapting immediately to new information to improve inventory outcomes.

The offering is part of Oracle’s Platform for Modern Retail, built on a cloud-native platform and aligned to the Oracle Retail Planning and Optimization portfolio. Learn more about Oracle Retail Demand Forecasting Cloud Service here.

“As customer trends continue to evolve faster than ever before, it’s imperative that retailers move quickly to optimize inventory and demand. Too little inventory and customers are dissatisfied. Too much and retailers have a bottom line problem that leads to unprofitable discounting,” said Jeff Warren, vice president, Oracle Retail. “We have distilled over 15 years of forecasting experience across hundreds of retailers worldwide into a comprehensive and modern solution that maximizes the forecast accuracy for the entire product lifecycle. Our customers asked, and we delivered.”

For example, the offering was evaluated by a major specialty retailer against 2.2M units sold over the 2018 holiday season, representing over $480M in revenue. With the forecast accuracy improvements, the retailer was able to achieve the same sales with at least 345K fewer units of inventory. In tandem, the retailer improved 70 percent of forecasts using completely automated next-generation forecasting data science. These results gave the retailer the confidence to decrease safety stock by 10 percent, reduce overall inventory by 30 percent and improve in-stock rates by 10 percent through smarter placement of the same inventory, all while delivering the same level of service to customers.

“As unified commerce sales grow, the ability to support all four business activities (demand planning, supply planning, inventory planning, and sales and operations execution/merchandising, inventory and operations execution) across all sales channels becomes even more important. A 2017 Gartner survey of supply chain executives highlighted the importance organizations place on their planning capabilities.” Of the “top three investment areas from 2016 through 2017, 36% of retail respondents cited upgrading their demand management capabilities,” wrote Gartner experts Mike Griswold and Alex Pradhan. Source:  Gartner Market Guide for Retail Forecasting and Replenishment Solutions, December 31, 2018

Maximizing Forecast Accuracy Throughout the Product Lifecycle

With the next generation Oracle Retail Demand Forecasting Cloud Service, retailers can:

  • Tailor approaches for short and long lifecycle products, maximizing forecast accuracy for the entire product lifecycle
  • Adapt to recent trends, seasonality, out-of-stocks, and promotions; and reflect retailers’ unique demand drivers, delivering better customer experience from engagement to sale, to fulfillment
  • Leverage dashboard views to support day-in-the-life forecasting workflows such as forecast overview, forecast scorecard, exceptions and forecast approvals
  • Gain transparency across the entire supply chain that enables analytical processes and end-users to understand and engage with the forecast, increasing inventory productivity
  • Coordinate and simulate demand-driven outcomes using forecasts that adapt immediately to new information and without a dependency on batch processes, driving operational agility
Contact Info
Kristin Reeves
Oracle
925-787-6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • 925-787-6744

[Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue

Online Apps DBA - Mon, 2019-02-04 06:49

We installed the GoldenGate software with an Oracle 11.2.0.4 database and configured all the bidirectional parameters, but are still facing an Oracle GoldenGate bidirectional replication issue… Worry not! We are here. Visit: https://k21academy.com/goldengate33 and consider our new blog covering: ✔ What is Bi-Directional Replication and its Capabilities ✔ Issues in Bi-Directional Replication ✔ Cause and Solution for the Issue […]

The post [Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

ORA-600 [ossnet_assign_msgid_1] on Exadata

Syed Jaffar - Mon, 2019-02-04 06:10
On an Exadata system running Oracle 12.1, a MERGE statement with parallelism was frequently failing with the ORA errors below:

                   ORA-12805: parallel query server died unexpectedly
        ORA-06512

A quick look in the alert.log shows an ORA-600:

ORA-00600: internal error code, arguments: [ossnet_assign_msgid_1], [],[ ] 

The best and easiest way to diagnose any ORA-600 error is to use the ORA-600 lookup tool available on MOS.

In our case, with a large hash join, the following MOS note helped to fix the issue:

On Exadata Systems large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1] (Doc ID 2254344.1)

Cause:
On Exadata systems, large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1], and the root cause is often a too-small default value for _smm_auto_min_io_size and _smm_auto_max_io_size.

The workaround to fix the issue is to set the following underscore (_) parameters:

_smm_auto_max_io_size = 2048
_smm_auto_min_io_size = 256
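
As an illustration of applying that workaround, here is a minimal sketch; underscore parameters must be double-quoted, SCOPE=SPFILE defers the change until the next instance restart, and a DBA would typically run the two ALTER SYSTEM statements directly in SQL*Plus rather than from Python as shown here (connection details are placeholders):

import cx_Oracle

# Connect with SYSDBA privileges (placeholder credentials and connect string)
conn = cx_Oracle.connect("sys", "change_me", "exadb-scan.example.com/ORCL",
                         mode=cx_Oracle.SYSDBA)
cursor = conn.cursor()

# Values quoted from MOS note 2254344.1 above
cursor.execute('ALTER SYSTEM SET "_smm_auto_max_io_size" = 2048 SCOPE=SPFILE')
cursor.execute('ALTER SYSTEM SET "_smm_auto_min_io_size" = 256 SCOPE=SPFILE')
conn.close()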

In some cases, the MOS notes below help to fix the ORA-600 [ossnet_assign_msgid_1] error.

ORA-600 [ossnet_assign_msgid_1] (Doc ID 1522389.1)
Bug 14512766 : ORA-600 [OSSNET_ASSIGN_MSGID_1] DURING RMAN CONVERSION

Consistent device paths for block volumes - New Feature - Jan 2019 - Oracle Cloud Infrastructure

Senthil Rajendran - Mon, 2019-02-04 04:53
New Feature : Consistent device paths for block volumes
Services : Block Volume
Release Date : Jan 2019

With this feature you can now select a device path that will remain consistent across instance reboots. Though this is an optional feature, it is recommended to use the device path, as you can refer to the volumes by it when creating partitions, creating file systems, and mounting file systems; you can also use this path in the /etc/fstab file to automatically mount volumes at instance boot.

Linux operating system images released by Oracle prior to November 2018 cannot use this feature. Windows-based images, custom images, and partner images are not supported.

To verify whether consistent device path support is available on your instance, log in to your environment and run "ll /dev/oracleoci/oraclevd*": if you see a list of devices, the feature is supported; if you get the message "no such file or directory", it is not.

Screenshot showing output for listing attached devices on instance using consistent device paths



Attaching a device path in the console is done simply by selecting a device path for the block volume. Once attached, you can verify the block volume from the summary page.

Device Path : /dev/oracleoci/oraclevdb


After attaching the device, you can create a partition from the operating system using the device path:

fdisk  /dev/oracleoci/oraclevdb 
mkfs.ext3 /dev/oracleoci/oraclevdb1
update : /etc/fstab --- /dev/oracleoci/oraclevdb1   /oradata    ext3    defaults,_netdev,noatime  0  2
mkdir /oradata
mount /dev/oracleoci/oraclevdb1 /oradata

Cross Field Form Validation in Oracle JET

Andrejus Baranovski - Mon, 2019-02-04 03:09
JET keeps evolving, and in the latest versions the toolkit provides improved support for form cross-field validation. It is much easier to implement validation than it was before. I will show it in this example.

Example of the data entry form. Validation logic:

- Invoice Date before Payment Due Date and Payment Date
- Payment Due Date before Payment Date


Example when two fields fail validation:


JET provides a component called validation group. A form can be wrapped by this component to identify whether any validation errors are reported in it. For example, when calling a JS function, before proceeding with the function code we can check if the validation group contains errors:


An input field can be assigned a custom validator function:


Example of validation function code where the cross-field validation logic is implemented: we compare the field value with the other fields, and if the validation rule condition is false, a validation error is thrown:


Example of function code where the validation group is checked for errors. If there are errors in the current validation group, they are displayed and the first field with an error is focused:


Download sample code from my GitHub repo.

Documentum – Process Builder Installation Fails

Yann Neuhaus - Mon, 2019-02-04 01:25

A couple of weeks ago, at a customer, I received an incident from the application team regarding an error that occurred when installing Process Builder. The error message was:
The Process Engine license has not been enabled or is invalid in the ‘RADEV’ repository.
The Process Engine license must be enabled to use the Process Builder.
Please see your system administrator.”

The error appears when selecting the repository:

Before investigating this incident I had to learn more about Process Builder, as it is usually managed by the application team.
In fact, Documentum Process Builder is software for creating business process templates, used to formalize the steps required to complete a business process such as an approval process; the goal is to extend the basic functionality of Documentum Workflow Manager.
It is a client application that can be installed on any computer, but before installing Process Builder you need to prepare your Content Server and repository by installing the Process Engine, because the CS handles the check in, check out, versioning and archiving, and all processes created are saved in the repository… Hmm, so maybe the issue is that my content server or repository is not well configured?

To rule out the client side, I asked the application team to confirm the docbroker and port configured in C:\Documentum\Config\dfc.properties.

On the Content Server side, we used the Process Engine installer, which installs the Process Engine on all repositories served by the Content Server, deploys the bpm.ear file on the Java Method Server, and installs the DAR files on each repository.

So let’s check the installation:

1. The BPM url http://Server:9080/bpm/modules.jsp is reachable:

2. No error in the bpm log file $JBOSS_HOME/server/DctmServer_MethodServer/logs/bpm-runtime.log.

3. BPM and XCP DARs are correctly installed in the repository:

select r_object_id, object_name, r_creation_date from dmc_dar where object_name in ('BPM', 'xcp');
080f42a480026d98 BPM 8/29/2018 10:43:35
080f42a48002697d xcp 8/29/2018 10:42:11

4. The Process Engine module is missing from the docbase configuration:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3

We know the root cause of this incident now :D
To resolve the issue, add the Process Engine module to the docbase config:

API>fetch,c,docbaseconfig
API>append,c,l,r_module_name
Process Engine
API>append,c,l,r_module_mode
3
API>save,c,l

Check after update:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
								[5]: Process Engine
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3
								[5]: 3
		...

Then I asked the application team to retry the installation, and the issue was resolved.

No manual docbase configuration is mentioned as required in the Process Engine Installation Guide, so I guess the Process Engine installer should handle it automatically.
I will install a new environment in the next few days/weeks and will keep you informed if there is any news ;)

The article Documentum – Process Builder Installation Fails appeared first on Blog dbi services.

Batch Scheduler Integration Questions

Anthony Shorten - Sun, 2019-02-03 21:57

One of the most common questions I get from partners is around batch scheduling and execution. Oracle Utilities Application Framework has a flexible set of methods of managing, executing and monitoring batch processes. The alternatives available are as follows:

  • Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler to define the schedules and execute product batch processes with non-product processes, at an enterprise level, then the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers to execute the process. This allows scheduling to be managed by the third party scheduler and the scripts to be used to execute and manage product batch processes. The scripts return standard return codes that the scheduler to use to determine next actions if necessary. For details of the command line utilities refer to the Server Administration Guide supplied with your version of the product.
  • Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API to allow implementations to use the Oracle DBMS Scheduler included in all editions of the database to be used as local or enterprise wide scheduler. The advantage of this is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. For more details of the scheduler interface refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
  • Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product.

When asked about which technology should be used I tend to recommend the following:

  • If you have an existing investment, that you want to retain, in a third party scheduler then use the command line interface. This will retain your existing investment and you can integrate across products or even integrate non-product batch such as backups from the same scheduler.
  • If you do not have an existing scheduler, then consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their tasks, and it is used by a lot of Oracle products already. The advantage of this scheduler is that you already have the license somewhere in your organization. It can be deployed locally within the product database or remotely as an enterprise wide solution. It has a lot of good features, and Oracle Utilities will use this scheduler as a foundation of our cloud implementations. If you are on the cloud, then use the provided interface in Oracle Utilities Cloud Service Foundation; if you have an external scheduler, use the REST based Scheduler API. If you are on-premise, then use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you like pl/sql type administration (a generic DBMS_SCHEDULER sketch appears after this list).

Note: Scheduler Central in Oracle Enterprise Manager is included in the base functionality for Oracle Enterprise Manager and does not require any additional packs.

  • I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using Oracle Utilities Testing Accelerator or do not have the scheduler implemented). It has very limited support and will only execute individual processes.
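
For reference, here is a minimal, generic sketch of defining a job with the database's DBMS_SCHEDULER package, as mentioned in the DBMS Scheduler bullet above. It is driven from Python purely for illustration; the job name, script path, schedule and connection details are placeholders, and this is not the Oracle Utilities Application Framework-specific scheduler API:

import cx_Oracle

conn = cx_Oracle.connect("batch_admin", "change_me", "dbhost.example.com/orclpdb1")
cursor = conn.cursor()

# Generic DBMS_SCHEDULER job: run a placeholder shell script every night at 01:00
cursor.execute("""
    begin
        dbms_scheduler.create_job(
            job_name        => 'NIGHTLY_BATCH_EXAMPLE',
            job_type        => 'EXECUTABLE',
            job_action      => '/u01/app/scripts/run_batch.sh',
            repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=0',
            enabled         => TRUE);
    end;""")
conn.close()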


Java 11: JEP 333 ZGC A Scalable Low-Latency Garbage Collector

Dietrich Schroff - Sat, 2019-02-02 14:34
After I found the strange "No-Op Garbage Collector", I was keen to see whether there are some other new GC features in Java 11.


There is another JEP with the number 333:
 If you look here, the goals are:
  • GC pause times should not exceed 10ms
  • Handle heaps ranging from relatively small (a few hundreds of megabytes) to very large (many terabytes) in size
  • No more than 15% application throughput reduction compared to using G1
  • Lay a foundation for future GC features and optimizations leveraging colored pointers and load barriers
  • Initially supported platform: Linux/x64
Inside JEP 333 some performance numbers are provided:
Below are typical GC pause times from the same benchmark. ZGC manages to stay well below the 10ms goal. Note that exact numbers can vary (both up and down, but not significantly) depending on the exact machine and setup used.
(Lower is better)
ZGC
avg: 1.091ms (+/-0.215ms)
95th percentile: 1.380ms
99th percentile: 1.512ms
99.9th percentile: 1.663ms
99.99th percentile: 1.681ms
max: 1.681ms

G1
avg: 156.806ms (+/-71.126ms)
95th percentile: 316.672ms
99th percentile: 428.095ms
99.9th percentile: 543.846ms
99.99th percentile: 543.846ms
max: 543.846ms
This looks very promising. But within the limitations you can read that it will take some more time until this can be used everywhere:
The initial experimental version of ZGC will not have support for class unloading. The ClassUnloading and ClassUnloadingWithConcurrentMark options will be disabled by default. Enabling them will have no effect.
Also, ZGC will initially not have support for JVMCI (i.e. Graal). An error message will be printed if the EnableJVMCI option is enabled.
These limitations will be addressed at a later stage in this project.
Nevertheless, you can use this GC with the command line arguments
-XX:+UnlockExperimentalVMOptions -XX:+UseZGC
For more information take a look here: https://wiki.openjdk.java.net/display/zgc/Main


SELECT FOR UPDATE SKIP LOCKED

Tom Kyte - Fri, 2019-02-01 16:46
Hi Team. We have a scenario where we select a particular set of rows from a table for further processing. We need to ensure that multiple users do not work on the same set of rows. We use SELECT FOR UPDATE SKIP LOCKED in order to achieve this. EG: a simp...
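
As a minimal sketch of the pattern the question describes (table and column names are hypothetical), each worker session can lock its own batch of rows and skip rows already locked by other sessions:

import cx_Oracle

conn = cx_Oracle.connect("app_user", "change_me", "dbhost.example.com/orclpdb1")
cursor = conn.cursor()

# Lock up to ten unprocessed rows, silently skipping rows locked by other sessions
cursor.execute("""
    select id, payload
      from work_queue
     where status = 'NEW'
       and rownum <= 10
       for update skip locked""")
rows = cursor.fetchall()

# ... process the rows, mark them done, then commit to release the locks
for row_id, payload in rows:
    cursor.execute("update work_queue set status = 'DONE' where id = :id", id=row_id)
conn.commit()
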
Categories: DBA Blogs

Sending e-mail! -- Oracle 8i specific response

Tom Kyte - Fri, 2019-02-01 16:46
How can I automatically send personalized email to clients registered in my portal www.intrainternet.com, using the information stored in our Oracle 8i database?
Categories: DBA Blogs

Fusion and in-app Guides

Duncan Davies - Fri, 2019-02-01 05:00

I’ve heard a couple of clients recently mentioning that they’d like to have some kind of in-app guide setup to walk their self-service users through common tasks in Fusion, so I thought I’d investigate.

There are quite a few companies that operate in the same area, here are the ones that I found with a short googling session:

Service    Cost
Appcues    Varies
Iridize    Acquired by Oracle, now known as Oracle Guided Learning
MyGuide    $1-3/user/month
Pendo      Varies
Toonimo    Request a Quote
Userlane   Request a Quote
WalkMe     Request a Quote
Whatfix    Request a Quote

There are also a couple of open-source alternatives, Joyride and Bootstrap Tour. Although they're free to use, you're going to need to write code to get anything up and running, so they'd be significantly higher maintenance.

Over the next few weeks I’ll investigate some of these options and post the results.

Recover dropped tables with Virtual Access Restore in #Exasol

The Oracle Instructor - Fri, 2019-02-01 04:34

The technique to recover only certain objects from an ordinary backup is called Virtual Access Restore. It means you create a database from the backup that contains only the minimum elements needed to access the objects you request. This database is then removed afterwards.

Let’s see an example. This is my initial setup:

EXAoperation Database page

One database in a 2+1 cluster. Yes it’s tiny because it lives on my notebook in VirtualBox. See here how you can get that too.

It uses the data volume v0000 and I took a backup into the archive volume v0002 already.

EXAoperation volumes

I have a schema named RETAIL there with the table SALES:

RETAIL.SALES

By mistake, that table gets dropped:

drop table

And I’m on AUTOCOMMIT, otherwise this could be rolled back in Exasol. Virtual Access Restore to the rescue!

First I need another data volume:

second data volume

Notice the size of the new volume: it is smaller than the overall size of the backup and of the “production database”! I did that to prove that space is not much of a concern here.

Then I add a second database to the cluster that uses that volume. The connection port (8564) must be different from the port used by the first database and the DB RAM in total must not exceed the licensed size, which is limited to 4 GB RAM in my case:

second database

I did not start that database because for the restore procedure it has to be down anyway. Clicking on the DB Name and then on the Backups button gets me here:

Foreign database backups

No backup shown yet because I didn’t take any backups with exa_db2. Clicking on Show foreign database backups:

Backup choice

The Expiration date must be empty for a Virtual Access Restore, so I just remove it and click Apply. Then I select the Restore Type as Virtual Access and click Restore:

Virtual Access Restore

This will automatically start the second database:

Two databases in one cluster

I connect to exa_db2 with EXAplus, where the Schema Browser gives me the DDL for the table SALES:

ExaPlus Schema Browser get DDL

I take that to exa_db1 and run it there, which gives me the table back, but empty. Next I create a connection from exa_db1 to exa_db2 and import the table:

create connection exa_db2 
to '192.168.43.11..13:8564' 
user 'sys' identified by 'exasol';

import into retail.sales 
from exa at exa_db2 
table retail.sales;

This took about 2 Minutes:

Import

The second database and then the second data volume can now be dropped. Problem solved!


Categories: DBA Blogs

Continuing the Journey

Steven Chan - Fri, 2019-02-01 03:50

Greetings, EBS Technology Blog readers!

Speaking from my own experience as an Oracle E-Business Suite customer, this blog served as my go-to place for information regarding Oracle E-Business Suite Technology. So I understand first-hand the importance of this blog to you.

We in the EBS Technology Product Management and Development Teams are grateful to Steven for his leadership in the continuing refinement and usability of Oracle E-Business Suite, and his pioneering use of this blog to better keep in touch with you, our customers. Personally, I could not have asked for a better mentor. Wishing you all the best, Steven! And, hoping our paths cross again soon and often.

On behalf of the whole team, let me stress our continued commitment to the blog, and intention of operating with Steven's original guiding principle in mind: to bring you the information you need, when you need it. And ultimately, to keep improving how we do this.

Steven's new path has left us with some rather large shoes to fill, so going forward, this blog will bring you the distinctive individual voices of a number of highly experienced experts from our team who will be giving you their own unique insights into what we have delivered. Over the next few weeks, Kevin and I will be re-introducing you to some of our existing and frequent contributors and introducing you to new blog authors.

Our key goal is to continue to provide you, our readers and customers, the very latest news direct from Oracle E-Business Suite Development. And last but by no means least, we look forward to hearing your comments and feedback as we continue the journey of this blog.

Categories: APPS Blogs

Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere

Pas Apicella - Thu, 2019-01-31 19:47
I decided to install Spinnaker into one of the clusters of my vSphere PKS installation. Here is how I did this, step by step.

1. You will need PKS installed, which I have on vSphere with PKS 1.2 using NSX-T. Here is a screenshot of that showing the Ops Manager UI.


Make sure your PKS plans have these checkboxes enabled; without them checked, Spinnaker will not install using the Helm chart we will be using below.


2. In my setup I created a datastore which will be used by my K8s cluster. This is optional; you can set up the PVCs however you see fit.



3. Now it's assumed you have a K8s cluster, which I have as shown below. I used the PKS CLI to create a small cluster of 1 master node and 3 worker nodes.

$ pks cluster lemons

Name:                     lemons
Plan Name:                small
UUID:                     19318553-472d-4bb5-9783-425ce5626149
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   lemons.haas-65.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.y.y.y
Network Profile Name:

4. Create a Storage Class as follows; notice how we reference our vSphere datastore named "k8s" as per step 2.

$ kubectl create -f storage-class-vsphere.yaml

Note: storage-class-vsphere.yaml defined as follows

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: k8s
  diskformat: thin
  fstype: ext3

5. Set this Storage Class as the default

$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Verify

papicella@papicella:~$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   14h

6. Install helm as shown below

$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ sleep 10
$ helm ls

Note: rbac-config.yaml defined as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

7. Install Spinnaker into your K8s cluster as follows:

$ helm install --name myspinnaker stable/spinnaker --timeout 6000 --debug

If everything worked

papicella@papicella:~$ kubectl get pods
NAME                                  READY     STATUS      RESTARTS   AGE
myspinnaker-install-using-hal-gbd96   0/1       Completed   0          14m
myspinnaker-minio-5d4c999f8b-ttm7f    1/1       Running     0          14m
myspinnaker-redis-master-0            1/1       Running     0          14m
myspinnaker-spinnaker-halyard-0       1/1       Running     0          14m
spin-clouddriver-7b8cd6f964-ksksl     1/1       Running     0          12m
spin-deck-749c84fd77-j2t4h            1/1       Running     0          12m
spin-echo-5b9fd6f9fd-k62kd            1/1       Running     0          12m
spin-front50-6bfffdbbf8-v4cr4         1/1       Running     1          12m
spin-gate-6c4959fc85-lj52h            1/1       Running     0          12m
spin-igor-5f6756d8d7-zrbkw            1/1       Running     0          12m
spin-orca-5dcb7d79f7-v7cds            1/1       Running     0          12m
spin-rosco-7cb8bd4849-c44wg           1/1       Running     0          12m

8. At the end of the Helm command, once complete, you will see output as follows:

1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

9. Go ahead and run these commands to connect to the Spinnaker UI via your localhost:

$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

10. Browse to http://127.0.0.1:9000



More Information

Spinnaker
https://www.spinnaker.io/

Pivotal Container Service
https://pivotal.io/platform/pivotal-container-service


Categories: Fusion Middleware
