Feed aggregator

Passing dbms_sql.desc_tab (or desc_tab2) as a function parameter

Tom Kyte - Mon, 2017-05-29 06:26
Hi, I'm having issues with the use of certain parts of the dbms_sql package. What I am trying to do, in brief, is: * take an existing sys_refcursor and convert it for use with the dbms_sql package using <code>app_curs := dbms_sql.to_cursor_num...
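For readers skimming the feed, the conversion pattern in question looks roughly like this minimal sketch (table, credentials and output are placeholders; a desc_tab populated by describe_columns can then be passed to any procedure whose formal parameter is declared as dbms_sql.desc_tab):

sqlplus -S scott/tiger <<'EOF'
set serveroutput on
declare
  l_rc   sys_refcursor;
  l_cur  integer;
  l_cnt  integer;
  l_desc dbms_sql.desc_tab;
begin
  open l_rc for select * from emp;
  -- convert the ref cursor into a dbms_sql cursor number
  l_cur := dbms_sql.to_cursor_number(l_rc);
  -- describe the columns; l_desc can be handed to any procedure
  -- declared with a dbms_sql.desc_tab formal parameter
  dbms_sql.describe_columns(l_cur, l_cnt, l_desc);
  for i in 1 .. l_cnt loop
    dbms_output.put_line(l_desc(i).col_name);
  end loop;
  dbms_sql.close_cursor(l_cur);
end;
/
EOF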
Categories: DBA Blogs

temporary lobs

Tom Kyte - Mon, 2017-05-29 06:26
Hi Tom, Here is my query to check the temporary lob. select sum(cache_lobs) sum_cache_lobs from v$temporary_lobs; SUM_CACHE_LOBS -------------- 0 This is my simple standalone function with return type clob CREATE OR REPLACE FUNCTION ...
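As a rough illustration of what the question is probing (the function name is hypothetical): a CLOB returned by a function is a temporary LOB that keeps counting in V$TEMPORARY_LOBS until the session frees it:

sqlplus -S scott/tiger <<'EOF'
declare
  l_clob clob;
begin
  l_clob := my_clob_function();    -- hypothetical function returning a CLOB
  -- ... work with the lob ...
  dbms_lob.freetemporary(l_clob);  -- without this, CACHE_LOBS keeps growing
end;
/
select sum(cache_lobs) sum_cache_lobs from v$temporary_lobs;
EOF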
Categories: DBA Blogs

Using Oracle Compute API Part 2 of 3 - Creating IP Reservations and Compute storage

Now that we have our authentication cookie, we can begin to use the API with orchestration to create our Oracle Compute components.  What is an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Can EBS Nodes Be at Different Operating System Levels?

Steven Chan - Mon, 2017-05-29 02:00

Oddly, a category of EBS architecture questions that I'd thought long-settled is starting to reemerge.  I've recently seen several queries along the lines of:

  1. Can EBS database nodes and application tier nodes be on different operating systems?
  2. Can multiple EBS database nodes be on different operating systems or levels?
  3. Can multiple EBS application tier nodes be on different operating systems or levels?

Here are the answers to these questions:

1. Can EBS database nodes and application tier nodes be on different operating systems?

Yes.  In some cases, the operating systems certified for EBS database nodes cannot be run on the application tier. These are called "database tier only" certifications. It is quite common to see this configuration, especially in large EBS environments.

2. Can multiple EBS database nodes be on different operating systems or levels?

No.  From our "Installation Guide: Using Rapid Install" documentation:  

"All database tier nodes must be at the same operating system patch level."

3. Can multiple EBS application tier nodes be on different operating systems or levels?

No. From our "Installation Guide: Using Rapid Install" documentation:  

"All application tier nodes must be at the same operating system patch level."

Here's a screenshot from our EBS 12.2 documentation (emphasis in red):


Categories: APPS Blogs

Scripting, Groovy and Java for extending the product

Anthony Shorten - Sun, 2017-05-28 23:55

In a recent release of the Oracle Utilities Application Framework, we introduced Groovy as an alternative development technology for server-side extensions of our products. This means we now have three technologies that can be used to extend our products:

  • XPath/XQuery-based scripting engine, known simply as scripting
  • Java
  • Groovy

Now the question becomes: which technology do I use for my extensions? Here are a few guidelines to help you:

  • In terms of performance, there is not much difference between the technologies as, at the end of the day, they all result in byte code that is executed by the product. The product runtime does not discriminate between the technologies at that level. There is a slight advantage of Java/Groovy over scripting for extremely large volumes.
  • If you are doing complex algorithmic work or operating-system-level interaction, it is recommended to use either Groovy or Java instead of scripting. While scripting can satisfy the most common extensions, it may not be as efficient as Java/Groovy.
  • If you are intending to move to the Oracle Utilities SaaS offerings, you cannot use Java for any extensions. This is because Java tends to be low level, and you cannot deploy your own JAR/WAR/EAR files in a SaaS environment. If you use Oracle PaaS then you have full access, so you can use Java in those cases.
  • Groovy was adopted as a language as it is the foundation of the Oracle Cloud offerings in general for extensions. The Groovy implementation across the Oracle Cloud is whitelisted so that it is restricted to accessing classes that do not have direct access to operating system resources. In this case we supply Groovy libraries to provide a contained integration with these resources.
  • One of the major considerations is total cost of ownership. Typically, if you use a mixture of languages in your implementation, the cost of maintaining those extensions tends to be higher than if you had chosen a single language. This is true for any product that has multiple ways of extension: while flexibility is a great asset, it can come with additional costs. I usually recommend that you pick one of the technologies and stick with it for your extensions unless, for some reason, you need to use a mixture.
  • In terms of best practices, a lot of implementation partners tend to use scripting for the vast majority of their extensions and only use Groovy/Java when scripting is not applicable for some reason.
  • One of the big advantages of scripting and Groovy is that the code assets are actually contained in the database, and migration is all handled by either Bundling (for small migrations) or the Configuration Migration Assistant (CMA). The use of Java for extensions typically requires a manual synchronization of data as well as code.

From a vendor perspective, it does not matter which technology you choose to use. Personally, I would use scripting and only use Groovy when necessary: it is easier to manage, and you do not have physical JAR/WAR/EAR files to manage, which makes your code/data synchronization much less of an issue in a complex migration strategy. It also means you can move to the cloud a lot more easily in the future.

Configuring Oracle Traffic Director 12c with WebGate

Amis Blog - Sun, 2017-05-28 15:01

At a recent customer install, I was faced with configuring Oracle Traffic Director (OTD) 12.2.1.2.0 Webgate with Oracle Access Manager.

Deploying WebGate on OTD 12c is very well described in the documentation. See A Configuring OAM Agent (WebGate) for Oracle Traffic Director 12.2.1.2.

There is, however, a flaw in the documentation. I came across it when I reached the point where WebGate gets configured in the conf files of OTD.

When you configure WebGate for OTD 12c, the OTD conf files such as magnus.conf and virtual-server-*-obj.conf are updated (on a collocated installation).
If you follow the documentation to the letter, you will end up with conf files that either have no WebGate configuration in them or have the configuration dedicated to the first OTD instance on both servers. In the latter case, the second instance will no longer start.
I created a Service Request at Oracle Support to address the issue. They didn’t have a solution for the problem, and I ended up being bounced between the OTD and WebGate support teams. Finally one of the guys from the WebGate team really tried to help me, but couldn’t resolve the issue. So I went along and solved the problem myself, as I will describe below.

When you reach step 5 of the documentation, A.2 Configuring Oracle Traffic Director 12c WebGate, change the EditObjConf line as follows:

./EditObjConf -f Domain_Home/config/fmwconfig/components/
OTD/otd_configuration_name/config/virtual_server_name-obj.conf -w webgate_instanceDirectory [-oh Oracle_Home] -ws otd

For example

OTD Configuration Name: TST1
OTD Instance 1: otd_TST1_host1.domain.local
OTD Instance 2: otd_TST1_host2.domain.local
Domain home: /u01/app/oracle/config/domains/otd_domain_tst

./EditObjConf -f /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/TST1/config/virtual-server-tst1-obj.conf -w /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/instances/otd_TST1_host1.domain.local -oh $ORACLE_HOME -ws otd

Where TST1 is the name of the configuration and host1.domain.local is the name of the first server.
This will change the magnus.conf and virtual-server-tst1-obj.conf for Webgate.
In virtual-server-tst1-obj.conf there are no instance specific references.
However in the magnus.conf there are references to the first instance, since this is the one that we used with EditObjConf.

This is what the magnus.conf in the OTD configuration section (at the global level) looks like after the EditObjConf command.
Notice the hardcoded instance name in four places.

less /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/TST1/config/magnus.conf
#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#

Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401, OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.
#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

Leaving it like this will result in this hardcoded instance name being distributed to all instances. Hence only one instance would start.

Now how to fix this.

Open magnus.conf with an editor

Replace the hardcoded instance name with the variable ${INSTANCE_NAME}.
(I picked up the existence of this variable in the server.xml, which is also at the OTD configuration level and gets distributed to all instances.)
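For example, a one-liner along these lines (a sketch using the paths and instance name from this setup; it keeps a backup of the original file) does the replacement:

CONF=/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/TST1/config/magnus.conf
# single quotes keep ${INSTANCE_NAME} literal so it lands verbatim in the file
sed -i.bak 's/otd_TST1_host1\.domain\.local/${INSTANCE_NAME}/g' "$CONF"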

In our example the magnus.conf now looks like this.

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"
# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd" 
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN
Now to distribute these files:

Open Enterprise Manager Fusion Middleware Control 12c and go to the OTD Configuration

Go to the Virtual Server section and click Lock and Edit

EM will show the Pull Components Changes bar.

DON’T pull the changes!
This will replace the conf files of the configuration with those currently in use by the instances.

Instead make a minor, insignificant, change in the configuration.
For example add a hostname to the Virtual Server Settings. (We remove it later)
Now activate the changes
Again, don’t Pull the changes

Discard the Instance Changes and Activate Changes.

Again Discard Changes

And finally Discard Changes to distribute the correct conf files to the instances.

Now let’s look at the magnus.conf on both instances. (We already know that the virtual-server-tst1-obj.conf is the same everywhere)

On Instance 1

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules" funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

And on Instance 2

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules" funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"

obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

The files look good on both instances.

Now restart the instances.

Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Checking operation status on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local succeeded
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local succeeded

 

Now you’re good to go with WebGate correctly configured on OTD 12c.

I put the solution in the service request and got thanks from the guys at Oracle Support. They told me they were going to change the documentation to match my solution. It's always nice to get this kind of appreciation from them.

The post Configuring Oracle Traffic Director 12c with WebGate appeared first on AMIS Oracle and Java Blog.

CREATE_CHUNKS_BY_SQL

Tom Kyte - Sat, 2017-05-27 17:46
How to frame the select query which will divide the result set into the required chunks using CREATE_CHUNKS_BY_SQL with by_rowid => true. For example: I have a table that contains 100 records and my select query fetches 50 records after filtration; I want ...
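For context, here is a minimal sketch of the pattern the question refers to (table name, filter and chunk count are placeholders): with by_rowid => true the statement must return pairs of start/end rowids, so an NTILE over rowid is a common way to carve the filtered result set into N chunks:

sqlplus -S scott/tiger <<'EOF'
begin
  dbms_parallel_execute.create_task('demo_task');
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => 'demo_task',
    sql_stmt  => 'select min(r), max(r)
                  from (select ntile(5) over (order by rowid) grp, rowid r
                        from my_table
                        where status = ''NEW'')
                  group by grp',
    by_rowid  => true);
end;
/
select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'demo_task';
EOF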
Categories: DBA Blogs

Undo Data files resize

Tom Kyte - Sat, 2017-05-27 17:46
Hi, I have one undo tablespace with 4 undo datafiles in my production database (Oracle 11g) and a standby DB. Each datafile is 30 GB in size but uses only 1 GB, so can I resize (shrink) all the datafiles without downtime? Please reply with desc...
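A sketch of the operation being asked about (file name and target size are hypothetical): resizing a datafile is an online operation, but it raises ORA-03297 if allocated extents still sit above the target size:

sqlplus -S / as sysdba <<'EOF'
-- current sizes of the undo datafiles
select file_name, bytes/1024/1024/1024 gb
from   dba_data_files
where  tablespace_name = 'UNDOTBS1';

-- online shrink; fails with ORA-03297 if extents live beyond the new size
alter database datafile '/u01/oradata/PROD/undotbs01.dbf' resize 5g;
EOF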
Categories: DBA Blogs

How to fetch results even when one of the DB LINK Fails

Tom Kyte - Sat, 2017-05-27 17:46
Hi Tom, Hope you are doing good. I got a requirement of getting values from a view that was built from 4 other views combined by UNION. Each of the 4 views refers to and gets data from a different DB link. So when the main view is executed it still needs to fe...
Categories: DBA Blogs

unique vs. non unique index

Tom Kyte - Sat, 2017-05-27 17:46
Hi, I searched in the archives for my question but I was unable to find an answer and I do apologize for asking what seems like such an easy question. I have always assumed that unique indexes were faster than non-unique indexes but I have never bee...
Categories: DBA Blogs

12c Sequence - nopartition/partition

Tom Kyte - Sat, 2017-05-27 17:46
What is the meaning of a new parameter when creating a sequence? From dbms_metadata.get_ddl: CREATE SEQUENCE "XY"."MY_SEQUENCE" MINVALUE 1000 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE NOORDER NOCYCLE NOPARTITI...
Categories: DBA Blogs

Rows locks from select for update clause

Tom Kyte - Sat, 2017-05-27 17:46
Hi Tom, From my knowledge of Oracle, I understand that the SELECT FOR UPDATE clause acquires row locks. In that case, if in a given table my query updates one set of data, the same query should be able to update a different set of data by using select f...
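A hedged sketch of the behaviour the question is after (table and columns are placeholders): with SKIP LOCKED, two sessions running the same statement lock disjoint sets of rows:

sqlplus -S scott/tiger <<'EOF'
-- session 1: lock up to 10 unprocessed rows
select id from jobs
where  status = 'PENDING'
and    rownum <= 10
for update skip locked;
-- a second session running the same statement skips the locked rows
-- and locks the next batch, so both can work on different data sets
EOF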
Categories: DBA Blogs

Building my own Container Cloud Services on top of Oracle Cloud IaaS

Marcelo Ochoa - Sat, 2017-05-27 17:37
Two weeks ago I presented in the ArOUG Cloud Day at Buenos Aires an introduction to Oracle Container Cloud Services (OCCS) and Oracle IaaS Compute management using console and API.
For this presentation I showed how to implement your Container Cloud Services (CCS) directly on top of IaaS compute.
Let's compare OCCS and my own CCS; here is how they look:
[Screenshots: OCCS welcome page / My CCS welcome page]
They look similar :). My own CCS is implemented using the Portainer.io project.
Going deeper with both implementations, I can summarize the pros/cons as:
OCCS:
  • Pros
    • Easy manage
    • Pre-configured templates
    • Fast jump-start, 1-click deploy
    • Support for using official repository of Oracle images
  • Cons:
    • Host OS not configurable, e.g. disk, semaphores, etc.
    • Old Docker version 1.12
    • No Swarm support
    • Basic orchestration features

My CCS:
  • Pros
    • Latest Docker/Swarm version 17.04.0-ce
    • Full Swarm support, scaling, upgrade, load-balancing
    • One-click deploy
    • Public templates (Portainer.io and LinuxServer.io)
    • Graphical scaling Swarm services
    • Console Log/Stats for services
    • Full custom software/hardware selection
  • Cons
    • Oracle official repositories only available from command line
    • Registry with login not supported in graphical interface, only command line
    • A little complex deployment (scripts)
In my opinion the main problem with OCCS is that you can't touch the low-level details of the implementation. For example, you can't change the host OS parameter vm.max_map_count=262144 in /etc/sysctl.conf, so you will never get a cluster of the ElasticSearch 5.x version up and running. On top of this, the supported Docker version (1.12) is a bit old compared with the latest features included in 17.04.0-ce.
On the other side, my CCS's best feature is that it supports Swarm, plus command-line operation if you connect to the host OS using ssh. With Swarm support you have a serious implementation of a Docker data center solution, especially compared with other orchestration solutions like Kubernetes; see this great post about how Docker Swarm exceeds Kubernetes performance at scale.
If you want to test my CCS yourself, I extended the scripts of my previous post Managing Oracle Cloud IaaS - The API way to include a Portainer console up and running after deployment of the ES cluster and Swarm Visualizer.
The scripts are available at my GitHub repository; basically, the Portainer console is deployed at the end of the script deploy-cluster.sh using this command line:
docker run -d --name portainer -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock
If you grant access to port 9000 on the Swarm master host (node5 in the examples), you will get access to the Portainer console as shown in the screenshot above. Note that Portainer has access to the Unix socket /var/run/docker.sock to perform Swarm commands graphically.
You can also directly manage other nodes of the cluster in a native Docker way by adding endpoints that use the TLS certificates generated during the deploy-machines.sh step.
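For instance, a native Docker client call against such an endpoint could look like this (host name, port and certificate paths are examples; adjust to what deploy-machines.sh generated):

docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://node1:2376 info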
Finally, to see Portainer in action, I recorded this simple walkthrough, together with the OCCS presentation below.
[Screenshots: Swarm nodes / Swarm Services scale up/down operations]




Doing RDBMS hot full backup using RMan when running on Docker

Marcelo Ochoa - Sat, 2017-05-27 07:54
I think many databases are now going into production using Docker environments, especially with the official support provided by Oracle, which allows pulling Docker images from the official Docker Store.
If you are using a custom image built with Oracle's official scripts, you can do a hot full backup using RMAN as described in this post.
We will test using a container started as:
[mochoa@localhost ols-official]$ docker run -d --privileged=true --name test --hostname test --shm-size=1g -p 1521:1521 -p 5500:5500 -e ORACLE_SID=TEST -e ORACLE_PDB=PDB1 -v /etc/timezone:/etc/timezone:ro -e TZ="America/Argentina/Buenos_Aires" -v /home/data/db/test:/opt/oracle/oradata oracle/database:12.1.0.2-ee
Note that datafiles and other RDBMS persistent data are stored at /home/data/db/test host directory.
To connect as SYSDBA to the running container above, do:

[mochoa@localhost ols-official]$ docker exec -ti test bash
[oracle@test ~]$ sqlplus "/ as sysdba"

First, check if your database is running in archive log mode:
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
------------
NOARCHIVELOG
If not, enable it using the steps described in this page: log in using bash and perform the following steps connected as SYSDBA:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
------------
ARCHIVELOG
You can force the RDBMS to switch the log file to see which directory is defined for redo log backups, for example:
SQL> ALTER SYSTEM SWITCH LOGFILE;
The destination is usually defined by the parameter log_archive_dest, but if it is empty the Docker image does the backup in the container directory:
/opt/oracle/oradata/fast_recovery_area/${ORACLE_SID}/archivelog/yyyy_mm_dd/
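To double-check where the archived logs actually go, something like this works (a quick sanity check, not specific to Docker):

sqlplus -S / as sysdba <<'EOF'
show parameter log_archive_dest
-- the fast recovery area fallback shows up as USE_DB_RECOVERY_FILE_DEST
select dest_name, destination, status
from   v$archive_dest
where  status = 'VALID';
EOF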
Once you have your database up and running in archive log mode, a daily full backup with the RMAN utility can be configured in your host's /etc/cron.daily/ directory as:
[mochoa@localhost ols-official]$ cat /etc/cron.daily/backup-db.sh
#!/bin/bash
docker exec test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ cmdfile='/opt/oracle/oradata/backup_full_compressed.sh' log='/opt/oracle/oradata/backup_archive.log'
where backup_full_compressed.sh is an RMAN script such as:
delete force noprompt obsolete;
run {
   configure controlfile autobackup on;
   configure default device type to disk;
   configure device type disk parallelism 1;
   configure controlfile autobackup format for device type disk clear;
   allocate channel c1 device type disk;
   backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as
   compressed backupset database;
}
delete force noprompt obsolete;
During the full backup you can see RMAN's output in the host file:
/home/data/db/test/backup_archive.log
or container file:
/opt/oracle/oradata/backup_archive.log
it looks like:
Recovery Manager: Release 12.1.0.2.0 - Production on Sat May 27 09:18:54 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST (DBID=2242310144)
RMAN> delete force noprompt obsolete;
2> run
3> {
4> configure controlfile autobackup on;
5> configure default device type to disk;
6> configure device type disk parallelism 1;
7> configure controlfile autobackup format for device type disk clear;
8> allocate channel c1 device type disk;
9> backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as compressed backupset database;
10> }
11> delete force noprompt obsolete;
12>
using target database control file instead of recovery catalog
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=365 device type=DISK
....
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           5      26-MAY-17
  Backup Piece       5      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1
Backup Set           6      26-MAY-17
  Backup Piece       6      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1
Backup Set           7      26-MAY-17
  Backup Piece       7      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1
Archive Log          3      27-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc
Backup Set           8      26-MAY-17
  Backup Piece       8      26-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1 RECID=5 STAMP=945010405
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1 RECID=6 STAMP=945010610
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1 RECID=7 STAMP=945010736
deleted archived log
archived log file name=/opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc RECID=3 STAMP=945076211
deleted backup piece
backup piece handle=/opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp RECID=8 STAMP=945010854
Deleted 5 objects

Recovery Manager complete.
And that's all: when your host's cron.daily script finishes, your full backup is at /home/data/db/test/backup/prod. Note that 3.7 GB of datafiles produced only 695 MB of backup files and took just a few minutes.
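As a quick sanity check of the backup pieces, you could run RMAN the same way (note the -i so RMAN reads the command from stdin):

docker exec -i test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ <<'EOF'
restore database validate;
EOF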


Carbonated Java & JavaScript Stored Procedures

Kuassi Mensah - Fri, 2017-05-26 17:43
Carbonated Java Stored Procedures

For accessing JSON collections and documents without any knowledge of SQL, Oracle furnishes the SODA for Java API. It allows convenient access and navigation using the dot notation.

How to use SODA for Java in Java Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.
Carbonated JavaScript Stored Procedures

Nashorn allows interoperability between Java and JavaScript. By leveraging such interoperability, I've been able to reuse SODA for Java with JavaScript Stored Procedures.

How to use SODA for Java in JavaScript Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.

Enjoy!

12cR2 needs to connect with password for Cross-PDB DML

Yann Neuhaus - Fri, 2017-05-26 14:13

In a previous post, I explained that Cross-PDB DML, executing an update/delete/insert with the CONTAINERS() clause, seems to be implemented with implicit database links. Connecting through a database link requires a password and this blog post is about an error you may encounter: ORA-01017: invalid username/password; logon denied

This blog post also explains a consequence of this implementation: a big inconsistency in the CONTAINERS() function, because the implementation is completely different for queries (select) and for insert/delete/update, and you may end up writing to and reading from different schemas.

We do not need Application Containers for Cross-PDB DML, and we don't even need metadata-linked tables. Just tables with the same columns. Here I have a DEMO table, which is just a copy of DUAL, created in CDB$ROOT and in PDB1 (CON_ID=3), owned by SYS.

Implicit database link

I’m connecting to CDB$ROOT with user, password and service name:

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert a row into the DEMO table in the PDB1, which is CON_ID=3:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
1 row created.

This works in 12.2, is documented, and is an alternative to switching to the container.

But now, let's try to do the same when connecting with '/ as sysdba':

SQL> connect / as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
 
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PDB1

The first message mentions invalid user/password, and the second one mentions a database link having the same name as the container.
As I described in the previous post, CONTAINERS() opens an implicit database link when modifying another container. But a database link requires a connection, and no user/password has been provided. It seems that it tries to connect with the same user and password as the ones provided to connect to the root.

Then, I provide the user/password but with a local connection (no service name):


SQL> connect sys/oracle as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied

There is no mention of a database link here, but it is still impossible to connect. It seems that the session needs our connection string to work out how to connect to the PDB.

Explicit database link

There is an alternative: you can create the database link explicitly, and then it will be used by CONTAINERS(), since it holds all the required information, password and service. But the risk is that you define this database link to connect to another user.

Here I also have a DEMO table created in SCOTT:

SQL> create database link PDB1 connect to scott identified by tiger using '//localhost/PDB1';
Database link created.
 
SQL> select * from DEMO@PDB1;
 
D
-
X

From the root I insert with CONTAINERS() without mentioning the schema:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'S');
1 row created.

I get no errors here (I'm still connected / as sysdba) because I have a database link with the same name as the one it tries to use implicitly. So it works without any error or warning. But my database link does not connect to the same schema (SYS) but to SCOTT. And because a DEMO table was there with the same columns, the row was actually inserted into the SCOTT schema:

SQL> select * from DEMO@PDB1;
 
D
-
X
S

The big problem here is that when doing a select through the same CONTAINERS() function, a different mechanism is used: not the database link, but a session switch to the other container, in the same schema. So the row inserted through INSERT INTO CONTAINERS() is not displayed by SELECT FROM CONTAINERS():
SQL> select * from containers(DEMO);
 
D CON_ID
- ----------
X 1
X 3
Y 3

So what?

I don't know if the first problem (invalid user/password) will be qualified as a bug, but I hope the second one will. Cross-PDB DML will be an important component of Application Containers, and having a completely different implementation for SELECT and for INSERT/UPDATE/DELETE may be a source of problems. In my opinion, both should use a container switch within the same session, but that would mean that a transaction could write in multiple containers, which is not possible currently.

 

Cet article 12cR2 needs to connect with password for Cross-PDB DML est apparu en premier sur Blog dbi services.

Automating Password Rotation for Oracle Databases

Pythian Group - Fri, 2017-05-26 14:03

Password rotation is not the most exciting task in the world, and that's exactly why it's a perfect candidate for automation. Automating routine tasks like this is good for everyone – DBAs can work on something that's more exciting, companies save costs as less time is spent on changing the passwords, and there's no place for human error, either. At Pythian, we typically use Ansible for task automation, and I like it mainly because of its non-intrusive configuration (no agents need to be installed on the target servers) and its scalability (tasks are executed in parallel on the target servers). This post will briefly describe how I automated password rotation for Oracle database users using Ansible.

Overview

This blog post is not an intro to what is Ansible and how to use it, but it’s rather an example of how a simple task can be automated using Ansible in a way that’s scalable, flexible and easily reusable, and also provides the ability for other tasks to pick up the new passwords from a secure password store.

  • Scalability – I'd like to take advantage of Ansible's ability to execute tasks on multiple servers at the same time. For example, in large environments of tens or hundreds of machines, a solution that executes password change tasks serially would not be suitable. Below is an example of such a "serial" task (it's not a real thing, just an illustration): it "hardcodes" a few "attributes" (environment file, username and hostname), and a separate task would be required for every user/database you'd want to change the password for:
    - hosts: ora-serv01
      remote_user: oracle
      tasks:
      - name: change password for SYS
        shell: | 
          . TEST1.env && \
          sqlplus / as sysdba @change_pasword.sql SYS \
          \"{{lookup('password','/dev/null length=8')}}\"
    
  • Flexible – I want to be able to adjust the list of users for which the passwords are changed, and the list of servers/databases that the user passwords are changed for, in a simple way that doesn't involve changing the main task list.
  • Reusable – this comes together with flexibility. The idea is that the playbook would be so generic that it wouldn't require any changes when it's implemented in a completely separate environment (i.e. for another client of Pythian).
  • Secure password store – the new passwords are to be generated by the automated password rotation tool, and a method of storing passwords securely is required so that the new passwords can be picked up by the DBAs, application owners or the next automated task that reconfigures the application.
The implementation

Prerequisites

I chose to do the implementation using Ansible 2.3, because it introduces the passwordstore lookup, which enables interaction with the pass utility (read more about it at Passwordstore.org). pass is very cool. It stores passwords in gpg-encrypted files, and it can also be configured to automatically push the changes to a git repository, which relieves us of the headache of password distribution. The passwords can be retrieved from git on the servers that need access to the new passwords.
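In case you haven't used pass before, the basic workflow looks like this (the gpg key id and entry names are just examples):

pass init "DBA-Team-GPG-Key"         # point the store at a gpg key
pass git init                        # turn on automatic git versioning
pass generate orcl1/TEST1/system 12  # generate and store a 12-character password
pass orcl1/TEST1/system              # decrypt and print it back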

Ansible 2.3 runs on Python 2.6; unfortunately, the passwordstore lookup requires Python 2.7, which can be an issue if the control host for Ansible runs Oracle Linux 6 or RHEL 6, as they don't provide Python 2.7 in the official yum repositories. Still, there are ways of getting it done, and I'll write another blog post about it.

So, what we’ll need is:

  • Ansible 2.3
  • jmespath plugin on Ansible control host (pip install jmespath)
  • jinja2 plugin on Ansible control host (I had to update it using pip install -U jinja2 in few cases)
  • Python 2.7 (or Python 3.5)
  • pass utility
The Playbook

This is the whole list of files that are included in the playbook:

./chpwd.yml
./inventory/hosts
./inventory/orcl1-vagrant-private_key
./inventory/orcl2-vagrant-private_key
./roles/db_users/files/change_password.sql
./roles/db_users/files/exists_user.sql
./roles/db_users/defaults/main.yml
./roles/db_users/tasks/main.yml

Let’s take a quick look at all of them:

  • ./chpwd.yml – is the playbook and (in this case) it’s extremely simple as I want to run the password change against all defined hosts:
    $ cat ./chpwd.yml
    ---
    
      - name: password change automation
        hosts: all
        roles:
          - db_users
    
  • ./inventory/hosts, ./inventory/orcl1-vagrant-private_key, ./inventory/orcl2-vagrant-private_key – these files define the hosts and the connectivity. In this case we have 2 hosts – orcl1 and orcl2 – and we'll connect as the vagrant user using the private keys.
    $ cat ./inventory/hosts
    [orahosts]
    orcl1 ansible_host=127.0.0.1 ansible_port=2201 ansible_ssh_private_key_file=inventory/orcl1-vagrant-private_key ansible_user=vagrant
    orcl2 ansible_host=127.0.0.1 ansible_port=2202 ansible_ssh_private_key_file=inventory/orcl2-vagrant-private_key ansible_user=vagrant
  • ./roles/db_users/files/change_password.sql – A sql script that I'll execute on the database to change the passwords. It takes 2 parameters: the username and the password:
    $ cat ./roles/db_users/files/change_password.sql
    set ver off pages 0
    alter user &1 identified by "&2";
    exit;
  • ./roles/db_users/files/exists_user.sql – A sql script that allows verifying the existence of the users. It takes 1 argument – the username. It outputs "User exists." when the user is there, and "User {username} does not exist." when it's not.
    $ cat ./roles/db_users/files/exists_user.sql
    set ver off pages 0
    select 'User exists.' from all_users where username=upper('&1')
    union all
    select 'User '||upper('&1')||' does not exist.' from (select upper('&1') from dual minus select username from all_users);
    exit;
  • ./roles/db_users/defaults/main.yml – is the default file for the db_users role. I use this file to define the users for each host and database for which the passwords need to be changed:
    $ cat ./roles/db_users/defaults/main.yml
    ---
    
      db_users:
        - name: TEST1
          host: orcl1
          env: ". ~/.bash_profile && . ~/TEST1.env > /dev/null"
          pwdstore: "orcl1/TEST1/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp
            - system
        - name: TEST2
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST2.env > /dev/null"
          pwdstore: "orcl2/TEST2/"
          os_user: oracle
          become_os_user: yes
          users:
            - sys
            - system
            - ctxsys
        - name: TEST3
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST3.env > /dev/null"
          pwdstore: "orcl2/TEST3/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp

    In this data structure, we define everything that needs to be known to connect to the database and change the passwords. Each entry in the list contains the following data:

    • name – just a descriptive name of the entry in this list, normally it would be the name of the database that’s described below.
    • host – the host on which the database resides. It should match one of the hosts defined in ./inventory/hosts.
    • env – how to set the correct environment to be able to connect to the DB (currently it requires sysdba connectivity).
    • pwdstore – the path to the folder in the passwordstore where the new passwords will be stored.
    • os_user and become_os_user – these are used in case sudo to another user on the target host is required. In a typical configuration, I connect to the target host using a dedicated user for Ansible, and then sudo to the DB owner. If Ansible connects to the DB owner directly, then become_os_user should be set to "no".
    • users – this is the list of all users for which the passwords need to be changed.

    As you see, this structure greatly enhances the flexibility and reusability, because adding new databases, hosts or users to the list is done with a simple change to the "db_users:" structure in this defaults file. In this example, dbsnmp and system passwords are rotated for TEST1@orcl1; sys, system and ctxsys passwords are rotated for TEST2@orcl2; and dbsnmp for TEST3@orcl2.

  • ./roles/db_users/tasks/main.yml – this is the task file of the db_users role: the soul of the playbook and the main part that does the password change, driven by the contents of the defaults file described above. Instead of pasting the whole thing at once, I'll break it up task by task and provide some comments about what's being done.
    • populate host_db_users – This task simply filters the whole db_users data structure that's defined in the defaults file, and creates a host_db_users fact with only the DBs that belong to the host the task is currently run on. It would also be possible to filter the list using the Ansible "when" conditional; however, in that case a lot of "skipped" entries are displayed when the task is executed, so I prefer filtering the list before it's even passed to the Ansible task.
      ---
      
        - name: populate host_db_users
          set_fact: host_db_users="{{ db_users | selectattr('host','equalto',ansible_hostname) | list }}"
      
    • create directory for target on db hosts – for each unique combination of os_user and become_os_user on the target host, an "ansible" directory is created. A json_query is used here to filter out just the os_user and become_os_user attributes that are needed. It would also work with with_items: "{{ host_db_users }}", but in that case the output becomes cluttered, as all the attributes are displayed during the execution.
        - name: create directory for target on db hosts
          file:
            path: "ansible"
            state: directory
          become_user: "{{ item.os_user }}"
          become: "{{ item.become_os_user }}"
          with_items: "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
      
    • copy sql scripts to db_hosts – the missing scripts are copied from the Ansible control host to the target "ansible" directories. "with_nested" is the way to create a nested loop in Ansible.
        - name: copy sql scripts to db_hosts
          copy:
            src="{{ item[1] }}"
            dest=ansible/
            mode=0644
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_nested:
            - "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
            - ['files/change_password.sql','files/exists_user.sql']
      
    • verify user existence – I'm using a shell module to execute the sql script after setting the environment. The outputs are collected in the "exists_output" variable. This task will not fail and will not show as "changed" because failed_when and changed_when are set to "false".
        - name: verify user existence
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/exists_user.sql {{ item[1] }}
          register: exists_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, become_os_user: become_os_user }') }}"
            - users
          failed_when: false
          changed_when: false
      
    • User existence results – this task will fail when any of the users doesn't exist, and will display which user it was. This is done in a separate task to produce cleaner output. If you don't want to fail when some users don't exist (i.e. continue changing the passwords of the existing users), this task can simply be commented out, or the "failed_when: false" can be uncommented.
        - name: User existence results
          fail: msg="{{ item }}"
          with_items: "{{ exists_output.results|rejectattr('stdout','equalto','User exists.')|map(attribute='stdout')|list }}"
          #failed_when: false
      
    • generate and change the user passwords – finally, this is the task that actually changes the passwords. A successful password change is detected by checking the output from the sql script, which should produce "User altered." The rather complex use of lookups is there for a reason: the passwordstore lookup can also generate passwords, but it's not possible to define the character classes that the new password should contain, while the "password" lookup allows defining them. Additionally, the 1st character is generated using only "ascii_letters", as there are usually some applications that "don't like" passwords that start with numbers (this is why generating the 1st character of the password is separated from the remaining 11 characters). And lastly, the "passwordstore" lookup is used with the "userpass=" parameter to pass and store the generated password into the passwordstore (and it also keeps the previous passwords). This part could use some improvement, as in some cases different rules for the generated password complexity may be required. The password change outputs are recorded in "change_output", which is checked in the last task.
        - name: generate and change the user passwords
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/change_password.sql \
             {{ item[1] }} \"{{ lookup('passwordstore',item[0].pwdstore + item[1] + ' create=true overwrite=true userpass=' +
                                       lookup('password','/dev/null chars=ascii_letters length=1') +
                                       lookup('password','/dev/null chars=ascii_letters,digits,hexdigits length=11')) }}\"
          register: change_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, pwdstore: pwdstore, become_os_user: become_os_user}') }}"
            - users
          failed_when: false
          changed_when: "'User altered.' in change_output.stdout"
      
    • Password change errors – The “change_output” data are verified here, and failed password changes are reported.
         # fail if the password change failed.
        - name: Password change errors
          fail: msg="{{ item }}"
          with_items: "{{ change_output.results|rejectattr('stdout','equalto','\nUser altered.')|map(attribute='stdout')|list }}"
      
It really works!

Now, when you know how it’s built – it’s time to show how it works!
Please pay attention to the following:

  • The password store is empty at first
  • The whole password change playbook completes in 12 seconds
  • The tasks on both hosts are executed in parallel (see the order of execution feedback for each task)
  • The passwordstore contains the password entries after the playbook completes, and they can be retrieved by using the pass command
$ pass
Password Store

$ time ansible-playbook -i inventory/hosts chpwd.yml

PLAY [password change automation] *******************************************************

TASK [Gathering Facts] *****************************************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : populate host_db_users] ***********************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : create directory for target on db hosts] ******************************
changed: [orcl1] => (item={'become_os_user': True, 'os_user': u'oracle'})
changed: [orcl2] => (item={'become_os_user': True, 'os_user': u'oracle'})

TASK [db_users : copy sql scripts to db_hosts] *****************************************
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])

TASK [db_users : verify user existence] ************************************************
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : User existence results] ***********************************************

TASK [db_users : generate and change the user passwords] *******************************
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST3/', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : Password change errors] ***********************************************

PLAY RECAP *****************************************************************************
orcl1                      : ok=6    changed=3    unreachable=0    failed=0
orcl2                      : ok=6    changed=3    unreachable=0    failed=0

real    0m12.418s
user    0m8.590s
sys     0m3.900s

$ pass
Password Store
|-- orcl1
|   |-- TEST1
|       |-- dbsnmp
|       |-- system
|-- orcl2
    |-- TEST2
    |   |-- ctxsys
    |   |-- sys
    |   |-- system
    |-- TEST3
        |-- dbsnmp

$ pass orcl1/TEST1/system
HDecEbjc6xoO
lookup_pass: First generated by ansible on 26/05/2017 14:28:50
Conclusions

For the past 2 months I've been learning Ansible and trying it out for various DBA tasks. It hasn't always been a smooth ride, as I had to learn quite a lot, because I hadn't been exposed much to beasts like jinja2, json_query, YAML, python (very handy for troubleshooting) and Ansible itself before. I feel that my former PL/SQL coder's experience had created some expectations of Ansible that turned out not to be true. The biggest challenges for me were getting used to the linear execution of the playbook (while with PL/SQL I can call packages, functions, etc. to process the data "outside" the main linear code line) and the lack of execution feedback, because one has to learn to create Ansible tasks in a way that they either succeed or fail (no middle states like 'this is a special case – process it differently'), and because the amount of visual output is close to none – which does make sense to some degree, it's "automation" after all, right? Nobody should be watching :)
A separate struggle for me was working with the complex data structure that I created for storing the host/database/user information. It's a mix of yaml "dictionary" and "list", and it turned out to be difficult to process in the way I wanted – this is why I used json_query at times (although not in a very complex way in this case). There are probably simpler ways I didn't know of (didn't manage to find), and I'd be glad if you'd let me know of possible improvements or even other approaches to such tasks that you have worked on and implemented.
Despite all the complaining above, I think it's really worth investing time in automating tasks like this: it really works, and once done it doesn't require much attention. Happy Automating!

Categories: DBA Blogs

Introducing UEK4 and DTrace on Oracle Linux for SPARC

Wim Coekaerts - Fri, 2017-05-26 13:18

About 2 months ago we released the first version of Oracle Linux 6, Update 7 for SPARC. That was the same version of Oracle Linux used in Exadata SL6. OL6 installed on T4, T5 and T7 systems but it did not yet support the S7 processors/systems. It contained support for the various M7 processor features (DAX, ADI, crypto,...), gcc optimizations to support better code generation for SPARC, important optimizations in functions like memcpy() etc.

We also introduced support for Linux as the control domain (guest domain worked before). So this was the first time one could use Linux as the control domain with a vdiskserver, vswitch and virtual console driver. For this release we based the kernel on UEK2 (2.6.39).

The development team has been hard at work doing a number of things:

- continue to work with upstream Linux and gcc/glibc/binutils development to submit all the code changes for inclusion. Many SPARC features have already been committed upstream and many are pending/work in progress.

- part of the work is  to forward port, so to speak, a lot of the uek2/sparc/exadata features into uek4, alongside upstream/mainline development.

- performance work, both in kernel and userspace (glibc, gcc in particular)

Today, we released an updated version of the ISO image that contains UEK4 QU4 (4.1.12-94.3.2). The main reason for updating the ISO is to introduce support for the S7 processor and S7-based servers. It contains a ton of improvements over UEK2, and we also added support for DTrace.

You can download the latest version of the ISO here :  http://www.oracle.com/technetwork/server-storage/linux/downloads/oracle-linux-sparc-3665558.html

The DTrace utilities can be downloaded here : http://www.oracle.com/technetwork/server-storage/linux/downloads/linux-dtrace-2800968.html
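Once installed, a classic first test (a standard DTrace one-liner, run as root) is counting system calls per executable:

dtrace -n 'syscall:::entry { @[execname] = count(); }'
# stop with Ctrl-C to print the aggregation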

As we add more features we will update the kernel, and we will also publish a new version of the software collections for Oracle Linux for SPARC with newer versions of gcc (6.x etc.), so more is coming!

We are working on things like gccgo, valgrind, node... and the yum repo on http://yum.oracle.com/ contains about 5000 RPMs.

Download it, play with it, have fun.

 

ora-01453 set transaction must be first statement of transaction when using 2 dblinks between 3 databases

Tom Kyte - Fri, 2017-05-26 05:06
Hi Tom, I have a stored procedure that returns a ref cursor. The procedure runs on DB1 (10g), reading from a remote view on DB2 (11g), and that view selects a union of 2 tables from DB2 and another table from remote DB3 (11g). Th...
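For readers hitting the same ORA-01453, the usual workaround looks like this sketch (object and link names are placeholders): end any distributed transaction that earlier remote reads opened implicitly, so that SET TRANSACTION really is the first statement:

sqlplus -S scott/tiger <<'EOF'
commit;                                -- close the implicit distributed transaction
set transaction read only;             -- now genuinely the first statement
select count(*) from remote_view@db2;
commit;
EOF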
Categories: DBA Blogs

Getting ORA-29284 file read error

Tom Kyte - Fri, 2017-05-26 05:06
Hello Experts, I have the PL/SQL function below with which I am trying to load data from a .xls file into a PL/SQL table, but I am getting an ORA-29284 file read error. PL/SQL Function: CREATE OR REPLACE FUNCTION LOAD_data ( p_table in varchar2, p_dir i...
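As a rough illustration of the usual cause (directory object and file name are placeholders): UTL_FILE.GET_LINE raises ORA-29284 when it cannot read what it expects, a binary .xls opened as a text file being a classic case; the plain-text read pattern itself looks like this:

sqlplus -S scott/tiger <<'EOF'
set serveroutput on
declare
  l_file utl_file.file_type;
  l_line varchar2(32767);
begin
  -- the file must be plain text (e.g. CSV); a binary .xls will not parse
  l_file := utl_file.fopen('DATA_DIR', 'data.csv', 'r', 32767);
  loop
    utl_file.get_line(l_file, l_line);
    dbms_output.put_line(l_line);
  end loop;
exception
  when no_data_found then   -- normal end-of-file
    utl_file.fclose(l_file);
end;
/
EOF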
Categories: DBA Blogs
