Feed aggregator

Log file sync

Tom Kyte - Tue, 2016-10-11 12:26
Hi There, I found slow-performing PL/SQL functionality that appears to be slow only when a backup of archivelog is in progress. Checking the events using AWR and ASH reports, there is a clear indication of the "log file sync" wait event for a delete...
Categories: DBA Blogs

Mirantis OpenStack 9.0 installation using VirtualBox – part 1

Yann Neuhaus - Tue, 2016-10-11 10:47

In this series of blogs, I am going to give a quick overview of OpenStack and show how to install it using Mirantis.

“Mirantis is the #1 pure play OpenStack company. More customers rely on Mirantis than any other company to scale out OpenStack without vendor lock-in” (source: https://www.openstack.org/marketplace/distros/distribution/mirantis/mirantis-openstack)

OpenStack is an open source Infrastructure as a Service (IaaS) platform and was born in July 2010 as a collaboration between NASA and Rackspace.

OpenStack is not monolithic but is composed of several projects. I am not going to detail all of them now, but here are the components I am going to install in the lab:

  • Horizon : OpenStack Dashboard
  • Keystone : handles all the authentication processes
  • Neutron : creates virtual networks
  • Nova : the heart of the OpenStack project, provides virtualization capabilities
  • Cinder : provides persistent storage to the instances
  • Glance : provides ready-to-use operating system images to the virtual instances



Of course, there are many ways to install OpenStack :

  • manually –> not recommended because it is very difficult to maintain
  • using a deployment tool like Ansible, Puppet, Chef, etc.
  • using a distribution like:
    • Mirantis –> which uses Fuel as its automation tool
    • Red Hat –> which is based on TripleO
    • Rackspace –> which uses Ansible
    • Canonical –> which uses Juju and MaaS among other tools
    • […]

Distributions are here to handle:

  • OpenStack’s lifecycle
  • Patches & Upgrades
  • Documentation
  • Bug fixing and so on..

In this series of blogs I am going to focus on Mirantis, which is one of the best ways to get a stable OpenStack up and running very quickly.
As said before, Mirantis uses Fuel (based on Puppet) as a deployment tool for OpenStack.

This is what the architecture of Fuel looks like:



  • Web UI : provides the Fuel user interface, based on Nginx
  • Keystone : for the authentication process
  • PostgreSQL database : stores the Fuel Master's information about the deployment of the Fuel slave nodes
  • Nailgun : the heart of the Fuel project, which basically converts the user's choices into commands for the Astute workers
  • AMQP : the message queue which Nailgun uses to give orders to the Astute workers
  • Astute : gives the node configuration to Cobbler and reboots the Fuel slave nodes to let Cobbler do its job
  • Cobbler : installs the base operating system on the Fuel slave nodes
  • MCollective : orchestration tool for deploying Puppet via MCollective agents
  • MCollective agents : run on all Fuel slave nodes


Software requirements:
– VirtualBox 4.2.12 – 5.0.x
– VirtualBox Extension Pack (to enable PXE boot)
both can be downloaded at: https://www.virtualbox.org/wiki/Downloads
– Mirantis 9.0 ISO and Mirantis VirtualBox scripts
can be downloaded from https://www.mirantis.com/how-to-install-openstack/

Hardware requirements:
– 64-bit host operating system
– 8GB RAM at least
– 300GB+ Disk


1. Download the openstack/fuel-virtualbox project:

$ git clone https://github.com/openstack/fuel-virtualbox.git
Cloning into 'fuel-virtualbox'...
remote: Counting objects: 741, done.
remote: Total 741 (delta 0), reused 0 (delta 0), pack-reused 741
Receiving objects: 100% (741/741), 338.50 KiB | 0 bytes/s, done.
Resolving deltas: 100% (492/492), done.
Checking connectivity... done.

2. Go to the fuel-virtualbox directory and put the Mirantis OpenStack .ISO in the iso/ directory

$ cd fuel-virtualbox/
$ ls -l
total 104
drwx------ 1 sbe sbe 4096 Oct 4 11:14 actions
-rw------- 1 sbe sbe 1091 Jun 15 15:04 clean.sh
-rw------- 1 sbe sbe 7277 Oct 10 10:14 config.sh
drwx------ 1 sbe sbe 0 Oct 3 14:02 contrib
drwx------ 1 sbe sbe 0 Oct 3 14:02 drivers
-rw------- 1 sbe sbe 61122 Jun 15 15:04 dumpkeys.cache
drwx------ 1 sbe sbe 4096 Oct 4 10:44 functions
drwx------ 1 sbe sbe 0 Oct 10 10:11 iso
-rw------- 1 sbe sbe 653 Oct 4 10:40 launch_16GB.sh
-rw------- 1 sbe sbe 652 Jun 15 15:04 launch_8GB.sh
-rw------- 1 sbe sbe 1308 Jun 15 15:04 launch.sh
-rw------- 1 sbe sbe 1462 Jun 15 15:04 MAINTAINERS
-rw------- 1 sbe sbe 1939 Jun 15 15:04 README.md

You can see that there are two launch_X.sh files: one for 16 GB and one for 8 GB. For testing purposes I will use the launch_8GB.sh script. One important file here is config.sh, because it is where you set up the hardware configuration (RAM, disk, CPU) for the Fuel master node and the Fuel slave nodes. You can have a look at it for more details; a sketch of the kind of settings to expect is shown below. If you run a 16 GB RAM machine, then you can use the "launch_16GB.sh" script.
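For reference, this is roughly what to look for in config.sh. This is a sketch only: the variable names below are illustrative assumptions (check the real file for the exact names), while the values reflect the 8 GB defaults described just below.

# Illustrative excerpt of a config.sh -- the variable names are assumptions
vm_master_memory_mb=2048      # RAM for the Fuel Master node (2 GB)
vm_master_disk_mb=61440       # disk for the Fuel Master node (60 GB)
cluster_size=3                # number of Fuel slave nodes to create
vm_slave_memory_mb=1536       # RAM per Fuel slave node (1.5 GB)
vm_slave_disk_mb=66560        # size of each of the 3 disks per slave (65 GB)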

By default, for 8 GB, the script will create 4 machines:

– one Fuel Master node with 2 GB RAM and 60 GB disk
– three Fuel slave nodes with 1.5 GB RAM and three disks of 65 GB each

So the lab will look like this:



  • PXE network : used by the Fuel Master node to administer the Fuel slave nodes and install OpenStack
  • Management network : for OpenStack services communication within the cloud
  • External network : to access the Internet
  • Private network : the inter-instance communication network within the OpenStack cloud
  • Storage network : used by instances to access the storage

3. Then launch the script :

$  ./launch_8GB.sh 
Prepare the host system...
Checking for 'dumpkeys.cache'... OK
Checking for 'expect'... OK
Checking for 'xxd'... OK
Checking for 'VBoxManage'... OK
Checking for VirtualBox Extension Pack... OK
Checking for VirtualBox iPXE firmware...SKIP
VirtualBox iPXE firmware is not found. Used standard firmware from the VirtualBox Extension Pack.
Checking for Mirantis OpenStack ISO image... OK
Going to use Mirantis OpenStack ISO file: iso/MirantisOpenStack-9.0.iso
Checking if SSH client installed... OK
Checking if ipconfig or ifconfig installed... OK


Now the Fuel Master node is going to download a special Linux image.

Once the bootstrap image is downloaded, the Fuel slave nodes boot up with it:


This image sends to the Fuel Master node all the hardware information about the Fuel slave nodes, the so-called "facts".

This is an important step, because it is via this image that the Fuel Master node will discover the slave nodes.


Slave nodes have been created. They will boot over PXE and get discovered by the master node.
To access master node, please point your browser to:

The default username and password is admin:admin

This concludes the first part of the blog. In the next blog, I will show you the interface of Fuel and how to install OpenStack on the Fuel slave nodes.


This article Mirantis OpenStack 9.0 installation using VirtualBox – part 1 appeared first on Blog dbi services.

OTN Appreciation Day : OSWatcher Black Box Analyzer (OSWBBA)

Yann Neuhaus - Tue, 2016-10-11 10:23

Following my last blog post about OSWatcher, I will present OSWatcher Black Box Analyzer (OSWBBA), the tool you can use to graphically display the data collected by OSWBB.
This tool is a Java utility and has existed since OSWatcher version 4.0.0. It lets you create graphs and complete HTML reports containing the collected OS statistics.

OSWBBA requires no installation. It is embedded in the OSWatcher home directory.
To start the Analyzer, run oswbba.jar:
$ java -jar oswbba.jar -i ./archive
Starting OSW Analyzer V7.3.3
OSWatcher Analyzer Written by Oracle Center of Expertise
Copyright (c)  2014 by Oracle Corporation
Parsing Data. Please Wait...
Scanning file headers for version and platform info...
Parsing file srvtestoel7.it.dbi-services.com_iostat_16.07.25.2000.dat

The "-i" parameter indicates the OSWatcher archive directory and is mandatory.
Once launched, the main menu is displayed:
Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs
Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files
Enter L to Specify Alternate Location of Gif Directory
Enter T to Alter Graph Time Scale Only (Does not change analysis dataset)
Enter D to Return to Default Graph Time Scale
Enter R to Remove Currently Displayed Graphs
Enter A to Analyze Data
Enter S to Analyze Subset of Data(Changes analysis dataset including graph time scale)
Enter P to Generate A Profile
Enter X to Export Parsed Data to File
Enter Q to Quit Program

You must have an X Window environment enabled to display graphs.

If you don’t want to go through this menu every time you want to display graphs or generate reports, you can pass all of the above options to OSWBBA from the command line, for example:
$ java -jar oswbba.jar -i ./archive -4 -P last_crash

  • -i  : Specify the archive directory
  • -4 : Create memory graphs
  • -P : Create a profile called “last_crash”

Other options :

  • -6..8 : Same options as in the menu
  • -L : User specified location to place gif files
  • -A : Create a report
  • -B : Specify the start time to analyze (format Mon DD HH:MM:SS YYYY)
  • -E : Specify the end time to analyze  (format Mon DD HH:MM:SS YYYY)
  • -F : Specify the filename of a text file containing a list of options
    (all other options are ignored if -F is used)

Example :
$ java -jar oswbba.jar -i ./archive -6 -B Sep 23 09:25:00 2016 -E Sep 23 09:30:00 2016
This will start OSWatcher Analyzer with the following parameters:

  • Archive directory : $OSWatcher_HOME/archive
  • Generate all CPU GIF files
  • Time slot : September 23, 2016, from 09:25:00 to 09:30:00

Generated file :

It’s also possible to specify all the options you want to use in a text file and then run OSWBBA with the "-F" parameter:
$ cat input.txt
-P today_crash -B Sep 23 09:00:00 2016 -E Sep 23 11:00:00 2016
$ java -jar oswbba.jar -i ./archive -F input.txt

This will generate a complete HTML report (called “today_crash”) with all available graphs based on the statistics stored in the archive directory.


This article OTN Appreciation Day : OSWatcher Black Box Analyzer (OSWBBA) appeared first on Blog dbi services.

OTN Appreciation Day : OSWatcher Black Box (OSWBB)

Yann Neuhaus - Tue, 2016-10-11 10:22

In this post, I will present a useful and easy-to-use Oracle tool: OSWatcher.

What is it?

OSWatcher Black Box (OSWBB), to give it its full name, is a free Oracle tool which helps you diagnose performance issues on the OS side.
Of course, it will not solve the issue for you, but it gives you the system health state at a given moment.
OSWBB is supported on multiple platforms (AIX, Solaris, HP-UX, Linux and Windows) and is installed by default on the Oracle Database Appliance (ODA).

How does it work ?

OSWatcher invokes OS utilities like vmstat, netstat, iostat, etc. by creating a "Data Collector" for each of them available on the system. The "Data Collectors" work as background processes that periodically collect the data provided by these different OS utilities.
Once collected, all the statistics are stored inside a common destination (archive directory).


Below is the content of the archive directory. As you can see, there is a dedicated folder for each type of OS statistics collected:
oracle@srvtestoel7:/u01/app/oracle/product/oswbb/archive/ [JOCDB1] ll
total 36
-rw-r--r-- 1 oracle oinstall 1835 28 sept. 16:55 heartbeat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswifconfig
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswiostat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswmeminfo
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswmpstat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswnetstat
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswprvtnet
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswps
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswslabinfo
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswtop
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswvmstat


You can download OSWatcher from My Oracle Support – Doc ID 301137.1 (.tar file – 6Mb)


To install OSWatcher, you simply have to untar the downloaded file :
$ tar -xvf oswbb733.tar
All necessary files are stored in the oswbb folder.


To remove OSWatcher from your server, you only have to:
– Stop all OSWatcher running processes
– Delete the oswbb folder


To start OSWatcher, run OSWatcher.sh in the background with up to four parameters:

$ nohup ./OSWatcher.sh P1 P2 P3 P4


– P1 = snapshot interval in seconds (default : 30 seconds)
– P2 = number of hours of archive data to store (default : 48 hours)
– P3 = name of a compress utility to compress each file automatically (default : none)
– P4 = alternate location to store the archive directory (default : oswbb/archive)

You can also set the UNIX environment variable oswbb_ARCHIVE_DEST to specify a non-default location.
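For example, to start OSWBB with a 60-second snapshot interval, a 24-hour retention and gzip compression (a sketch only; adjust the values and the archive path to your environment):

# either pass the archive location as the 4th parameter ...
$ nohup ./OSWatcher.sh 60 24 gzip /tmp/oswbb/archive &

# ... or set the environment variable before starting
$ export oswbb_ARCHIVE_DEST=/tmp/oswbb/archive
$ nohup ./OSWatcher.sh 60 24 gzip &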

Startup steps

Starting OSWatcher involves four steps:

  1. Check parameters
    $ ./OSWatcher.sh 60 24 gzip /tmp/oswbb/archive
    Info...Zip option IS specified.
    Info...OSW will use gzip to compress files.
  2. Discover OS utilities
    Testing for discovery of OS Utilities...
    VMSTAT found on your system.
    IOSTAT found on your system.
    MPSTAT found on your system.
    Discovery completed.
  3. Discover CPU count
    Testing for discovery of OS CPU COUNT
    oswbb is looking for the CPU COUNT on your system
    CPU COUNT will be used by oswbba to automatically look for cpu problems
    CPU COUNT found on your system.
    CPU COUNT = 1
  4. Data collection
    Data is stored in directory: /tmp/oswbb/archive
    Starting Data Collection...
    oswbb heartbeat:mar. sept. 13 22:03:33 CEST 2016
    oswbb heartbeat:mar. sept. 13 22:04:33 CEST 2016
    oswbb heartbeat:mar. sept. 13 22:05:33 CEST 2016
Check if OSWBB is running

$ ps -ef | grep OSWatcher | grep -v grep
oracle    8130     1  0 13:47 pts/0    00:00:33 /bin/sh ./OSWatcher.sh 5 48
oracle    8188  8130  0 13:47 pts/0    00:00:00 /bin/sh ./OSWatcherFM.sh 48 /u01/app/oracle/product/oswbb/archive

The OSWatcherFM.sh process is the file manager that deletes collected statistics once they have reached their retention.


Run stopOSWbb.sh to stop all OSWatcher processes:
$ ./stopOSWbb.sh

Configure automatic startup

Oracle provides an RPM package to configure auto-start of OSWatcher when the system boots.
You can download it here: My Oracle Support – Doc ID 580513.1
Once downloaded, install the package (as root):
$ rpm -ihv oswbb-service-7.2.0-1.noarch.rpm
Preparing... ######################################### [100%] 1:oswbb-service    ######################################### [100%]

You can adapt the following values in /usr/libexec/oswbb-service/oswbb-helper to define the parameters with which OSWatcher will auto-start:

Start the service :
$ service oswbb start
Starting oswbb (via systemctl): [ OK ]

Check the service :
$ service oswbb status
OSWatcher is running.

Stop the service :
$ service oswbb stop
Stopping oswbb (via systemctl):  Warning: Unit file of oswbb.service changed on disk, 'systemctl daemon-reload' recommended.
[  OK  ]

Enable the service at system startup:
$/sbin/chkconfig oswbb on

Systemd commands (Linux 7) :
$ systemctl stop oswbb.service
$ systemctl start oswbb.service
$ systemctl status oswbb.service
$ systemctl enable oswbb.service

Inside the archive directory, one dedicated folder is created per type of collected statistics:
oracle@srvtestoel7:/u01/app/oracle/product/oswbb/archive/ [JOCDB1] ll
total 0
drwxr-xr-x 2 oracle oinstall 136 23 sept. 10:00 oswifconfig
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswiostat
drwxr-xr-x 2 oracle oinstall 134 23 sept. 10:00 oswmeminfo
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswmpstat
drwxr-xr-x 2 oracle oinstall 134 23 sept. 10:00 oswnetstat
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswprvtnet
drwxr-xr-x 2 oracle oinstall 124 23 sept. 10:00 oswps
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswslabinfo
drwxr-xr-x 2 oracle oinstall 126 23 sept. 10:00 oswtop
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswvmstat

In a following blog post, I’ll present OSWatcher Black Box Analyzer (oswbba), the tool used to graphically analyze the collected data.


This article OTN Appreciation Day : OSWatcher Black Box (OSWBB) appeared first on Blog dbi services.

OTN Appreciation Day – tnsping

Yann Neuhaus - Tue, 2016-10-11 08:38

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we would all post them on the same day. I do have a lot of favorite Oracle tools, and the one I chose today is: tnsping.

tnsping tells you if your connect string can be resolved and if the listener the connect string is pointing to is available; in the end, it displays an estimate of the round trip time (in milliseconds) it takes to reach the Oracle Net service.

All in all, tnsping is very easy to use and that’s why I love it; it is not as overloaded as e.g. crsctl. In fact, tnsping knows only two parameters, <address> and optionally <count>, as shown in the following example.

Usage: tnsping <address> [<count>]

To get the option list of tnsping, a few lines are enough. I don’t need to scroll down several pages, like e.g. for emctl. emctl is another one, besides crsctl, where you can spend a lifetime only reading the manual. No, I picked tnsping this time because I like the short option list.

Here we go … now I run one tnsping without and one with count.

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN
TNS Ping Utility for Linux: Version - Production on 11-OCT-2016 14:28:14
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
OK (0 msec)

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN 5
TNS Ping Utility for Linux: Version - Production on 11-OCT-2016 14:28:20
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
OK (0 msec)
OK (10 msec)
OK (0 msec)
OK (0 msec)
OK (0 msec)

But … wait a second … my RMAN connect string points to host oel001, but it should point to oel002. Let’s take a look in $ORACLE_HOME/network/admin/tnsnames.ora

oracle@oel001:/u00/app/oracle/ [OCM121] cat /u00/app/oracle/product/
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel002)(PORT = 1521))
      (SERVICE_NAME = OCM121)

It looks correct. So what is going on here? There are several explanations for this issue.

1.) You might have set the TNS_ADMIN environment variable, which points to a totally different directory
2.) Or your sqlnet.ora might point to an LDAP server first, which resolves the name
3.) Or a totally different tnsnames.ora file is taken into account, but which one?
4.) Or something totally different, e.g. a corrupt nscd, symlinks …

For quite a long time now, Oracle has not been searching first in $ORACLE_HOME/network/admin/tnsnames.ora
to get the name resolved. The tnsnames.ora search order is the following:

1.) $HOME/.tnsnames.ora    # yes, it looks for a hidden file in your home directory first
2.) /etc/tnsnames.ora    # then, a global tnsnames.ora in the /etc directory
3.) $ORACLE_HOME/network/admin/tnsnames.ora    # and last but not least, it looks it up in the $ORACLE_HOME/network/admin

To prove it, simply run strace on your tnsping command and take a look at the trace file.

$ strace -o /tmp/tnsping.trc -f tnsping RMAN
$ cat /tmp/tnsping.trc | grep tnsnames

21919 access("/home/oracle/.tnsnames.ora", F_OK) = 0
21919 access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
21919 access("/u00/app/oracle/product/", F_OK) = -1 ENOENT (No such file or directory)
21919 stat("/home/oracle/.tnsnames.ora", {st_mode=S_IFREG|0644, st_size=173, ...}) = 0
21919 open("/home/oracle/.tnsnames.ora", O_RDONLY) = 3

Here we go … in my case, the "/home/oracle/.tnsnames.ora" was taken into account. Let’s take a look.
Indeed, I found an entry in there.

oracle@oel001:/home/oracle/ [OCM121] cat /home/oracle/.tnsnames.ora
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))
      (SERVICE_NAME = OCM121)

Have fun with tnsping.





This article OTN Appreciation Day – tnsping appeared first on Blog dbi services.

OTN Appreciation Day: Oracle Text

Joel Kallman - Tue, 2016-10-11 08:13
For OTN Appreciation Day, I was told that it wouldn't be appropriate to write about my favorite Oracle feature (APEX, obviously).  So I'll gladly promote my second-favorite Oracle Database feature...Oracle Text!

I've used Oracle Text for many years - from when it was SQL*TextRetrieval to Oracle ConText Option to Oracle interMedia Text to finally Oracle Text.  This was one of those products that used to be a for-cost option and was merged into the Oracle Database as native, no-cost functionality (how cool is that?).  You can use Oracle Text to index BLOB columns containing Microsoft Word or PDF documents, you can score the query results for relevance, you can perform a proximity search within the contents (find "Oracle" and "APEX" within 10 words of each other), you can search within sections of a document, you can do a fuzzy search, you can create a thesaurus to assist in searching for similar terms, you can create a text result with the matching words highlighted, and on and on.

The beauty of Oracle Text is that it's all completely accessible in SQL.  Any tool that can "talk" SQL can easily take advantage of this rich functionality in the Oracle Database - Java, .NET, PHP, Node, and of course, APEX!  I authored the PL/SQL functions and text indexes (and text queries) for AskTom back in 2001 - and they're still running as fast as ever today.  One of the most popular applications inside of Oracle, an employee directory (1.5M page views every day from 55,000 distinct users), is an APEX application that we're responsible for - and we are in the process of expanding this to use the fuzzy search capabilities of Oracle Text - what is more commonly misspelled than someone's name?  And it's easy, because this is all running inside the Oracle Database.  Whether your content is a string or BLOB or XML or JSON, once this content is inside the Oracle Database, it's accessible to Oracle Text and SQL, and the application development opportunities on top of this are easy.  I'm a big fan of Oracle Text, and you should take a look at it too!
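To give a flavor of it, here is a minimal sketch of a context index plus a proximity/fuzzy query (the table, column and index names are made up for illustration):

create index docs_ctx on docs (content) indextype is ctxsys.context;

-- rows where "oracle" and "apex" appear within 10 words of each other,
-- or a fuzzy match on a misspelled term, ranked by relevance
select score(1) relevance, doc_id
  from docs
 where contains(content, 'near((oracle, apex), 10) or fuzzy(aplication)', 1) > 0
 order by score(1) desc;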

OTN Appreciation Day : Create Database Using SQL | Thinking Out Loud Blog

Michael Dinh - Tue, 2016-10-11 07:30

Do you ever wonder how to get all the parameters for the CREATE DATABASE statement?

I will be sharing some of the reverse engineering done to create a duplicate copy of the database.

Some of you may be thinking, “Why not just duplicate database or backup and restore?”

For the project I was working on, this was not feasible since Extended Data Types (12c NF) was enabled and there is no going back.

Restoring the database from a backup would result in too much data loss.

This leaves only one option: create a new database with max_string_size=standard and perform a full export/import.
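The export/import itself is then a standard full DataPump run. A sketch, with illustrative directory object and file names:

$ expdp system full=y directory=DATA_PUMP_DIR dumpfile=full_db1.dmp logfile=full_db1_exp.log
# once the new database has been created with max_string_size=standard:
$ impdp system full=y directory=DATA_PUMP_DIR dumpfile=full_db1.dmp logfile=full_db1_imp.log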

From backup controlfile to trace:
SYS@DB1> alter database backup controlfile to trace as '/tmp/cf_@.sql' reuse resetlogs;
Database altered.

From /tmp/cf_DB1.sql:
$ ll /tmp/cf_DB1.sql
-rw-r--r--. 1 oracle oinstall 2955 Oct 11 04:45 /tmp/cf_DB1.sql

  GROUP 1 '/oradata/DB1A/onlinelog/o1_mf_1_czl4h9sg_.log'  SIZE 200M BLOCKSIZE 512,
  GROUP 2 '/oradata/DB1A/onlinelog/o1_mf_2_czl4h9yr_.log'  SIZE 200M BLOCKSIZE 512,
  GROUP 3 '/oradata/DB1A/onlinelog/o1_mf_3_czl4hbdb_.log'  SIZE 200M BLOCKSIZE 512
DB1:(SYS@DB1):PRIMARY> select name,comp_id,comp_name,version,schema,status
  2  from v$database,dba_registry
  3  order by 2
  4  ;

NAME       COMP_ID      COMP_NAME                                VERSION    SCHEMA     STATUS
---------- ------------ ---------------------------------------- ---------- ---------- --------
DB1        CATALOG      Oracle Database Catalog Views   SYS        VALID
DB1        CATJAVA      Oracle Database Java Packages   SYS        VALID
DB1        CATPROC      Oracle Database Packages and Types SYS        VALID
DB1        JAVAVM       JServer JAVA Virtual Machine    SYS        VALID
DB1        XDB          Oracle XML Database             XDB        VALID
DB1        XML          Oracle XDK                      SYS        VALID

6 rows selected.

DB1:(SYS@DB1):PRIMARY> select property_name,property_value from DATABASE_PROPERTIES;

PROPERTY_NAME                            PROPERTY_VALUE
---------------------------------------- ----------------------------------------
DICT.BASE                                2
DEFAULT_EDITION                          ORA$BASE
Flashback Timestamp TimeZone             GMT
EXPORT_VIEWS_VERSION                     8
DEFAULT_TBS_TYPE                         SMALLFILE
GLOBAL_DB_NAME                           DB1
NLS_NCHAR_CHARACTERSET                   AL16UTF16
NLS_NCHAR_CONV_EXCP                      FALSE
NLS_LENGTH_SEMANTICS                     BYTE
NLS_COMP                                 BINARY
NLS_DUAL_CURRENCY                        $
NLS_TIME_TZ_FORMAT                       HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT                          HH.MI.SSXFF AM
NLS_SORT                                 BINARY
NLS_DATE_LANGUAGE                        AMERICAN
NLS_DATE_FORMAT                          DD-MON-RR
NLS_CALENDAR                             GREGORIAN
NLS_CHARACTERSET                         AL32UTF8
NLS_NUMERIC_CHARACTERS                   .,
NLS_ISO_CURRENCY                         AMERICA
NLS_CURRENCY                             $
NLS_TERRITORY                            AMERICA
NLS_LANGUAGE                             AMERICAN
DST_SECONDARY_TT_VERSION                 0
DST_PRIMARY_TT_VERSION                   18
DST_UPGRADE_STATE                        NONE
MAX_STRING_SIZE                          STANDARD
DBTIMEZONE                               US/Mountain
NO_USERID_VERIFIER_SALT                  88C7FDB8D44CA60E05624A08A177722C

37 rows selected.

  1  select log_mode,flashback_on,force_logging,created
  2  from V$DATABASE

LOG_MODE     FLASHBACK_ON       FORCE_LOGGING                           CREATED
------------ ------------------ --------------------------------------- -------------------
ARCHIVELOG   NO                 NO                                      2016-10-08 08:34:02

  1  select status, filename

---------- --------------------

  1  select *
  3  order by 1

ATTRIBUTE_NAME                           VALUE
---------------------------------------- ----------------------------------------
DEFAULT_TIMEZONE                         US/Mountain
FILE_WATCHER_COUNT                       0
LOG_HISTORY                              30

11 rows selected.

The finished SQL: crdb.sql
spool crdbp.log
set echo on timing on time on
host echo $ORACLE_SID
host sysresv
create spfile from pfile;
startup force nomount;
spool off
exec dbms_scheduler.set_scheduler_attribute(attribute=>'default_timezone',value=>'US/Mountain');

-- alter system set nls_length_semantics=CHAR scope=both sid='*';
-- alter database flashback on;
-- alter database FORCE LOGGING;
-- alter database enable block change tracking;

connect system/oracle @?/sqlplus/admin/pupbld.sql 

OTN Appreciation Day – The Community

Mathias Magnusson - Tue, 2016-10-11 07:00

So this is my post about my favorite feature of my favorite product. I can hear a lot of you say “The community is not a feature of the database or any other product”?

That is your opinion; I think it is the greatest one. I’d say that the mostly friendly Oracle community is by far the most important driver for quality solutions. I know I have learned, and still do learn, more from the community than from any manual.

I began using Oracle AskTom back when I started with Oracle. Tom was nice enough to start it up just months before I joined the fun in the Oracle world. From there I started finding all the amazing blogs that let me dig deeper and deeper into the database. Part of finding that community was finding great presenters at the Training Days that RMOUG holds every year. Those two days used to be the professional highlight of the year while I was based in Denver, CO.

My manager at the time used to comment that where she and others just saw a technology I referred to the community over and over. That is how I see it, when people talk about Oracle I think about the company and the community first and about the specific technology after that.

That is still the case, to the point where today I try to create a community where one is missing. That one is just a very small piece in the bigger worldwide community of Oracle professionals. The user group scene is one of the greatest opportunities available to learn more and to get a chance to share the knowledge one happens to pick up.

Not only is taking part in the community one of the greatest opportunities available to learn critical skills in the technology you focus on; presenting on something you think you know forces you to learn even more about it. It is also a great way to start building a network with others who enjoy sharing and debating technical aspects of Oracle technologies.

Another part of the community is OTN, which sponsors a large part of the things that make the community “one” community. Things like the ACE program, which awards some of the best in the community the ACE title for their ability to share and educate. The ability for user groups to have ACE Directors visit and hold a couple of presentations is a fantastic thing for every member of a user group.

Going to conferences, and presenting at them when I get the chance, is one of the things I enjoy the most. That is when you really feel the power of the community. I feel we have too little of it in Sweden, so to see and feel how great it is in other places provides a lot of motivation to bring people together in Sweden.

If you are not feeling part of the user group community, sign up with your local user group. Start reading blogs, get on Twitter and start following some of the greats. From there you’ll find more and more interesting sites, people and blogs to follow.

I’ll refrain from name-dropping the guys and gals I follow. If you know me, you’ll know who they are anyway. If not, search for your favorite topics within Oracle on Twitter, or google them followed by “twitter” or some other social network name, and you’ll find lots of twitterites or bloggers writing about the stuff that inspires you.

If you still want a list of where to start, hit me up and I’ll get you a good starting point from where to expand your horizons.

OTN Appreciation Day : ADVM

Yann Neuhaus - Tue, 2016-10-11 05:26

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we would all post them on the same day. Here is my favorite feature: ADVM – the Oracle ASM Dynamic Volume Manager.

So, what is it? The docs tell you this: “Oracle ASM Dynamic Volume Manager (Oracle ADVM) provides volume management services and a standard disk device driver interface to clients. File systems and other disk-based applications send I/O requests to Oracle ADVM volume devices as they would to other storage devices on a vendor operating system.”

The easy to understand version is this: It enables us to use regular file systems on top of ASM.

Does it make sense to use it? When you have ASM running on the host, or on all the hosts of a Grid Infrastructure cluster, anyway, then it definitely makes sense. ASM will do all the mirroring and striping for you, so there is no need to use another technology to achieve that when you can create ADVM volumes and create file systems on top of them. Although the most common scenario is to create an ACFS file system on top of the volumes, you are actually not limited to that. Let’s do a short demo.

Let’s say we have these devices available for use by the grid user:

[root@rac1 ~] ls -la /dev/sd[b-f]*
brw-rw----. 1 root disk     8, 16 Oct 10 17:54 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 17 Oct 10 18:10 /dev/sdb1
brw-rw----. 1 root disk     8, 32 Oct 10 17:54 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 33 Oct 10 18:10 /dev/sdc1
brw-rw----. 1 root disk     8, 48 Oct 10 17:54 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 49 Oct 10 18:10 /dev/sdd1
brw-rw----. 1 root disk     8, 64 Oct 10 17:54 /dev/sde
brw-rw----. 1 grid asmadmin 8, 65 Oct 10 18:10 /dev/sde1
brw-rw----. 1 root disk     8, 80 Oct 10 17:54 /dev/sdf
brw-rw----. 1 grid asmadmin 8, 81 Oct 10 18:10 /dev/sdf1

We want to use "/dev/sde1" for our new ADVM volume. The first thing we need is an ASM disk group, because an ADVM volume has to be placed on one:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm
SQL> create diskgroup ADVM external redundancy disk '/dev/sde1';

Diskgroup created.


Ok, fine. How can we proceed with creating a volume? Quite easy:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
ORA-15032: not all alterations performed
ORA-15221: ASM operation requires compatible.asm of or higher (DBD ERROR: OCIStmtExecute)

Hm, this is quite clear when you search the documentation: ADVM is only available since 11gR2:

Easy to fix:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm

SQL> alter diskgroup ADVM set attribute 'compatible.asm'='12.1';

Diskgroup altered.


Let’s try again:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
grid@rac1:/home/grid/ [+ASM1] 

Perfect. Now I have a volume visible to the operating system:

grid@rac1:/home/grid/ [+ASM1] ls -la /dev/asm/*advm*
brwxrwx---. 1 root asmadmin 252, 115201 Oct 10 18:20 /dev/asm/voladvm-225

On top of this volume we can now create file systems. The natural choice would be ACFS:

[root@rac1 ~] mkfs.acfs /dev/asm/voladvm-225
mkfs.acfs: version                   =
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/voladvm-225
mkfs.acfs: volume size               = 2147483648  (   2.00 GB )
mkfs.acfs: Format complete.

But in fact any other file system the operating system supports is possible, too:

[root@rac1 ~] mkfs.xfs /dev/asm/voladvm-225
meta-data=/dev/asm/voladvm-225   isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
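
And mounting the XFS file system then works as for any other block device (the mount point is illustrative; run this as root):

[root@rac1 ~] mkdir -p /mnt/advm_xfs
[root@rac1 ~] mount -t xfs /dev/asm/voladvm-225 /mnt/advm_xfs
[root@rac1 ~] df -h /mnt/advm_xfs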

Quite cool, isn’t it? Whatever file system your operating system supports can be put on ASM disk groups …


This article OTN Appreciation Day : ADVM appeared first on Blog dbi services.

OTN Appreciation Day : Prebuilt Developer VMs

Bar Solutions - Tue, 2016-10-11 05:11

I learnt a lot from the entire Oracle Technology Network and I still do. One of the best features of OTN these days, IMHO, is the ability to download prebuilt virtual machines. Since I am a developer and not an administrator, I don’t like to be bothered with stuff like how much disk space I need, how many cores I should use, etc. I can just download a VirtualBox image, import it and start experimenting with the technology I am interested in, for instance the multitenant features of the Oracle 12c database. The best thing about using a virtual machine is that when you screw up really, really badly, you can just throw away the virtual machine, import the original version again and try again.
Thanks, OTN, for making it possible for me to learn new technologies without having to learn all the administrator stuff.
Oh, and if I need some extra information or find out what an error means and what I can do about it, there is almost always an answer to be found at OTN.


OTN Appreciation Day: Automatic Storage Management (ASM)

Jeff Moss - Tue, 2016-10-11 04:48

Big shout out to Tim for kicking this off!

Automatic Storage Management (ASM) provides optimised volume management and filesystem capabilities for Oracle databases, whether they be single or multi instance (RAC) implementations.

Although it was introduced with Oracle 10g Release 1 in 2004, I first used it in a production scenario around 2008, when upgrading a hardware platform for a Data Warehouse. It seemed like a logical choice for me and the DBAs at the time, although the storage team were less pleased at losing some control. Ultimately it proved a big success on that project and is still in stable, reliable use today.

Things I like about ASM include:

  • Simplifies storage management
  • Automatic rebalancing when capacity is added
  • Visibility within Enterprise Manager for monitoring
  • Availability of detailed metrics within the database
  • Reliable, balanced and consistent performance
  • Works with RAC
  • Rolling upgrades and patching
  • Provides a reliable cluster filesystem (ACFS)
  • Even more cool features coming in 12c such as Flex ASM


Some useful links:

ASM Administrators Guide 12cR1 (Oracle Docs)

The Mother Of All ASM Scripts (John Hallas)

Technical overview of new features for ASM in 12c (Whitepaper)


Oracle 12c – Managing RMAN persistent settings via SQL

Yann Neuhaus - Tue, 2016-10-11 04:29

RMAN persistent settings can be managed in two different ways.

  • Via the RMAN interface
  • Via SQL

There are several scenarios where it might be helpful to use the SQL way. I will show three of them:

  • Automation
  • Reset to default
  • Rebuilding the RMAN persistent settings after losing all controlfiles (no catalog)

Let’s take a look at the first scenario: for example, you have an automated way to run SQL against all of your databases and you want to change the RMAN retention from 3 days to 4 days for all of them. Then you could run the following.

SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------

-- The new value is, of course, immediately reflected via the RMAN interface as well


RMAN configuration parameters for database with db_unique_name OCM121 are:


The second useful scenario might be to reset the whole RMAN config in one shot. Instead of running several clear commands like "RMAN> CONFIGURE BACKUP OPTIMIZATION CLEAR;", simply run RESETCONFIG (a sketch of the calls follows below).


PL/SQL procedure successfully completed.

-- After executing this command, the v$rman_configuration view is empty, which means that all
-- RMAN persistent settings are default.

SQL> select conf#, name, value from v$rman_configuration;

no rows selected
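
For reference, here is a sketch of the calls behind the two outputs above. They rely on the sys.dbms_backup_restore package, which is undocumented, so take the procedure names as an assumption to verify on your release before relying on them:

-- change one persistent setting (the retention) via SQL
SQL> execute sys.dbms_backup_restore.setconfig('RETENTION POLICY','TO RECOVERY WINDOW OF 4 DAYS');

-- reset all RMAN persistent settings to their defaults in one shot
SQL> execute sys.dbms_backup_restore.resetconfig;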


And last but not least, to restore the RMAN persistent settings via SQL, in case you have lost all of your controlfiles and no RMAN catalog is in place.

One little side note, in case you have an RMAN catalog: the RMAN sync from the controlfile to the catalog is usually unidirectional, meaning that the controlfile is always the master and it syncs its information to the catalog. However, there are exceptions where it is bidirectional. One of them is when you recreate the controlfile manually; then RMAN is able to get the last RMAN persistent settings from the catalog and apply them to the controlfile.

However, if you don’t have a catalog, you can dump out the RMAN persistent settings as SQL simply by backing up the controlfile to trace.

SQL> alter database backup controlfile to trace as '/tmp/cntrl.trc';

Database altered.

-- Configure RMAN configuration record 1
-- Configure RMAN configuration record 2
-- Configure RMAN configuration record 3
-- Configure RMAN configuration record 4
-- Configure RMAN configuration record 5
-- Configure RMAN configuration record 6
-- Configure RMAN configuration record 7
-- Configure RMAN configuration record 8
-- Configure RMAN configuration record 9

And if you run into the severe situation of losing all controlfiles, you can restore the RMAN persistent settings quite quickly. This is especially useful when you have configured complex Media Manager settings.

P.S. Managing RMAN persistent settings via SQL is not a 12c feature. It has existed for quite a long time.


This article Oracle 12c – Managing RMAN persistent settings via SQL appeared first on Blog dbi services.

OTN Appreciation Day: Breaking Barriers…In Memory

Marco Gralike - Tue, 2016-10-11 04:17
The stuff I liked most in the database releases of the last years is the…

OTN Appreciation Day: OBIEE's Export to Excel Functionality

Rittman Mead Consulting - Tue, 2016-10-11 04:11

Only kidding…. Did you know that almost any transformation doable in Excel can be achieved in OBIEE, probably faster and with zero impact on your local workstation?

Cat Million Rows (image credit: https://twitter.com/fomin_andrew/status/749305992198881281)

Why bother downloading data to Excel when you have pivot tables, conditional formatting and a huge variety of graphs with drilling/action capabilities, all in OBIEE, a platform where an analysis can be shared by passing a single URL instead of emailing huge XLS files?

Sometimes, however, there is a good reason to export to Excel, like when preparing a presentation on top of OBIEE data/analysis. The following are the possible ways of achieving the OBIEE/Excel integration:

  • Dashboard and Analysis can be exported to excel with a single click
  • A BI Publisher version of a dashboard can be created and used by default when exporting
  • Excel can be linked via Smartview to a single Analysis: Data and Visualisations can be downloaded and refreshed upon request with configurable parameters.
  • Excel can directly query the BiServer Subject Areas via Smartview.
  • Excel version of Dashboard and Analysis can be delivered by email via Agents.

An important note: Oracle published "OBIEE - New Features, Export Guidance And Recommendations For Working With Microsoft Office (Doc ID 1558070.1)". This document contains recommendations on how to provide the export to Excel depending on the output data volume. The document was written for a specific OBIEE release, but the same suggestions apply to almost any OBIEE version available.

Categories: BI & Warehousing

OTN Appreciation Day : Partition your table online !

Laurent Schneider - Tue, 2016-10-11 03:26

#ThanksOTN @oraclebase

No, I am not talking about DBMS_REDEFINITION, where you get a kind of online feeling.

No, I don’t want to rename my table, rename my foreign keys, my primary key and my not-null constraints, recreate my referential integrity, or recompile my triggers.

I just want to partition a non-partitioned table.


This is going to save me a lot of Saturday work 🙂

You need 12.2 to run this.

OTN Appreciation Day: Flashback

John Scott - Tue, 2016-10-11 02:48

This is my contribution to the excellent OTN Appreciation Day idea by Tim

The Flashback (particularly Flashback Query) features of the database have saved my neck many (too many!) times over the years.

For example retrieving the value of the Employees salary as it was 10 minutes ago:

SQL> select sal
 from emp
 as of timestamp sysdate - interval '10' minute
 where ename = 'SCOTT';

This feature can be used almost everywhere Oracle is used, for example it’s embedded into the Export Application feature of Oracle Application Express


As an extension to this, the Flashback Data Archive feature allows you to maintain an archive of your application data that you can query in real time, without having to write your own logic.
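A minimal sketch of how that looks (the archive name, tablespace and quota are illustrative):

SQL> create flashback archive fda_1year tablespace fda_ts quota 1g retention 1 year;

SQL> alter table emp flashback archive fda_1year;

SQL> -- history beyond the undo retention is now queryable
SQL> select sal from emp as of timestamp systimestamp - interval '30' day where ename = 'SCOTT';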

Flashback – an extremely useful but often little-known and underused feature of Oracle!

Which EBS 12.1.3 RPC Should I Apply?

Steven Chan - Tue, 2016-10-11 02:05

A customer was in the process of upgrading their EBS 12.1.3 environment to RPC 4 when they heard about the release of RPC 5.  They asked, "Which one should we apply?"

This kind of question crops up frequently, since we're continually producing new EBS updates. There is no definitive right answer. For example, the answer depends upon:

  • How far a customer might have progressed in their testing of the previous update
  • The number of customizations affected by the respective updates
  • Whether the customer wants new functionality available only in the latest update
  • Whether the customer needs fixes that are only available in the latest update
  • How risk-averse the customer is (e.g. "Let other customers live on the bleeding edge")
  • etc.

(Screenshot: EBS 12.1.3 RPC 5 download)

What is a Recommended Patch Collection?

Recommended Patch Collections (RPC) are collections of patches that we have already released individually for EBS 12.1.3.

Customers can apply individual patches or they can apply a Recommended Patch Collection.  Either approach is fine.

We periodically bundle the latest patches together to save you the trouble of applying them individually.  As an added bonus, all patches in a given RPC are tested together for compatibility, which isn't always true for individual patches that are released on an emergency basis.

Recommended Patch Collections are not a “release.”  We currently have no plans to make a specific RPC a minimum baseline or prerequisite for new EBS 12.1 patches. 

What is our policy for releasing EBS patches?

Our official policy for releasing EBS 12.1 and 12.2 patches is published here:

Our current policy is that EBS 12.1.3 users should apply the latest 12.1.3 product family patches for the products in use, plus a small number of ATG technology stack patches. We will produce new EBS patches for issues that can be reproduced in environments with those prerequisites.

What's the best strategy for applying EBS 12.1.3 Recommended Patch Collections?

We generally recommend applying the latest EBS updates to production as soon as it's convenient for your users.

If you're in the middle of testing a specific RPC and it's too late to switch to a newer one, go ahead and complete your patching project on production. You can always apply the next RPC in a subsequent project.

Related Articles

Categories: APPS Blogs

Documentum story – Manual deployment of X DARs on Y docbases

Yann Neuhaus - Tue, 2016-10-11 02:00

In a previous blog (click here), I presented a common issue that might occur during the installation of some DARs and how to handle it with what Documentum provides, but there are some limitations to that. Indeed, the script repositoryPatch.sh is pretty good (except for the small bug explained in the other blog), but its execution is limited to only one docbase, and it is pretty boring to always put the full path of the DARs, knowing that usually all DARs will be in the same place (or at least this is what I would recommend). In addition, this script repositoryPatch.sh might not be available in your Content Server, because it is normally available only after applying a patch of the Content Server. Therefore we usually use our own shell script to deploy X DARs on Y docbases with a single command.


For this blog, let’s use the following:

  • Documentum CS 7.2
  • RedHat Linux 6.6
  • $DOCUMENTUM=/app/dctm/server
  • $DM_HOME=/app/dctm/server/product/7.2


I will propose in this blog three different solutions to avoid the issue with the space in the name of a DAR and to be able to deploy all the DARs that you want on all the docbases that you define.

  1. Variable with a space-separated list
dar_list=("DAR 1.dar" "DAR 2.dar" "DAR 3.dar")
for docbase in $docbases
        for dar in "${dar_list[@]}"
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml


This is probably not the best solution, because you have to manually add double quotes around each DAR name, which is a little bit boring unless you already have such a list. Please note that with this script, all DARs must be in the folder $DM_HOME/install/DARsInternal/, which is the folder used by Documentum by default for DARs.


  2. No variable but still a space-separated list
# assumes $docbases, $dar_location, $username and $password are defined earlier
for docbase in $docbases; do
        for dar in "DAR 1.dar" "DAR 2.dar" "DAR 3.dar"; do
                darname="$dar"
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done


Same as before for this one, except that you don't need the @ trick since the list of DARs is in the for loop directly, but you still need to manually put double quotes around the file names.


  3. Variable with a comma-separated list
dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
for docbase in $docbases
        IFS=',' ; for dar in $dar_list
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml


This version is my preferred one, because all you need is a list of the DARs to be installed, separated by commas, which is pretty simple to obtain and simpler to manage than double quotes everywhere. All these versions will produce the following output, showing that the script works properly even for DARs containing spaces in their names:

Deploying DAR 1.dar into DOCBASE1
Deploying DAR 2.dar into DOCBASE1
Deploying DAR 3.dar into DOCBASE1
Deploying DAR 1.dar into DOCBASE2
Deploying DAR 2.dar into DOCBASE2
Deploying DAR 3.dar into DOCBASE2
Deploying DAR 1.dar into DOCBASE3
Deploying DAR 2.dar into DOCBASE3
Deploying DAR 3.dar into DOCBASE3


So that was the deployment of several DARs in several docbases. By default, Documentum will consider that the username is "dmadmin". If this isn't the case, then this script will not work in its current state. Yes I know, we specified the user in the script, but Documentum doesn't care and it will fail if you aren't using dmadmin. If you need to specify another name for the Installation Owner, then you need to do three additional things. The first one is to add a new parameter in the script, which will therefore now look like the following:

dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
for docbase in $docbases
        IFS=',' ; for dar in $dar_list
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -Dinstallparam="$dar_location/installparam.xml" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml


After doing that, the second thing to do is to create the file installparam.xml that we used above. In this case, I put this file in $DM_HOME/install/DARsInternal, but you can put it wherever you want.

[dmadmin@content_server_01 ~]$ cat $DM_HOME/install/DARsInternal/installparam.xml
<?xml version="1.0" encoding="UTF-8"?>
<installparam:InputFile xmlns:installparam="installparam" xmlns:xmi="http://www.omg.org/XMI" xmi:version="2.0">
    <parameter value="dmadmin" key="YOUR_INSTALL_OWNER"/>
</installparam:InputFile>

Just replace YOUR_INSTALL_OWNER in this file with the name of your Installation Owner. Finally, the last thing to do is to update the buildfile. In our script, we are using the default one provided by EMC. In this buildfile, you need to specifically tell Documentum that you want it to take a custom parameter file into account, and this is done by adding a single line in the emc.install XML tag:

[dmadmin@content_server_01 ~]$ grep -A5 emc.install $DM_HOME/install/composer/deploy.xml
        <emc.install dar="${dar}"
                     inputfile="${installparam}" />


Once this is done, you can just restart the deployment of the DARs and it should be successful this time. Another solution to specify another Installation Owner or add more install parameters is to not use the default buildfile provided by EMC but your own custom buildfile instead. This would be an ANT file (XML with project, target, and so on) that defines exactly what to do, so it is highly customizable. So yeah, there are a lot of possibilities!

Note: Once done, don’t forget to remove the line from the file deploy.xml ;)


Hope you enjoyed this blog and that it gave you some ideas about how to improve your processes or how to do more with less. See you soon!


This article Documentum story – Manual deployment of X DARs on Y docbases appeared first on Blog dbi services.

OTN Appreciation Day : Transportable tablespaces

Yann Neuhaus - Tue, 2016-10-11 01:15

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we would all post them on the same day. Here is my favorite feature, which I described at the "EOUC Database ACES Share Their Favorite Database Things" session: Transportable Tablespaces, which appeared in Oracle 8.1.5.


I’ll start with a change that came between Oracle 7 and Oracle 8. The ROWID, which identifies the physical location of a row within a database (with file ID, block offset, and row directory number), changed to be the location within a tablespace only. The format did not change, but the file ID was changed to be a relative file number instead of an absolute file number.
Here is the idea:

Actually, to be able to migrate without visiting each block (the ROWID is present in all blocks, all redo vectors, etc.), they kept the same numbering, but the number is now unique only within a tablespace. The first goal was to hold more datafiles per database (Oracle 8 was the introduction of VLDB – Very Large Database – concepts): the limit of 255 datafiles per database became a limit of 255 datafiles per tablespace. So the numbers start the same as before but can go further.

This change was simple because any time you want to fetch a row by its ROWID, you know which table you are querying, so you know the tablespace. The exception is when the ROWID comes from a global index on a partitioned table, and for this case Oracle 8 introduced an extended ROWID that contains additional bytes to identify the segment by its DATA_OBJECT_ID.

By the way, this makes tablespaces more independent of the database that contains them, because all row addressing is relative.

Locally Managed Tablespaces

Another change in 8i was Locally Managed Tablespaces. Before, the space management of the tablespaces was centralized in the database dictionary. Now, it is delocalized in each tablespace. What was stored in UET$ system table is now managed as a bitmap in the first datafile header.

Pluggable tablespaces

The original name of transportable tablespaces was "pluggable tablespaces". Because they are now more self-contained, you can detach them from a database and attach them to another database without changing the content. This means that data is moved physically, which is faster than the select/inserts that are behind a logical export/import. There are only two things that do not come with the datafiles.

Open transactions store their undo in the database UNDO tablespace. This means that if you detach a user tablespace, you don’t have the information to roll back the ongoing transactions when you re-attach it elsewhere. For this reason, the ‘detach’ is possible only when there are no ongoing transactions: you have to put the tablespace READ ONLY.

The user object metadata is stored in the database dictionary. Without it, the datafiles are just a bunch of bytes. You need the metadata to know what is a table or an index, and which one. So, with transportable tablespaces, a logical export/import remains for the metadata only. This was done with exp/imp when TTS was introduced and is now done with DataPump. Small metadata is moved logically; large data is moved physically.


Transportable tablespaces

TTS is faster than simple DataPump because data is moved physically, by moving the datafiles. TTS is more flexible than an RMAN duplicate because you can easily move a subset of a database. Because the metadata is still transported logically, and datafiles are compatible with newer versions, TTS can be done cross-version, which makes it a nice way to migrate and upgrade. It is also used for tablespace point-in-time recovery, where you have to recover to an auxiliary instance and then transport the tablespace to the target.
TTS is also used to move data quickly from an operational database to a datawarehouse ODS.
It is also a good way to publish and share a database in read-only mode, on a DVD for example.
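
To give an idea of the workflow, here is a minimal sketch of a transport (the tablespace name, paths and file names are illustrative):

# 1. make the tablespace self-consistent: no ongoing transactions
$ echo "alter tablespace users read only;" | sqlplus -s / as sysdba
# 2. export the metadata only
$ expdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_tablespaces=USERS
# 3. copy the datafiles and the dump file to the target server, then plug the tablespace in
$ impdp system directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_datafiles='/u02/oradata/TARGET/users01.dbf'
# 4. and open it for writes on the target
$ echo "alter tablespace users read write;" | sqlplus -s / as sysdba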

And beyond

Except for the move to DataPump for the metadata transfer, TTS did not change a lot until 12c. In 12.1 you have full transportable export/import, which automates the operations when you want to move a whole database. This can be used to migrate from non-CDB to the multitenant architecture.

With multitenant, pluggable databases are an extension of TTS. Because the user metadata comes with the PDB system tablespaces, you don’t need to export it logically anymore: you transport the whole PDB. That’s the first restriction relieved. The second restriction, the need for read only, will be relieved as well when the UNDO is local to the PDB, and I don’t think I disclose any secret when telling you that local UNDO has been announced for 12.2.

OTN Appreciation Day

This was my contribution to the "EOUC Database ACES Share Their Favorite Database Things" session at Oracle Open World 2016, organized by Debra Lilley. Tim Hall's idea of an "OTN Appreciation Day" comes from that. You still have time to contribute to this day. No need for long posts – I always write a bit more than what I plan to anyway. The "rules" for this day are described at oracle-base.com.


This article OTN Appreciation Day : Transportable tablespaces appeared first on Blog dbi services.


Subscribe to Oracle FAQ aggregator