Feed aggregator

locking explain plan table

Tom Kyte - Thu, 2018-05-31 04:46
Hi Tom, while creating explain plan for query, explain plan table is getting locked until commit or rollback. If we don't want to commit / rollback, is it a problem that explain plan table stays locked? also while it is locked we don't have any prob...
Categories: DBA Blogs

crsctl and background processes of rac

Tom Kyte - Thu, 2018-05-31 04:46
Hi Tom, Which background process or any process works to retrieve the result of command "crsctl stat res -t" Regards, S.Alam
Categories: DBA Blogs

Adding a Documentum Extension to gawk, part II

Yann Neuhaus - Thu, 2018-05-31 04:41

This is part II of the article “Adding a Documentum Extension to gawk”. You can find Part I here
Before we can test the extension, we need a test program and some helper functions for comfort. Let’s prepare them.
o  Move back to the dmgawk directory and edit DctmAPI.awk, the wrapper functions;

$ cd ../../..
$ pwd
$ vi DctmAPI.awk

o  Cut and paste the lines below:

@load "dctm"

# provides high-level functions to do Documentum stuff;
# set the environment variable $DOCUMENTUM to where the dmcl.ini file is and $LD_LIBRARY_PATH to the location of libdmcl40.so;
# this library file is made available to gawk scripts by @include-ing it;
# the environment variable AWKPATH can be set to the path containing the included awk sources;
# Cesare Cervini
# dbi-services.com
# 5/2018

BEGIN {
   dmLogLevel = 1
}

function dmShow(mesg) {
# displays the message mesg if allowed
   if (dmLogLevel > 0)
      print mesg
}

function dmConnect(docbase, user_name, password) {
# connects to given docbase as user_name/password;
# returns a session id if OK, an empty string otherwise
   dmShow("in connect(), docbase = " docbase ", user_name = " user_name ", password = " password)
   session = dmAPIGet("connect," docbase "," user_name "," password)
   if (!session) {
      print "unsuccessful connection to docbase " docbase " as user " user_name
      return ""
   }
   else {
      dmShow("successful session " session)
      dmShow(dmAPIGet("getmessage," session))
   }
   dmShow("exiting connect()")
   return session
}

function dmExecute(session, dql_stmt) {
# execute non-SELECT DQL statements;
# returns 1 if OK, 0 otherwise;
   dmShow("in dmExecute(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   dmAPIGet("getmessage," session)
   return 1
}

function dmSelect(session, dql_stmt, attribute_names) {
# execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
# attribute_names is a list of attributes to extract from the result set;
# return 1 if OK, 0 otherwise;
   dmShow("in dmSelect(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   s = ""
   nb_attrs = split(attribute_names, attributes_tab, " ")
   for (attr = 1; attr <= nb_attrs; attr++)
      s = s "[" attributes_tab[attr] "]\t"
   print s
   resp_cntr = 0
   while (dmAPIExec("next," session "," query_id) > 0) {
      s = ""
      for (attr = 1; attr <= nb_attrs; attr++) {
         value = dmAPIGet("get," session "," query_id "," attributes_tab[attr])
         if ("r_object_id" == attributes_tab[attr] && !value) {
            dmShow(dmAPIGet("getmessage," session))
            return 0
         }
         s = s "[" (value ? value : "NULL") "]\t"
      }
      resp_cntr += 1
      dmShow(sprintf("%d: %s", resp_cntr, s))
   }
   dmShow(sprintf("%d rows iterated", resp_cntr))

   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   return 1
}

function dmDisconnect(session) {
# closes the given session;
# returns 1 if no error, 0 otherwise;
   dmShow("in dmDisconnect()")
   status = dmAPIExec("disconnect," session)
   if (!status)
      dmShow("Exception in dmDisconnect():")
   dmShow("exiting disconnect()")
   return status
}

o  Ditto for the test program tdctm.awk;
$ vi tdctm.awk

# test program for DctmAPI.awk and the interface dctm.c;
# Cesare Cervini
# dbi-services.com
# 5/2018

@include "DctmAPI.awk"

BEGIN {
   dmLogLevel = 1

   status = dmAPIInit()
   printf("dmAPIInit(): %d\n", status)
   if (status)
      print("dmAPIInit() was successful")
   else
      print("dmAPIInit() was not successful")

   printf "\n"
   session = dmConnect("dmtest", "dmadmin", "dmadmin")
   printf("dmConnect: session=%s\n", session)
   if (!session) {
      print("no session opened, exiting ...")
      exit(1)
   }

   printf "\n"
   dump = dmAPIGet("dump," session ",0900c35080008107")
   print("object 0900c35080008107 dumped:\n" dump)

   printf "\n"
   stmt = "update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'"
   status = dmExecute(session, stmt)
   if (status)
      print("dmExecute [" stmt "] was successful")
   else
      print("dmExecute [" stmt "] was not successful")

   printf "\n"
   stmt = "select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document"
   status = dmSelect(session, stmt, "r_object_id object_name owner_name acl_domain acl_name")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   stmt = "select count(*) from dm_document"
   status = dmSelect(session, stmt, "count(*)")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   status = dmDisconnect(session)
   if (status)
      print("successfully disconnected")
   else
      print("error while disconnecting")

   printf "\n"
   status = dmAPIDeInit()
   if (status)
      print("successfully deInited")
   else
      print("error while deIniting")
}


o  Let’s test now!
first, set the needed variables for the Documentum environment, if not already done;
$ export DOCUMENTUM=/home/dmadmin/documentum
$ export LD_LIBRARY_PATH=$DOCUMENTUM/product/7.3/bin

o  Then, we need to tell the new gawk where to find the dynamic extension and the awk wrapper;
the environment variables are AWKLIBPATH and AWKPATH respectively;
let’s define those variables on-the-fly for a change:

$ AWKLIBPATH=gawk-4.2.1/extension/.libs AWKPATH=./ gawk-4.2.1/gawk -f ./tdctm.awk

o  Produced output:

dmAPIInit(): 1
dmAPIInit() was successful
in connect(), docbase = dmtest, user_name = dmadmin, password = dmadmin
successful session s0
[DM_SESSION_I_SESSION_START]info: "Session 0100c3508000d6d6 started for user dmadmin."
exiting connect()
dmConnect: session=s0
object 0900c35080008107 dumped:
object_name : 4/10/2018 20:09:28 dm_DBWarning
title : Result of dm_method(dm_DBWarning) with status code (0)
subject : Result of dm_method(dm_DBWarning) with command line: ./dmbasic -f../install/admin/mthd4.ebs -eDBWarning -- -docbase_name dmtest.dmtest -user_name dmadmin -job_id 0800c3508000035f -method_....
authors []:
keywords []:
resolution_label :
owner_name : dmadmin
owner_permit : 7
group_name : docu
group_permit : 5
world_permit : 3
log_entry :
acl_domain : dmadmin
acl_name : dm_4500c35080000101
language_code : FR
r_object_type : dm_document
r_creation_date : 4/10/2018 20:09:41
r_modify_date : 5/13/2018 22:09:06
r_modifier : dmadmin
r_access_date : nulldate
r_composite_id []:
r_composite_label []:
r_component_label []:
r_order_no []:
i_is_replica : F
i_vstamp : 200
in dmExecute(), dql_stmt=update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'
dmExecute [update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'] was successful
in dmSelect(), dql_stmt=select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document
[r_object_id] [object_name] [owner_name] [acl_domain] [acl_name] 1: [0900c350800001d0] [Default Signature Page Template] [dmadmin] [dmadmin] [dm_4500c35080000101] 2: [6700c35080000100] [CSEC Plugin] [dmadmin] [dmadmin] [dm_4500c35080000101] 3: [6700c35080000101] [Snaplock Connector] [dmadmin] [dmadmin] [dm_4500c35080000101] 4: [0900c350800001ff] [Blank Word 2007 / 2010 Document] [dmadmin] [dmadmin] [dm_4500c35080000101] 5: [0900c35080000200] [Blank Word 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101] 6: [0900c35080000201] [Blank Word 2007 / 2010 Macro-enabled Document] [dmadmin] [dmadmin] [dm_4500c35080000101] 7: [0900c35080000202] [Blank Word 2007 / 2010 Macro-enabled Template] [dmadmin] [dmadmin] [dm_4500c35080000101] 8: [0900c35080000203] [Blank Excel 2007 / 2010 Workbook] [dmadmin] [dmadmin] [dm_4500c35080000101] 9: [0900c35080000204] [Blank Excel 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101] 10: [0900c350800001da] [11/21/2017 16:31:10 dm_PostUpgradeAction] [dmadmin] [dmadmin] [dm_4500c35080000101] ...
885: [0900c35080005509] [11/30/2017 20:11:05 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101] 886: [0900c3508000611d] [12/11/2017 19:40:05 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101] 887: [0900c35080006123] [12/11/2017 20:10:08 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101] 888: [0900c3508000612d] [12/11/2017 20:30:07 dm_UpdateStats] [dmadmin] [dmadmin] [dm_4500c35080000101] 888 rows iterated
dmSelect [select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document] was successful
in dmSelect(), dql_stmt=select count(*) from dm_document
[count(*)] 1: [888] 1 rows iterated
dmSelect [select count(*) from dm_document] was successful
in dmDisconnect()
exiting disconnect()
successfully disconnected
successfully deInited

That’s it, enjoy your new dmgawk!

Next steps

Now that the extension has proved usable, it could be interesting to deploy it system-wide. This can be done through the generated Makefile at the root directory of the package, as follows:

$ sudo make install

The loadable extensions are stored in /usr/local/lib/gawk:

$ ll /usr/local/lib/gawk
total 504
-rwxr-xr-x 1 root root 124064 May 13 16:38 filefuncs.so
-rwxr-xr-x 1 root root 26920 May 13 16:38 time.so
-rwxr-xr-x 1 root root 31072 May 13 16:38 dctm.so

As this is the standard place to look for them, there is no need to specify AWKLIBPATH anymore.
There is also a standard place for awk utility scripts, /usr/local/share/awk. If DctmAPI.awk is installed there, it becomes available to anybody from anywhere on the system and setting AWKPATH is not needed anymore. In addition, if /usr/local/bin is in your $PATH, the newly extended gawk is accessible directly, along with its extensions and its ancillary scripts, e.g.:
$ gawk -f dmgawk/tdctm.awk
You may also want to define an alias dmawk pointing to the extended gawk:
$ alias dmawk=/usr/local/bin/gawk
or you may want to name it dmgawk so you still can access the original dmawk:
$ alias dmgawk=/usr/local/bin/gawk

I’m not finished yet!

I started this mini-project on a 32-bit Ubuntu VM with the Documentum v5.3 libdmcl40.so library. Once it proved working, I replayed all the steps on a 64-bit Ubuntu VM with the Documentum v7.3 binaries. There, as already noticed in the past, the library libdmcl40.so kept crashing the executable that loads it at run-time, gawk here, so I successfully resorted to libdmcl.so, the one that calls java code behind the scenes through JNI. It is a bit slower at start up but at least does work as expected. It is very likely that the ancient libdmcl40.so is going to be retired in some future release of the content server, so be prepared to switch.
You may remember that I edited the automake Makefile.am to have the extension compiled, and that I had to relink said extension manually with the libdmcl.so library. A better way would be to provide it with its own makefile to automate those steps. This way, no original gawk source file would be altered.

Conclusion, finally

This test program and wrapper are actually the gawk counterparts of the Documentum extension for python I wrote earlier for this blog (check it here). With these 2 extensions, an administrator no longer has to be stuck with a limited and, even worse, stagnant tool. The awk language is easy and well suited to the kind of work administrators do with Documentum. Its gawk implementation is powerful, open, very well maintained and, as shown, easily extensible, so there is no reason not to use it.
For licensing reasons, the interface cannot be distributed by Documentum but anybody can install it privately without restriction.


The article “Adding a Documentum Extension to gawk, part II” appeared first on Blog dbi services.

Adding a Documentum Extension to gawk, part I

Yann Neuhaus - Thu, 2018-05-31 04:41

Recently, I was searching my NAS for some files which, to end this intolerable suspense, I did not find, but on the other hand I did stumble across a couple of interesting dmawk scripts I had written for a customer more than 20 years ago. Some statements looked a bit odd, e.g. access to elements of multi-dimensional arrays such as “a[i1][i2]”, or “delete A” to empty an array (instead of the well-known awk idiom split("", A)). After a while, I understood what was going on: the dmawk that came with content server v3.x was based on the GNU dialect of awk named gawk. It was already a more powerful interpreter than the standard AT&T awk available on the Unix I was using, namely HP-UX, and nowadays it has become even better, as you can see for yourself by checking the official manual here.

At that time, it already allowed itself to be extended, which Documentum took advantage of by turning gawk into a DMCL client, dmawk. However, it was quite a tedious task because it required hacking deep into the source code. When, years later, I was playing with this and trying to add Oracle connectivity to gawk (and turn it into an Oracle OCI-based client, oragawk ;-), I remember for instance one step that required the symbol table to be edited in order to add the new functions, and possibly their implementation code inserted in the bison file; finally, the whole gawk source had to be recompiled and relinked. Tedious, yet bearable, as it didn’t prevent passionate people from forking custom versions with useful new functionality such as xml processing in xgawk.

Over the years, starting with v4.1.1, gawk has gained a new, much easier mechanism for adding extensions (aka plugins). It is named dynamic extension (see the documentation here); it lets one load shared libraries at run-time and invoke their functions from within a gawk script through a minimal interface to be provided; the other way around, i.e. callbacks, is also possible for the brave, or really motivated, ones. Several powerful extensions have been developed through this system, such as: json serialization from/to an associative array (useful e.g. to prepare parameters to pass to javascript functions such as the ones in the HighChart or flot libraries), arbitrary-precision integer and floating point support, postgresql database access, etc. (for a list of current extensions, see here). If you have a recent gawk compiled with the MPFR extension, try for example the command below:
gawk -M -v prec=16000 'BEGIN{PREC = prec; printf("a big fp number:\n%f\n\n", 3.0**10**4); n = 3**10**4; print "a large integer number:\n" n; print "it has " length(n) " digits"; exit}'
Impressive, isn’t it?
To know whether your local gawk has the option, ask it:
gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

This one has it.

So why hasn’t Documentum kept up with gawk, going for the standard, lesser awk instead? Maybe because of a licensing issue? Gawk is protected by the GNU GPL (see its full text in the gawk manual above), which mandates that not only its modified source code but also all extension code statically or dynamically linked with it be provided in source code form and included in the delivered package, and maybe this was disturbing at the time. Now that open source is omnipresent, from O/Ses to RDBMS, from network software to WEB development frameworks, from ERP to CRM software, it is so common that even a restrictive license such as the GPL does not shock anyone today. Linux itself, or GNU/Linux as some like to call it, contains a massive amount of GPL-ed software, linux per se being limited to the kernel, and is under the GPL itself. Thus, in the new extension mechanism, the GPL requires publishing the source code of not only the interface to be linked with gawk but also of the proprietary shared library libdmcl40.so (see the definition of Corresponding Source) that is loaded at run-time; but wait: since libdmcl40.so tightly uses other proprietary Documentum libraries, their source code would also need to be published, and so on, transitively. Obviously, this licensing is by design not favorable to closed source, which is exactly the raison d’être of the FSF (aka the Free Software Foundation). If Documentum chose not to go the GPL way at that time with the simpler EDMS, it is very unlikely that it will in the future, unless it drastically changes its business model and becomes, say, the Red Hat of document management!

Fortunately, there are no reasons to hold one’s breath until that day for I propose here a simple, straightforward interface between gawk and the Documentum libdmcl40.so/libdmcl.so run-time libraries which will not violate the GPL as nothing closed source is distributed. Its usage is exactly the same as documented in the API reference manual. I’ll show you hereafter how to prepare and use it. Since this article will be quite long, I’ll split it in two parts. Part I, you’re reading it, presents the interface and explains how to compile it as a gawk extension. Part II here will deal with a wrapper around that interface providing a few helper functions and a test program to show how to use the extension.


o  Check the currently installed version of gawk;

$ gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

o  This is an old release; let’s take this opportunity to upgrade to the current latest, 4.2.1;
    prepare a working directory; the name “dmgawk” fits well;
$ mkdir ~/dmgawk; cd ~/dmgawk

o  Let’s get the latest source code of gawk, it’s the release 4.2.1;
$ wget http://ftp.gnu.org/gnu/gawk/gawk-4.2.1.tar.gz
$ tar -xpvzf gawk-4.2.1.tar.gz
$ cd gawk-4.2.1

o  compile it;
$ ./configure
$ make

o  Check the new version;
$ ./gawk --version
GNU Awk 4.2.1, API: 2.0
Copyright (C) 1989, 1991-2018 Free Software Foundation.

o  Note that the MPFR and GMP extensions were not linked in; that’s because those libraries were not present on my system;
    to have them, just download, compile and install them, and then run gawk’s ./configure and make again;
    fine so far; we’ll extend this local version; whether it will or can be installed system-wide is up to you;
    move to the extensions directory;
    edit the automake Makefile.am and add the highlighted references below to the new interface, let’s call it dctm.c;
$ cd extension
$ vi Makefile.am
pkgextension_LTLIBRARIES = \
	filefuncs.la \
	time.la \
	dctm.la
...
time_la_SOURCES = time.c
time_la_LIBADD = $(MY_LIBS)
dctm_la_SOURCES = dctm.c
dctm_la_LIBADD = $(MY_LIBS)

o  save and quit; that’s all for the make file;
    let’s edit the interface dctm.c now and insert the code below;
$ vi dctm.c

/*
 * dctm.c - Builtin functions that provide an interface to Documentum's dmapp.h;
 * see dmapp.h for a description of the functions;
 * Cesare Cervini
 * dbi-services.com
 * 5/2018
 * go to .libs and link dctm.o with the Documentum library: gcc -o dctm.so -shared dctm.o path-to-the-shared-library/libdmcl40.so;
 */
#include <config.h>

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/stat.h>

#include "gawkapi.h"

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

/* make it point to the Documentum dmapp.h on your system */
#include "/home/dmadmin/documentum/share/sdk/include/dmapp.h"

static const gawk_api_t *api;	/* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "dctm extension: version 1.0";
static awk_bool_t (*init_func)(void) = NULL;

int plugin_is_GPL_compatible;

/*  do_dmAPIInit */
static awk_value_t *
do_dmAPIInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIDeInit */
static awk_value_t *
do_dmAPIDeInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIDeInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIExec */
static awk_value_t *
do_dmAPIExec(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   unsigned int ret = 0;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      ret = dmAPIExec(str.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIExec: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/*  do_dmAPIGet */
static awk_value_t *
do_dmAPIGet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   char *got_value = NULL;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      got_value = dmAPIGet(str.str_value.str);
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIGet: called with inappropriate argument(s)"));

   /* dmAPIGet() returns NULL on error; guard strlen() against it */
   make_const_string(got_value == NULL ? "" : got_value, got_value == NULL ? 0 : strlen(got_value), result);
   return result;
}

/*  do_dmAPISet */
static awk_value_t *
do_dmAPISet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str1;
   awk_value_t str2;
   unsigned int ret = 0;

   assert(result != NULL);

   /* the second argument must be fetched at index 1, not 0 */
   if (get_argument(0, AWK_STRING, & str1) && get_argument(1, AWK_STRING, & str2)) {
      ret = dmAPISet(str1.str_value.str, str2.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPISet: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/* these are the exported functions along with their min and max arities; */
static awk_ext_func_t func_table[] = {
	{ "dmAPIInit",   do_dmAPIInit, 0, 0, awk_false, NULL },
	{ "dmAPIDeInit", do_dmAPIDeInit, 0, 0, awk_false, NULL },
	{ "dmAPIExec",   do_dmAPIExec, 1, 1, awk_false, NULL },
	{ "dmAPIGet",    do_dmAPIGet, 1, 1, awk_false, NULL },
	{ "dmAPISet",    do_dmAPISet, 2, 2, awk_false, NULL },
};

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, dctm, "")

o  Again, run configure and build the extensions;
$ pwd
$ ./configure
$ make

o  The extensions’ object files and shared libraries are actually contained in the hidden .libs directory below extension; so move there and check the generated dctm.so library;

$ cd .libs
$ ldd dctm.so
linux-vdso.so.1 => (0x00007ffc1d7e5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3452930000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3452efd000)

o  The library still has no reference to libdmcl.so; let’s link the two and check again;
    find the Documentum library on your system; on mine, it’s in /home/dmadmin/documentum/product/7.3/bin;
$ gcc -o dctm.so -shared dctm.o /home/dmadmin/documentum/product/7.3/bin/libdmcl.so
$ ldd dctm.so
linux-vdso.so.1 => (0x00007fff55dbf000)
/home/dmadmin/documentum/product/7.3/bin/libdmcl.so (0x00007fece91bb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fece8df1000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fece8bb9000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fece899c000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fece861a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fece8311000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fece80fb000)
/lib64/ld-linux-x86-64.so.2 (0x00007fece95ca000)

Good: so far, dctm.so has been linked with libdmcl.so and is thus able to access the dmAPI*() functions. gawk lets us load dynamic extensions in two ways:
1. the statement @load “name_of_extension” inserted in the client awk script;
2. the -l | --load “name_of_extension” command-line option.
Please turn now to part II here for the rest of this article.


The article “Adding a Documentum Extension to gawk, part I” appeared first on Blog dbi services.

Rittman Mead at Kscope 2018

Rittman Mead Consulting - Thu, 2018-05-31 02:20
Rittman Mead at Kscope 2018

Kscope 2018 is just a week away! Magnificent location (Walt Disney World Swan and Dolphin Resort) for one of the best tech conferences of the year! The agenda is impressive (look here) spanning over ten different tracks from the traditional EPM, BI Analytics and Data Visualization, to the newly added Blockchain! Plenty of great content and networking opportunities!

I'll be representing Rittman Mead with two talks: one about Visualizing Streams (Wednesday at 10:15 Northern Hemisphere A2, Fifth Level) on how to build a modern analytical platform including Apache Kafka, Confluent's KSQL, Apache Drill and Oracle's Data Visualization (Cloud or Desktop).

Rittman Mead at Kscope 2018

During the second talk, titled DevOps and OBIEE:
Do it Before it's Too Late!
(Monday at 10:45 Northern Hemisphere A1, Fifth Level), I'll be sharing details, based on our experience, on how OBIEE can be fully included in a DevOps framework, what's the cost of "avoiding" DevOps and automation in general and how Rittman Mead's toolkits, partially described here, can be used to accelerate the adoption of DevOps practices in any situation.

Rittman Mead at Kscope 2018

If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.

Categories: BI & Warehousing

OpenShift on my Windows 10 laptop with MiniShift

Yann Neuhaus - Thu, 2018-05-31 01:23

If you want to play with OpenShift on your laptop, you can, in a Virtual Machine. I have VirtualBox installed on my laptop. I’ll install Minishift here, which will create the VM to run OpenShift with only a few simple commands. On Linux you can refer to Daniel’s post. Here is the Windows version. Oh, and Daniel did that to run Postgres, but my goal is to run an Oracle container, of course. Or maybe MySQL.

I’ve downloaded minishift-1.18.0-windows-amd64.zip and unzipped it in D:\Downloads\minishift-1.18.0-windows-amd64 where I have minishift.exe


I configure to use VirtualBox

minishift config set vm-driver virtualbox

It is installed in my Windows profile:


Be careful: minishift does not know that we have multiple drives in Windows. When I was running minishift.exe from the D: drive, it was not able to find the virtual machine’s files, which were on C:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.9.0'
40.81 MiB / 40.81 MiB [===================================================================================] 100.00% 0s
-- Downloading OpenShift v3.9.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ..... FAIL E0529 17:08:31.056327 1448 start.go:391] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.

Then, I changed to the C: drive

C:\Users\fpa>dir \Users\fpa\.minishift\cache\iso\b2d\v1.3.0\
Volume in drive C is OS
Volume Serial Number is 26AE-33F7
Directory of C:\Users\fpa\.minishift\cache\iso\b2d\v1.3.0
29-May-18 15:22 .
29-May-18 15:22 ..
29-May-18 15:22 41,943,040 minishift-b2d.iso
1 File(s) 41,943,040 bytes
2 Dir(s) 30,652,370,944 bytes free

And I run minishift from there:

C:\Users\fpa>D:minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM .............................. OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
Importing 'openshift/origin:v3.9.0' . CACHE MISS
Importing 'openshift/origin-docker-registry:v3.9.0' . CACHE MISS
Importing 'openshift/origin-haproxy-router:v3.9.0' . CACHE MISS
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ...................................... OK
-- Starting OpenShift cluster ........................
Using Docker shared volumes for OpenShift volumes
Using public hostname IP as the host IP
Using as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
The server is accessible via web console at:
You are logged in as:
User: developer
To login as administrator:
oc login -u system:admin


That’s all. I have a new VM in VirtualBox with its main files in C:/Users/fpa/.minishift


The VM boots on the Boot2Docker iso, which is the way to run Docker on Windows without enabling Hyper-V. The first network interface is NAT, for internet access. The second one has a DHCP IP from

You can control the VM with minishift (start, stop, configure, ssh,…):

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift ssh
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802

We have everything running in containers here:

docker@minishift:/mnt/sda1$ docker ps
77c0ef5a80d7 50c7bffa0653 "/usr/bin/openshift-r" 8 minutes ago Up 8 minutes k8s_router_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
10fb6a2a6b70 9b472363b07a "/bin/sh -c '/usr/bin" 8 minutes ago Up 8 minutes k8s_registry_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
2f6b90fb0bb4 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
3c720d166989 fae77002371b "/usr/bin/origin-web-" 8 minutes ago Up 8 minutes k8s_webconsole_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
bb5870fc0b7e openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
95663bf29487 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
5fc022dbe112 openshift/origin:v3.9.0 "/usr/bin/openshift s" 8 minutes ago Up 8 minutes origin

But we should not have to connect to this machine.

The minishift executable can be used to control anything. As I have docker client installed on my laptop (the Docker Toolbox) I can get the environment variables:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift docker-env
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i
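Those SET lines are cmd.exe syntax. From a POSIX shell (Git Bash, for example) the same output can be turned into export statements - here is a small sketch, using sample values rather than my real ones:

```shell
# Sketch: convert the cmd.exe "SET var=value" lines printed by
# `minishift docker-env` into POSIX "export" statements.
# The values below are examples, not the real output.
docker_env='SET DOCKER_TLS_VERIFY=1
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs'

posix_env=$(printf '%s\n' "$docker_env" | sed 's/^SET \([^=]*\)=\(.*\)$/export \1="\2"/')
printf '%s\n' "$posix_env"
```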

and see, from Windows, the docker images that are in the VM:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>where docker
C:\Program Files\Docker Toolbox\docker.exe
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker images
openshift/origin-web-console v3.9.0 fae77002371b 12 days ago 495.1 MB
openshift/origin-docker-registry v3.9.0 9b472363b07a 12 days ago 464.9 MB
openshift/origin-haproxy-router v3.9.0 50c7bffa0653 12 days ago 1.283 GB
openshift/origin-deployer v3.9.0 e4de3cb64af9 12 days ago 1.261 GB
openshift/origin v3.9.0 83ec0170e887 12 days ago 1.261 GB
openshift/origin-pod v3.9.0 b6d2be1df9c0 12 days ago 220.1 MB

While I’m there, I can run whatever I want as a docker container. Let’s try with Oracle.

I need to login to the Docker store (where I have accepted the license conditions)

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pachot
Login Succeeded

Let’s pull the Oracle ‘slim’ image:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker pull store/oracle/database-enterprise: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Downloading [> ] 1.076 MB/83.31 MB
9d3556e8e792: Downloading [> ] 535.3 kB/151 MB
fc60a1a28025: Download complete
0c32e4ed872e: Download complete
b465d9b6e399: Waiting

And run it:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker run -it --rm --name orcl store/oracle/database-enterprise:
Setup Oracle Database
Oracle Database Setup
Wed May 30 20:16:56 UTC 2018
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log
untarDB.sh is done at 38 sec

You may find that funny, but the Oracle images in the Docker store contain only a tarball of the Oracle Home and a pre-created database. Just the time to untar those and run the instance, and after 2 minutes I have my database ready. Everything is untar-ed to the volume, including the software.

Here are the container ports and where they are redirected to:

C:\Users\fpa>SET DOCKER_HOST=tcp://
C:\Users\fpa>SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
C:\Users\fpa>docker port orcl
1521/tcp ->
5500/tcp ->
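Each of those lines normally ends with the host:port that the container port is mapped to. For scripting, the mapping can be extracted like this - the addresses below are hypothetical sample values, not my actual mapping:

```shell
# Sketch: pull out the host:port that Docker mapped to the listener
# port 1521, from (hypothetical) `docker port orcl` output.
port_output='1521/tcp -> 192.168.99.100:32769
5500/tcp -> 192.168.99.100:32768'

listener=$(printf '%s\n' "$port_output" | awk '$1 == "1521/tcp" { print $3 }')
echo "$listener"
# a connection string would then look like: sys/...@//$listener as sysdba
```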

Then, easy to connect with SQL*Net with the credentials provided (see the setup instructions)

C:\Users\fpa>sqlcl sys/Oradoc_db1@// as sysdba
SQLcl: Release 18.1.1 Production on Wed May 30 22:41:10 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release - Production
SQL> select instance_name,host_name from v$instance;
INSTANCE_NAME HOST_NAME
------------- ------------
ORCLCDB 254f96463883


So, that’s an easy way to run Oracle on Docker when you have VirtualBox. One download and 5 commands and I’m ready to connect. But that’s not the goal here: we have OpenShift to manage multiple Docker containers.

According to the ‘minishift start’ output I have a Web server on (user: system, password: admin)

It already contains a lot of databases:

They are really easy to use. In two clicks I’ve run a MySQL container:

If you don’t like the GUI, there’s the command line interface of OpenShift within the minishift cache:

C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe login
Authentication required for (openshift)
Username: system
Login successful.
You have one project on this server: "test"
Using project "test".
C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe status
In project test on server
svc/mysql -
dc/mysql deploys openshift/mysql:5.7
deployment #1 deployed 7 minutes ago - 1 pod

Now that I have OpenShift running and installed, I’ll be able to run Oracle and manage containers from there. That’s for the next post on this subject.


The article OpenShift on my Windows 10 laptop with MiniShift appeared first on the dbi services blog.

How to generate some big test tables and rapidly export their SQL data

Tom Kyte - Wed, 2018-05-30 10:26
Hello teams, Our Developer manager asks me to export Oracle production database's three big tables with CSV or this format "insert ... into ..." for giving another App teams (their database is MySQL 5.7) once again, I r...
Categories: DBA Blogs

Database Link between Oracle 10g and Oracle 12c

Tom Kyte - Wed, 2018-05-30 10:26
Dears, am trying to create a dblink between Oracle 10g and Oracle 12c the dblink is created successfully my case is as below when select data from a table directly over dblink give the result without problems but call a package or stored procedur...
Categories: DBA Blogs

expdp between two scn numbers

Tom Kyte - Wed, 2018-05-30 10:26
Hi Team, Can we run expdp command between two scn numbers? Thanks Krishna
Categories: DBA Blogs

I just want to know the difference between the below two queries.

Tom Kyte - Wed, 2018-05-30 10:26
Q: List the Emps who are senior to their own MGRS. select * from emp e,emp m where e.mgr = m.empno and e.hiredate < m.hiredate; select * from emp e,emp m where e.empno= m.mgr and e.hiredate > m.hiredate; the above two queries are getting s...
Categories: DBA Blogs

Fortune 500 and the Art of Execution

Abhinav Agarwal - Wed, 2018-05-30 07:53
The Fortune 500 Companies 2018 rankings came out last week, and browsing the list, the following random thoughts struck me about the list and the technology industry:

  • Walmart - you can be in a very, very traditional brick-and-mortar business (yes, they have been making inroads into e-commerce, but for the most part, Walmart is a traditional retailer), but as long as you keep doing things well, you can be in the top 10. Not only that, you can be the top-ranked company by revenues for a sixth year in a row. In this case, you can be numero uno, with annual revenues that top five-hundred billion dollars - $500 billion, be more than twice the size of the second-ranked company (Exxon Mobil is ranked second, with annual revenues of $244B), and also employ the second-highest number of employees (2.3 million).
  • Apple - you can be a mass-market luxury brand (yes, that is a contradiction in terms), sell only a handful of products (its Mac, iPhone, and iPad product lines bring in 79% of its revenues) and be in the top 10 - ranked fourth. You will also get to make the highest profits of any company - $48 billion. You also get to be the most highly valued company - at $922 billion.
  • Amazon - you can sell almost everything under the sun, sell it almost all online (its foray into physical stores and its acquisition of Whole Foods notwithstanding), employ the most employees of any company in America, be a $100 billion plus company, yet grow revenues by more than thirty percent (to $177 billion), and crack the top 10 - ranked eighth. You also get to be the second-most highly valued company on earth, at $765 billion.
  • Netflix - you can do only one thing (in this case, streaming video content on-demand and producing your own content), almost triple your profits (a 199% jump year-on-year), not be in the top 200, and yet deliver the best 10-year returns to shareholders (48%, annualized)!
  • The top five most valuable companies on the list are all technology companies - Apple, Amazon, Alphabet (the parent company of Google), Microsoft, and Facebook.
Bottom line? What is common across all these companies is a relentless focus on execution. Execution - a simple lesson to learn, yet incredibly difficult to practice. Flipkart, the Indian e-commerce giant in which Walmart (press release) bought a 77% stake for $16 billion, valuing the company at $22 billion, learned that the hard way, when it lost focus in its fight against Amazon.

Further suggested reading:

This is an expanded version of my LinkedIn post.

© 2018, Abhinav Agarwal. All rights reserved.

Index Bouncy Scan 3

Jonathan Lewis - Wed, 2018-05-30 07:15

This is a follow-up to a problem I had with yesterday’s example of using recursive CTEs to “bounce” along a multi-column index to pick out the unique set of combinations of the first two columns. Part of the resulting query used a pair of aggregate scalar subqueries in a select list – and Andrew Sayer improved on my query by introducing a “cross apply” (which I simply hadn’t thought of) which the optimizer transformed into a lateral view (which I had thought of, but couldn’t get to work).

After seeing what Andrew and the optimizer had done I looked a little more closely at my lateral view experiment and modified it so that it worked. Here are the three critical versions of the relevant code fragment; first is my original code, then Andrew’s cross apply, then my working lateral view version:

select
        (select min(t1.val1) val1 from t1 where t1.val1 > bounce1.val1) val1,
        (select min(t1.val2) val2 from t1 where t1.val1 > bounce1.val1 and rownum = 1) val2
from    bounce1
where   bounce1.val1 is not null
select
        ca.val1 ,ca.val2
from    bounce1
cross  apply (select val1, val2
              from  (select /*+ index(t1) no_index_ffs(t1) */
                             val1, val2
                     from    t1
                     where   t1.val1 > bounce1.val1
                     and     rownum = 1
                    )
             ) ca
where  bounce1.val1 is not null

select
        ca.val1 ,ca.val2
from    bounce1, 
        lateral(select val1, val2
                from  (select /*+ index(t1) no_index_ffs(t1) */
                               val1, val2
                       from    t1
                       where   t1.val1 > bounce1.val1
                       and     rownum = 1
                      )
               ) ca
where  bounce1.val1 is not null

All I’ve done to modify Andrew’s code is put a comma after the table (actually CTE) bounce1, then change “cross apply” to “lateral”. Compare the resulting text with the following lateral version that doesn’t work:

select
        ca.val1 ,ca.val2
from    bounce1, 
        lateral (
                   select /*+ index(t1) no_index_ffs(t1) */
                             val1, val2
                     from    t1
                     where   t1.val1 > bounce1.val1
                     and     rownum = 1
             ) ca
where  bounce1.val1 is not null

To get from not working to working all I’ve done is wrap the text in my lateral() subquery inside one more (apparently redundant) layer of “select * from ()”!

In fact my assumption that my code wasn’t working was incorrect – what was really going on was that the code I had written was producing the wrong results but I thought that I had made a mistake in the way I was writing it and couldn’t figure out what I had done wrong.

Problem Solving:

To get a better idea of what’s going on, I took a closer look at the execution plans. Here are the plans (main body only) for the two variants of using the lateral() view – the first from the SQL with the “redundant” select, the second as I originally wrote it. Notice that the number of rows (A-Rows) returned in the first case is the 30 expected while in the second case it’s only 10.

| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT                             |                 |      1 |        |   125 (100)|     30 |00:00:00.01 |      40 |     28 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 |   125   (3)|     30 |00:00:00.01 |      40 |     28 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 |   124   (2)|     30 |00:00:00.01 |      40 |     28 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |     33 |00:00:00.01 |      40 |     28 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |    61   (2)|      3 |00:00:00.01 |       8 |      4 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |      3 |00:00:00.01 |       8 |      4 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |                 |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |      1 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |                 |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 |      1 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK           |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |      1 |       |       |          |
|   9 |       NESTED LOOPS                           |                 |      3 |      1 |    31   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  10 |        RECURSIVE WITH PUMP                   |                 |      3 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
|  11 |        VIEW                                  | VW_LAT_1BBF5C63 |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  12 |         VIEW                                 |                 |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|* 13 |          COUNT STOPKEY                       |                 |      3 |        |            |      2 |00:00:00.01 |       6 |      3 |       |       |          |
|* 14 |           INDEX RANGE SCAN                   | T1_PK           |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  15 |     SORT AGGREGATE                           |                 |     30 |      1 |            |     30 |00:00:00.01 |      32 |     24 |       |       |          |
|  16 |      FIRST ROW                               |                 |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|* 17 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK           |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|  18 |     RECURSIVE WITH PUMP                      |                 |     11 |        |            |     30 |00:00:00.01 |       0 |      0 |       |       |          |

| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT                             |                 |      1 |        |   125 (100)|     10 |00:00:00.01 |      16 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 |   125   (3)|     10 |00:00:00.01 |      16 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 |   124   (2)|     10 |00:00:00.01 |      16 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |     11 |00:00:00.01 |      16 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |    61   (2)|      1 |00:00:00.01 |       4 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |      1 |00:00:00.01 |       4 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |                 |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |                 |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK           |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |
|   9 |       NESTED LOOPS                           |                 |      1 |      1 |    31   (0)|      0 |00:00:00.01 |       2 |       |       |          |
|  10 |        RECURSIVE WITH PUMP                   |                 |      1 |        |            |      1 |00:00:00.01 |       0 |       |       |          |
|* 11 |        VIEW                                  | VW_DCL_1BBF5C63 |      1 |      1 |     2   (0)|      0 |00:00:00.01 |       2 |       |       |          |
|* 12 |         COUNT STOPKEY                        |                 |      1 |        |            |      1 |00:00:00.01 |       2 |       |       |          |
|  13 |          INDEX FULL SCAN                     | T1_PK           |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|  14 |     SORT AGGREGATE                           |                 |     10 |      1 |            |     10 |00:00:00.01 |      12 |       |       |          |
|  15 |      FIRST ROW                               |                 |     10 |      1 |     2   (0)|      9 |00:00:00.01 |      12 |       |       |          |
|* 16 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK           |     10 |      1 |     2   (0)|      9 |00:00:00.01 |      12 |       |       |          |
|  17 |     RECURSIVE WITH PUMP                      |                 |     11 |        |            |     10 |00:00:00.01 |       0 |       |       |          |

Most importantly we can see that the optimizer has used two different transformations. For the working query we see the view name VW_LAT_xxxxxxxx at operation 11, this is Oracle implementing a lateral view; for the problem query we see the view name VW_DCL_xxxxxxxx at operation 11, which is Oracle implementing a transformation to a “decorrelated lateral view”.

My first test after noting this difference was to see what would happen when I added the hint /*+ no_query_transformation */ to the query: it resulted in the VW_DCL_xxxxxxxx view name changing to VW_LAT_xxxxxxxx and the query producing the right result. Andrew Sayer, on the ODC thread, then pointed out that he’d done a couple more experiments and used the /*+ no_decorrelate() */ hint, so I tried that with my query, adding it (with no parameters) to the subquery inside the lateral() clause – again the plan changed from using VW_DCL to VW_LAT and the results were correct.
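The hinted text itself isn't shown above, so for clarity this is roughly where the /*+ no_decorrelate */ hint goes in the problem query - a sketch based on the description, not the exact query from the thread:

```sql
select
        ca.val1, ca.val2
from    bounce1,
        lateral (
                select  /*+ no_decorrelate index(t1) no_index_ffs(t1) */
                        val1, val2
                from    t1
                where   t1.val1 > bounce1.val1
                and     rownum = 1
        ) ca
where   bounce1.val1 is not null
;
```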

Test Case

Bottom line on this – it looks like the optimizer is decorrelating a subquery when it shouldn’t, leading to wrong results. To make it easier to see this anomaly I stripped the original sample down to a basic test case starting with the table that I used in the previous posting:

rem     Script:         decorralate.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Last tested 
rem     -- via liveSQL

create table t1
segment creation immediate
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum-1,3)                 val1,
        mod(rownum-1,10)                val2,
        lpad('x',100,'x')               padding
from
        generator       v1
order by
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(val1, val2, id);

Now two versions of a simplified piece of code that should select the distinct values of val1 greater than the lowest value (each row in the UNION ALL of dual is emulating the way in which yesterday’s recursive CTE was effectively saying “this is a current known value, find the next higher”):

prompt  =============
prompt  Right results
prompt  =============

select
        v1.val1, v1.val2
from    (
        select  0 val1, 0 val2 from dual
        union all
        select 1,0 from dual
        union all
        select 2,0 from dual
        ) bounce1,
        lateral (
            select val1, val2 from (
              select  /*+ index(t1) no_index_ffs(t1) */
                      t1.val1, t1.val2
              from    t1
              where   t1.val1 > bounce1.val1
              and     rownum = 1
            )
        ) v1
;

prompt  ===========================================
prompt  Wrong results -- "redundant" select removed
prompt  ===========================================

select
        v1.val1, v1.val2
from    (
        select  0 val1, 0 val2 from dual
        union all
        select 1,0 from dual
        union all
        select 2,0 from dual
        ) bounce1,
        lateral (
            -- select val1, val2 from (
              select  /*+ index(t1) no_index_ffs(t1) */
                      t1.val1, t1.val2
              from    t1
              where   t1.val1 > bounce1.val1
              and     rownum = 1
            -- )
        ) v1
;

Here’s a cut-n-paste from running the two queries:

Right results

      VAL1       VAL2
---------- ----------
         1          0
         2          0

2 rows selected.

Wrong results  -- "redundant" select removed

no rows selected

Finally, to get an idea of what’s gone wrong – and to show that the optimizer has done something wrong when attempting to decorrelate – we can take a look at the optimizer trace file to see the final transformed SQL that the optimizer has produced a plan for. (I enabled the trace with the command “alter session set events ‘trace [rdbms.SQL_Transform.*]’;” to limit the trace to just the information about optimizer transformations.) This – cosmetically altered – is the final “unparsed” query:

select
        vw_dcl_a18161ff.val1 val1,
        vw_dcl_a18161ff.val2 val2 
from    ( 
                (select 0 val1 from sys.dual dual) 
                union all  
                (select 1 1 from sys.dual dual) 
                union all  
                (select 2 2 from sys.dual dual)
        ) bounce1, 
        (select  /*+ no_index_ffs (t1) index (t1) */ 
                t1.val1 val1_0,
                t1.val2 val2_1 
        from    test_user.t1 t1
        where   rownum = 1
        ) vw_dcl_a18161ff 
where   vw_dcl_a18161ff.val1 > bounce1.val1

As you can see, the lateral view has turned into a non-mergeable inline view which selects the first row available from t1 by following the supplied hints, and joins that single row result set to bounce1. I have a suspicion that lateral views which include rownum predicates should not be decorrelated. I have looked on MoS to see if I can find any bugs related to decorrelating lateral views, but either there are none or my search terms weren’t good enough.


utlrp weird behavior with INVAID objects in an Orace EBS database

Syed Jaffar - Wed, 2018-05-30 07:05
In one of the recent database migration & upgrade activities, I migrated an Oracle EBS database to a new DB host and upgraded it. After the migration and post-upgrade steps, utlrp.sql was run to validate the 100k invalid objects in the database.

Weirdly, during the utlrp execution, when the INVALID object count got down to around 200, it started to increase again, ultimately reaching the 100k number again. utlrp was stuck in a kind of loop and never exited.

As a workaround, I manually compiled all the invalid objects and then ran utlrp, which completed successfully. I have yet to work out what caused this behavior.
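For reference, the manual compilation went along these lines - a sketch of the usual approach, not the exact commands used here:

```sql
-- List what is still invalid ...
select owner, object_name, object_type
from   dba_objects
where  status = 'INVALID';

-- ... and recompile, either object by object
-- (MY_SCHEMA.MY_PACKAGE is a placeholder name):
alter package MY_SCHEMA.MY_PACKAGE compile body;

-- or in dependency order with the supplied package:
exec utl_recomp.recomp_serial
```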

Stay tuned for more updates on this post.

Traditional Export/Import Fail With ORA-01031 in DB Vault Environment Starting Release

Syed Jaffar - Wed, 2018-05-30 06:27
Oracle Database Vault was configured on a RAC database (v11.2.0.3), and appropriate permissions were given to allow regular duties such as Data Pump export/import.

However, when a classical/traditional import (of a dump file from a v9 database) was performed, the import failed with the following errors:

Import: Release - Production on Tue May 29 16:56:23 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

IMP-00058: ORACLE error 1031 encountered
ORA-01031: insufficient privilegesUsername: 

Data Pump jobs were executing successfully without any issues; the only problem is with traditional export/import. According to the Oracle notes below, this is expected behavior with DB Vault in place.

Classic Export/Import Fail With ORA-01031 in Database Vault Environment Starting Release (Doc ID 1419687.1)
Bug 13615338 - EXP/IMP disabled in Database Vault environment (Doc ID 13615338.8)

According to the notes,

Export and Import tools are no longer recommended to be used, especially in Database Vault environments.
When it is not possible to use data pump (for instance when the export has been created from a pre-10g database),
the patch 13615338 must be installed to allow the usage of classic export/import utilities.

The other option would be to upgrade the database to a higher release.

Will apply the patch and update this blog if the issue is fixed or not.

I’m Charlie Sinks and this is how I work

Duncan Davies - Wed, 2018-05-30 06:00

This week we’re featuring another of the mainstays from the PSAdmin.io slack community – Charlie Sinks. He is a regular in the community and a frequent guest on their podcast, giving UMRUG recaps and discussing automation. He also presented at the inaugural PSAdmin.conf on Implementing Rundeck with PeopleSoft (the recording is available here and is well worth an hour of your time). I’m thrilled that he has added his profile to our ‘How I Work‘ series.

charlie 2

Name: Charlie Sinks

Occupation: Sr. Software Engineer and PeopleSoft Admin at Andersen Windows
Location: Oak Park Heights, MN
Current computer:
At work: Dell Latitude E7470 laptop running Windows 10.
At home: a PC that I built from bits and pieces currently running Windows 7 and Ubuntu Linux
Current mobile devices: Motorola Moto G4 Plus and a cheap Timex wristwatch
I work: Fervently

What apps/software/tools can’t you live without?

  • PeopleSoft Deployment Packages (DPKs) and Puppet – I can’t even imagine deploying PeopleSoft environments and applying maintenance the old way anymore. It’s easy to get hung up on the things we wish the DPK did better, but it really opened the door to automation in the PS Admin world for those of us who weren’t already using Puppet, Ansible, etc.
  • Rundeck – the ability to execute tasks – from simple commands to complex orchestration – on any number of remote servers has helped us go a long way towards automation. For example: single-click CPU patching on all of our non-Production servers at the same time. Even for simple service restarts, it’s so much easier to do it from a dashboard in Rundeck.
  • BeyondCompare – It’s crazy how many file formats this thing can compare. Best compare utility I’ve come across.
  • Password Safe – Way better than remembering a hundred passwords, and way WAY better than keeping them all in a spreadsheet. Also, using a password manager encourages me to use very long, complex passwords since I’ll never have to remember or retype them. Password Safe is great at storing passwords for all kinds of accounts, but doesn’t have good web browser integration for website accounts. So for that I use…
  • Dashlane – password manager with great browser integration for website accounts. Not as great for any other type of account. For that, see above.
  • Notepad++ / Visual Studio Code – For years I swore by Notepad++ because of its great plugins, macros, and language support. I still think it’s great, but I just discovered Visual Studio Code. That has everything I like about NP++ plus more. Much more streamlined extension and language support, themes, and its integration with Git is phenomenal. And it lets me turn on Vim keymapping with a single keystroke! I have a new favorite now.
  • MobaXterm – a nice upgrade from using a terminal like PuTTY for administering Linux servers. I haven’t begun to use all of the features this tool has, but I was quick to discover that it comes with a suite of simple games. I’m not really sure why… but it’s best not to question some things.

Besides your phone and computer, what gadget can’t you live without?
My coffee-maker. I use an Aeropress at home, and a French press at work. I always have a coffee mug in my hand. I may have a problem.

What’s your workspace like?
If I’m being honest, a bit messy with Post-It notes all over the place and the occasional coffee-mug ring décor. I write Post-It notes to remind myself of all sorts of things and try to put them in obnoxious places where I can’t ignore them. Yet I still manage to ignore most of them… I guess I’m so good at tuning things out that I can’t even trick myself.

workstation2 2

What do you listen to while you work?
Blues, jazz, electronica, old-school hip-hop, skiffle – it really depends on my mood and what type of work I’m doing. A lot of Tom Waits lately.

What PeopleSoft-related productivity apps do you use?
Change Assistant (for our internal releases in addition to the rest of it), DPKs, and puppet. I also like the Chrome extensions PS Utilities and PSChrome.

Do you have a 2-line tip that some others might not know?
When you’re developing or testing, instead of clearing your cache hit Ctrl+Shift+N in Chrome to open Incognito mode.

That was one line, can I do another one? If you’re newly upgrading to PeopleSoft 9.2, take some time to build out some navigation collections and homepage tiles. It’s really easy, will make your life easier, and will impress your end-users. It’ll also limit how often you have to use the new Nav menu… ick.

What SQL/Code do you find yourself writing most often?
Effective Date joins. Amiright?

What would be the one item you’d add to PeopleSoft if you could?
Open-source development! At least for the DPKs. Come on, Oracle. Let us fix the things we complain about so we can stop bothering you about it.

What everyday thing are you better at than anyone else?
No matter how good you are at something, there’s always somebody who’s better at it… so this is hard to answer. I do have a really high tolerance for eating extremely spicy food.

How do you keep yourself healthy and happy?
Spending time with my wife and three kids, reading, and getting out in nature.

What’s the best advice you’ve ever received?
Treat others as you would like to be treated, and always consider the other person’s point of view. Along those lines is my favorite Jack Handey quote: “Before you criticize someone, walk a mile in their shoes. That way, you’ll be a mile from them, and you’ll have their shoes.”

Safely Upgrading to Oracle APEX 18.1

Dimitri Gielis - Wed, 2018-05-30 05:37
Oracle Application Express (APEX) 18.1 has been out now for a couple of days.

I typically don't wait long before doing the upgrade, as with every new release you get many new features I want to use. Also if you want to stay on top of the game, you just want to move as fast as you can. I typically start testing the Early Adopter releases and then when apex.oracle.com gets updated, I do more testing, but having it on your own system with applications that are used day-in-day-out is a different level.

So I thought to share how we update our environment in a safe way.

The first thing we do is put our maintenance pages up. We use an Apache Reverse Proxy in front of Apache Tomcat with ORDS, which is connected to the database. By specifying some ErrorDocuments, the maintenance pages are served the moment there's an error.

For example, you can add this to your httpd.conf:

ErrorDocument 404 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 500 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 503 https://s3.amazonaws.com/apexRnD/website/maintenance.html

When you update APEX you don't want any incoming connections, so we stop Apache Tomcat with ORDS. At that moment the Reverse Proxy gets an error and the ErrorDocument kicks in and serves the Maintenance page. This way if people want to use the system, they know we are working on it.
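The reverse-proxy side of that setup can be sketched as below - a minimal fragment, where the hostname, port and paths are assumptions for a typical single-box install:

```apache
# forward APEX traffic to Tomcat/ORDS on the same host (port is an assumption)
ProxyRequests Off
ProxyPass        /ords http://localhost:8080/ords
ProxyPassReverse /ords http://localhost:8080/ords
ProxyPass        /i    http://localhost:8080/i
ProxyPassReverse /i    http://localhost:8080/i
```

When Tomcat is stopped, the proxy can no longer reach the backend and the ErrorDocuments above take over.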

We use Oracle Database 12c container database and pluggable databases. We want to run different versions of APEX next to each other because we have to test APEX Office Print against all APEX releases. Our customers use different releases of Oracle APEX too, so when we do custom development we have to stick to their version, so we really need all supported APEX versions somewhere.

Our setup was like this before the APEX 18.1 upgrade:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1 (main - our most used one): apex_pdb

With every new major release of APEX we clone our main PDB and give it the name of the APEX release, so we keep the APEX release we are on.

The steps to clone a pluggable database in Oracle DB 12.1 (SQL*Plus or SQLcl):

alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open read only; 
create pluggable database APEX51_PDB from APEX_PDB file_name_convert=('/u01/app/oracle/oradata/cdb/APEX_PDB/','/u01/app/oracle/oradata/cdb/APEX51_PDB/') PATH_PREFIX='/u01/app/oracle/oradata/cdb/APEX51_PDB'; 
alter pluggable database apex51_pdb open; 
alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open;
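A quick sanity check after the clone, using a standard dictionary view:

```sql
-- confirm both PDBs exist and are open read/write
select name, open_mode from v$pdbs order by name;
```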

After the above we have a situation like this:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1: apex51_pdb
- PDB with APEX 5.1: apex_pdb  - will be upgraded to APEX 18.1 (main - our most used one)

Note: if you use Transparent Data Encryption (TDE) you have to perform some additional steps.

The installation of APEX 18.1 on the database side consists of basically 5 steps:
1) download the software from OTN
2) unzip in /tmp folder and cd into the /tmp/apex directory
3) run SQLcl or SQL*Plus as SYS as SYSDBA and connect to the apex_pdb container:
alter session set container=APEX_PDB;
4) run the apexins command

In my environment the script took about 23 minutes to complete.

Note: the APEX 18.1 installation runs in 3 phases; the wizard shows information and timings for each phase and, at the end, a global timing for the whole installation. If you want less downtime you can run the phases separately - see the doc Maximizing Uptime During an Application Express Upgrade

5) run the apex_rest_config command

The pluggable database is ready now and contains APEX 18.1.
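Spelled out, steps 3 to 5 look roughly like this - the tablespace arguments shown are the documented defaults, so adjust them for your environment:

```sql
-- from the /tmp/apex directory, in SQL*Plus or SQLcl connected as SYS AS SYSDBA
alter session set container = APEX_PDB;

-- step 4: install APEX (arguments: apex tablespace, files tablespace,
-- temp tablespace, image prefix)
@apexins.sql SYSAUX SYSAUX TEMP /i/

-- step 5: configure RESTful services (prompts for the APEX_LISTENER and
-- APEX_REST_PUBLIC_USER passwords)
@apex_rest_config.sql
```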

During the APEX upgrade and as we already have downtime, we typically make use of that time to upgrade the other components in a typical Oracle APEX stack, namely the web server (e.g. Apache Tomcat) and ORDS (Oracle REST Data Services). Another advantage of going with a new version of your middleware is that you have your working Apache Tomcat and ORDS untouched, so in case you have to rollback there's nothing to do. Note that you can prepare most of the following commands beforehand.

Upgrading the Application (web) Server:

Download the new Apache Tomcat and unzip it in your folder of choice.
That is basically all you have to do (on Linux) :)

Upgrading ORDS (Oracle REST Data Services):

Download the new ORDS, unzip it in your folder of choice and cd into it.
Run: java -jar ords.war install advanced
and follow the wizard to install ORDS in APEX_PDB
* make sure you use different config dirs for ORDS in order to run multiple versions of ORDS and APEX

Once done, copy the ords.war into /apache-tomcat-version/webapps
Next copy the images folder of the apex directory to /apache-tomcat-version/webapps:
cp -R /tmp/apex/images /apache-tomcat-version/webapps/i

Start Apache Tomcat:
cd bin
./startup.sh

Restart your Apache Reverse Proxy (and optionally take out the ErrorDocuments)
/sbin/service httpd graceful

It sometimes happens to me that APEX isn't working the first time when I run it.
Then I debug the connection and check the logs of the web server.

Another thing that often helps is running ORDS in standalone mode, as it will give me clear messages. e.g.

WARNING: *** jdbc.MaxLimit in configuration |apex|| is using a value of 10, this setting may not be sized adequately for a production environment ***
WARNING: *** jdbc.InitialLimit in configuration |apex|| is using a value of 3, this setting may not be sized adequately for a production environment ***
WARNING: The pool named: |apex|al| is invalid and will be ignored: The username or password for the connection pool named apex_al, are invalid, expired, or the account is locked
WARNING: The pool named: |apex|rt| is invalid and will be ignored: The username or password for the connection pool named apex_rt, are invalid, expired, or the account is locked

The above warnings remind me to change some ORDS parameters - or I could look up my previous configuration and copy those values. The warnings also indicate that our APEX_LISTENER user can't connect (the apex_al pool), so we need to fix that by specifying the correct password. For apex_rt I forgot which user it was, but that's easy to find: navigate to the ORDS config folder and view the apex_rt file, which names the user.
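When this happens to us, the fix is usually to reset and unlock the database accounts behind those pools - in a default configuration apex_al maps to APEX_LISTENER and apex_rt to APEX_REST_PUBLIC_USER. A sketch, with placeholder passwords:

```sql
alter session set container = APEX_PDB;

-- passwords below are placeholders; use your own and update the
-- corresponding ORDS pool configuration to match
alter user APEX_LISTENER         identified by "MyListenerPwd" account unlock;
alter user APEX_REST_PUBLIC_USER identified by "MyRestPwd" account unlock;
```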

Now we should have APEX 18.1 up-and-running :)

We also want to access the previous versions of APEX, so I copy the older ordsxx.war files to the new web server, but name them ords51.war, ords50.war, so the URLs for the different APEX versions become https://www.apexrnd.be/ords50/ or https://www.apexrnd.be/ords51/
https://www.apexrnd.be/ords/ is always the latest version of APEX. 
We map the images folder of the older APEX version (5.1) to /i51/ (instead of /i/, which now belongs to APEX 18.1). In order to use a different image prefix you need to run the following SQL in apex51_pdb:
SQL> @utilities/reset_image_prefix.sql

We upgraded our systems this weekend, the second day after 18.1 was released. We followed more or less the above procedure and things went fine. Make sure to test your own apps before doing the upgrade. Most of our apps ran just fine, but for some we had to replace older plugins with new versions, or remove them in favour of built-in functionality.

Note: there are many different ways of updating your system. It comes down to see what works for you. What I share works for us, but for example, if you can't afford downtime you probably want to work with standby databases and load balancers. Or if you work with virtual machines or Docker, it might be useful to clone the machine and test things on the entire machine first.

Categories: Development


Jonathan Lewis - Wed, 2018-05-30 04:08

One of my maxims for Oracle performance is: “Don’t try to be too clever”. Apart from the obvious reason that no-one else may be able to understand how to modify your code if the requirements change at a future date, there’s always the possibility that an Oracle upgrade will mean some clever trick you implemented will simply stop working.

While searching for information about a possible Oracle bug recently I noticed the following fix control (v$system_fix_control):

     BUGNO OPTIMIZE SQL_FEATURE                        DESCRIPTION                                                             VALUE
---------- -------- ---------------------------------- ---------------------------------------------------------------- ------------
  18385778          QKSFM_CARDINALITY_18385778         avoid virtual col usage if FI is unusable or invisible 

Maybe that’s just invalidated an idea I published 12 years ago.

I haven’t researched the bug or any underlying SR, but I can think of valid arguments both for and against the fix as described.
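Checking the setting on your own system is a one-liner against the same standard dynamic performance view:

```sql
-- VALUE = 1 means the fix is enabled, 0 means disabled
select bugno, value, sql_feature, description
  from v$system_fix_control
 where bugno = 18385778;
```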



Virtual Developer Workshop - Container Native Application Development

Want to take advantage of cloud computing frameworks, which are composed of loosely-coupled cloud services? Oracle would like to invite you to The Oracle Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Python and cx_Oracle RPMs are available from yum.oracle.com

Christopher Jones - Tue, 2018-05-29 21:04


This is worth cross-posting: Getting Started with Python Development on Oracle Linux

Our Oracle Linux group has had Python and cx_Oracle RPMs available for a while. They recently launched a new landing page with nice, clear instructions on how to install various versions of Python, and how to install the cx_Oracle interface for Oracle Database. Check the link above.

Oracle Utilities and the Oracle Database In-Memory Option

Anthony Shorten - Tue, 2018-05-29 20:09

A few years ago, Oracle introduced an In-Memory option for the database to optimize analytical style applications. In Oracle Database 12c and above, the In-Memory option has been enhanced to support other types of workloads. All Oracle Utilities products are now certified to use the Oracle In-Memory option, on Oracle Database 12c and above, to allow customers to optimize the operational and analytical aspects of the products.

The Oracle In-Memory option is a memory-based column store that co-exists with the existing caching schemes used within Oracle, delivering faster access for complex queries across the products. It is transparent to the product code and can be implemented with a few simple changes to the database to specify the objects to store in memory. Once configured, the Oracle Cost Based Optimizer becomes aware of the data loaded into memory and adjusts the execution plan directly, delivering much better performance in almost all cases.

There are just a few changes that need to be made:

  • Enable the In-Memory Option. The In-Memory capability is already in the database software (no relinking necessary) but it is disabled by default. After licensing the option, you can enable it by setting the amount of the SGA you want to use for the In-Memory store. Remember to ensure that the SGA is large enough to cover the existing memory areas as well as the In-Memory store. This comes down to setting a few database initialization parameters.
  • Enable Adaptive Plans. To tell the optimizer to take the In-Memory Option into account, you need to enable Adaptive Plans. This is flexible: you can turn off In-Memory support without changing the In-Memory settings themselves.
  • Decide the Objects to Load into Memory. Now that the In-Memory Option is enabled, the next step is to decide what is actually loaded into memory. Oracle provides an In-Memory Advisor that analyzes workloads to make suggestions.
  • Alter Objects to Load into Memory. Create the SQL DDL statements that instruct the database to load the chosen objects into memory, including priority and compression options to maximize the flexibility of the option. The In-Memory Advisor can be configured to generate these statements from its analysis.
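The first and last bullets above boil down to SQL along these lines - the store size and the table name are purely illustrative:

```sql
-- enable the In-Memory column store; takes effect after a restart
alter system set inmemory_size = 2G scope = spfile;

-- mark an (illustrative) table for population in the column store
alter table my_fact_table inmemory priority high memcompress for query low;
```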

No changes to the code are necessary to use the option to speed up common queries in the products, as well as analytical queries.

A new whitepaper, Implementing Oracle In-Memory Option (Doc Id: 2404696.1), has been published on My Oracle Support; it outlines the details of this process as well as specific guidelines for implementing the option.

PS. The Oracle In-Memory Option has been significantly enhanced in Oracle Database 18c.


