Feed aggregator

12cR2: Recover nonlogged blocks after NOLOGGING in Data Guard

Yann Neuhaus - Sun, 2017-02-26 09:32

You can accept NOLOGGING operations for bulk loads or index builds, provided that you take a backup just afterwards and that your recovery plan mentions how to load the data again in case of media recovery. With a standby database, we usually force logging because we want redo to be generated for all operations, in order to ship it and apply it on the standby database. 12.2 brings a new solution: do nologging operations without generating redo, and then ship the blocks to the standby. This is done on the standby by RMAN.

On primary ORCLA

I create the demo table
SQL> create table DEMO tablespace users pctfree 99 as select rownum n from xmltable('1 to 1000');
Table created.

put it in NOLOGGING
SQL> alter table DEMO nologging;
Table altered.

The database is not in force logging:
SQL> select force_logging from v$database;
FORCE_LOGGING
---------------------------------------
NO

Here is a direct-path insert
SQL> insert /*+ append */ into DEMO select rownum n from xmltable('1 to 100000');
100000 rows created.
 
SQL> commit;
Commit complete.

My rows are here:
SQL> select count(*) from DEMO;
COUNT(*)
----------
200000

This is a nologging operation. Media recovery is not possible. The datafile needs backup:
RMAN> report unrecoverable;
using target database control file instead of recovery catalog
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
7 full or incremental /u01/oradata/ORCLA/users01.dbf
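
As a cross-check on the primary, the unrecoverable SCN is also exposed in v$datafile; a quick sketch:

sqlplus -s / as sysdba <<'EOF'
select file#, unrecoverable_change#, unrecoverable_time
from v$datafile
where unrecoverable_change# > 0;
EOF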

On ADG standby ORCLB

In Active Data Guard, I can query the table, but:
SQL> select count(*) from DEMO
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 7, block # 16966)
ORA-01110: data file 7: '/u01/oradata/ORCLB/datafile/o1_mf_users_dbvmwdqc_.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

The blocks were not replicated because no redo was generated on the primary, and therefore none was shipped to and applied on the standby.

Note that this is not identified by RMAN on the standby:
RMAN> report unrecoverable;
using target database control file instead of recovery catalog
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
 
RMAN>

recover nonlogged blocks

If I try some recovery here, I can't, because the standby is still in apply mode. Here is the message I get if I try:
ORA-01153: an incompatible media recovery is active

Let’s stop the apply:
DGMGRL> edit database orclb set state=apply-off;
Succeeded.

In 12.1 I can recover the datafile from the primary with ‘recover from service’ but in 12.2 there is no need to ship the whole datafile. The non-logged block list has been shipped to the standby, recorded in the standby controlfile, and we can list them from v$nonlogged_block.
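
Before running the recovery, you can have a look at that list yourself; a quick sketch (the full set of columns of this view is shown after the switchover below):

sqlplus -s / as sysdba <<'EOF'
select file#, block#, blocks, object#, reason from v$nonlogged_block;
EOF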

And we can recover them with a simple command: RECOVER DATABASE NONLOGGED BLOCK

RMAN> recover database nonlogged block;
Starting recover at 25-FEB-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
 
starting recovery of nonlogged blocks
List of Datafiles
=================
File Status Nonlogged Blocks Blocks Examined Blocks Skipped
---- ------ ---------------- --------------- --------------
1 OK 0 0 104959
2 OK 0 0 18783
3 OK 0 0 62719
4 OK 0 0 8959
7 OK 0 16731 18948
 
Details of nonlogged blocks can be queried from v$nonlogged_block view
 
recovery of nonlogged blocks complete, elapsed time: 00:00:03

Here it is, I can query the table now
SQL> select count(*) from DEMO;
COUNT(*)
----------
200000

I re-enable real-time apply
DGMGRL> edit database orclb set state=apply-on;
Succeeded.

Switchover

Now, what would happen if I do a switchover or failover between the nologging operation and the nonlogged block recovery?
I did the same on primary and then:
DGMGRL> switchover to orclb;
Performing switchover NOW, please wait...
Operation requires a connection to database "orclb"
Connecting ...
Connected to "ORCLB"
Connected as SYSDBA.
New primary database "orclb" is opening...
Operation requires start up of instance "ORCLA" on database "orcla"
Starting instance "ORCLA"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "ORCLA"
Switchover succeeded, new primary is "orclb"

I can query the list of nonlogged blocks that was shipped to the standby:

SQL> select * from v$nonlogged_block;
 
FILE# BLOCK# BLOCKS NONLOGGED_START_CHANGE# NONLOGGED
---------- ---------- ---------- ----------------------- ---------
NONLOGGED_END_CHANGE# NONLOGGED RESETLOGS_CHANGE# RESETLOGS
--------------------- --------- ----------------- ---------
OBJECT# REASON CON_ID
---------------------------------------- ------- ----------
7 307 16826 2197748
2197825 1396169 22-FEB-17
74006 UNKNOWN 0

But I cannot recover because the database (the old standby that became primary) is opened:
ORA-01126: database must be mounted in this instance and not open in any instance
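
If this has to be cleaned up right now, the only option I can see is a short outage on the new primary; a sketch, assuming a source for the missing blocks (the old primary, now standby, or a backup) is still available:

sqlplus / as sysdba <<'EOF'
shutdown immediate
startup mount
EOF
rman target / <<'EOF'
recover database nonlogged block;
EOF
sqlplus / as sysdba <<'EOF'
alter database open;
EOF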

So what?

This new feature is acceptable if you recover the nonlogged blocks on the standby just after the nologging operation on the primary. This can be automated for data warehouse loads, but also done manually when doing a reorganization or an application release that touches the data. Just don't forget the recover on the standby, to avoid surprises later. It will not reduce the amount of data that is shipped to the standby, because shipping the blocks is roughly the same as shipping the redo for the direct-path writes. But on the primary you have the performance benefit of nologging operations.
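
If the load itself is scripted, the standby-side recover fits naturally at the end of the same job. A minimal sketch using the broker, where the connect strings and the $PW password variable are placeholders:

dgmgrl sys/"$PW"@orclb "edit database orclb set state=apply-off"
rman target sys/"$PW"@orclb <<'EOF'
recover database nonlogged block;
EOF
dgmgrl sys/"$PW"@orclb "edit database orclb set state=apply-on"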

 

The post 12cR2: Recover nonlogged blocks after NOLOGGING in Data Guard appeared first on Blog dbi services.

dbca does hard coding

Michael Dinh - Sun, 2017-02-26 09:12

Creating Additional Data Dictionary Structures

catoctk.sql: Oracle Cryptographic Toolkit package

owminst.plb: Workspace Manager

Workspace Manager is installed by default in the seed database and in all databases created by the Database Configuration Assistant (DBCA).

However, in all other Oracle databases, such as those you create with a customized procedure, you must install Workspace Manager before you can use its features.

I know what you are thinking, “What’s the BIG deal?”.  For the most part, it’s not.

However, when a person uses dbca to create a database with a different ORACLE_HOME and SID from one server to another, it becomes inefficient to search for and replace all the hard-coded values.

Now I know what you are thinking, "Why are the ORACLE_HOMEs different to begin with, and why don't I just use dbca to create another database?"

That’s not the scope for this post :=) but I am sure you can ascertain the answers.

oracle@arrow1:HAWKA:/u01/app/oracle/admin/foo/scripts
$ ls -alrt

total 40
drwxr-x---. 3 oracle oinstall 4096 Feb 23 20:45 ..
-rw-r-----. 1 oracle oinstall 1948 Feb 23 20:45 init.ora
-rwxr-xr-x. 1 oracle oinstall  627 Feb 23 20:45 foo.sh
-rw-r-----. 1 oracle oinstall 1143 Feb 23 20:45 CreateDB.sql
-rw-r-----. 1 oracle oinstall  346 Feb 23 20:45 CreateDBFiles.sql
-rw-r-----. 1 oracle oinstall  768 Feb 23 20:45 CreateDBCatalog.sql
-rw-r-----. 1 oracle oinstall  506 Feb 23 20:45 lockAccount.sql
-rwxr-xr-x. 1 oracle oinstall  656 Feb 23 20:45 foo.sql
drwxr-x---. 2 oracle oinstall 4096 Feb 23 20:45 .
-rw-r-----. 1 oracle oinstall  966 Feb 23 20:45 postDBCreation.sql

oracle@arrow1:HAWKA:/u01/app/oracle/admin/foo/scripts
$ cat foo.sh

#!/bin/sh

OLD_UMASK=`umask`
umask 0027
mkdir -p /u01/app/oracle/admin/foo/adump
mkdir -p /u01/app/oracle/admin/foo/dpdump
mkdir -p /u01/app/oracle/admin/foo/pfile
mkdir -p /u01/app/oracle/cfgtoollogs/dbca/foo
mkdir -p /u01/app/oracle/fast_recovery_area
mkdir -p /u01/app/oracle/oradata
mkdir -p /u01/app/oracle/product/11.2.0.4/db_1/dbs
umask ${OLD_UMASK}
ORACLE_SID=foo; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH
echo You should Add this entry in the /etc/oratab: foo:/u01/app/oracle/product/11.2.0.4/db_1:Y
/u01/app/oracle/product/11.2.0.4/db_1/bin/sqlplus /nolog @/u01/app/oracle/admin/foo/scripts/foo.sql

oracle@arrow1:HAWKA:/u01/app/oracle/admin/foo/scripts
$ cat foo.sql

set verify off
-- DEFINE sysPassword="hacker"
-- DEFINE systemPassword="hacker"
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
host /u01/app/oracle/product/11.2.0.4/db_1/bin/orapwd file=/u01/app/oracle/product/11.2.0.4/db_1/dbs/orapwfoo force=y
@/u01/app/oracle/admin/foo/scripts/CreateDB.sql
@/u01/app/oracle/admin/foo/scripts/CreateDBFiles.sql
@/u01/app/oracle/admin/foo/scripts/CreateDBCatalog.sql
host /u01/app/oracle/product/11.2.0.4/db_1/bin/srvctl add database -d foo -o /u01/app/oracle/product/11.2.0.4/db_1 -n foo
@/u01/app/oracle/admin/foo/scripts/lockAccount.sql
@/u01/app/oracle/admin/foo/scripts/postDBCreation.sql

oracle@arrow1:HAWKA:/u01/app/oracle/admin/foo/scripts
$ cat CreateDBCatalog.sql

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/foo/scripts/CreateDBCatalog.log append
@/u01/app/oracle/product/11.2.0.4/db_1/rdbms/admin/catalog.sql;
@/u01/app/oracle/product/11.2.0.4/db_1/rdbms/admin/catblock.sql;
@/u01/app/oracle/product/11.2.0.4/db_1/rdbms/admin/catproc.sql;
@/u01/app/oracle/product/11.2.0.4/db_1/rdbms/admin/catoctk.sql;
@/u01/app/oracle/product/11.2.0.4/db_1/rdbms/admin/owminst.plb;
connect "SYSTEM"/"&&systemPassword"
@/u01/app/oracle/product/11.2.0.4/db_1/sqlplus/admin/pupbld.sql;
connect "SYSTEM"/"&&systemPassword"
set echo on
spool /u01/app/oracle/admin/foo/scripts/sqlPlusHelp.log append
@/u01/app/oracle/product/11.2.0.4/db_1/sqlplus/admin/help/hlpbld.sql helpus.sql;
spool off
spool off
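
One way to cope with the hard coding is to rewrite the generated scripts in place before running them on the other server; a sketch, where the new ORACLE_HOME is hypothetical:

OLD=/u01/app/oracle/product/11.2.0.4/db_1
NEW=/u01/app/oracle/product/11.2.0.4/db_2
cd /u01/app/oracle/admin/foo/scripts
sed -i.bak "s|$OLD|$NEW|g" *.sql *.sh init.ora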

Here’s one that I have for creating single instance DB.

Not the best, as I should have labeled the prompts, e.g. "pro *** Running initxml.sql ***",
with the corresponding COMP_ID or COMP_NAME rather than with the name of the sql script.

NOTE: for a production environment, you would not autoextend datafiles in 128M increments (really bad for performance).

define sysPassword="hacker"
define systemPassword="hacker"
spool crdb.log
set echo on timing on time on
host echo $ORACLE_SID
host sysresv
create spfile from pfile;
startup force nomount;
CREATE DATABASE
MAXINSTANCES 1
MAXLOGFILES 32
MAXLOGMEMBERS 4
MAXLOGHISTORY 100
MAXDATAFILES 400
ARCHIVELOG
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
SET TIME_ZONE='US/Pacific'
USER SYS IDENTIFIED BY "&&sysPassword"
USER SYSTEM IDENTIFIED BY "&&systemPassword"
EXTENT MANAGEMENT LOCAL
DATAFILE SIZE 513M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
SYSAUX DATAFILE SIZE 257M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
LOGFILE GROUP 1 SIZE 200M,GROUP 2 SIZE 200M,GROUP 3 SIZE 200M
DEFAULT TEMPORARY TABLESPACE temp TEMPFILE SIZE 256M AUTOEXTEND ON NEXT 128M MAXSIZE 8192M
DEFAULT TABLESPACE users DATAFILE SIZE 129M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
UNDO TABLESPACE undotbs DATAFILE SIZE 256M AUTOEXTEND ON NEXT 128M MAXSIZE 8192M;
spool off
--
spool run_catalog.log
pro *** Running catalog.sql ***
@?/rdbms/admin/catalog.sql
spool off
--
spool run_catproc.log
pro *** Running catproc.sql ***
@?/rdbms/admin/catproc.sql
spool off
/* 
-- BEG Add additional components as required.
spool run_initjvm.log
pro *** Running initjvm.sql ***
@?/javavm/install/initjvm.sql
spool off
--
spool run_initxml.log
pro *** Running initxml.sql ***
@?/xdk/admin/initxml.sql
spool off
--
spool run_xmlja.log
pro *** Running xmlja.sql ***
@?/xdk/admin/xmlja.sql
spool off
--
spool run_catjava.log
pro *** Running catjava.sql ***
@?/rdbms/admin/catjava.sql
spool off
--
spool run_catxdbj.log
pro *** Running catxdbj.sql ***
@?/rdbms/admin/catxdbj.sql
spool off
-- END Add additional components as required.
*/ 
spool crdb.log append 
exec dbms_scheduler.set_scheduler_attribute(attribute=>'default_timezone',value=>'US/Pacific'); 
connect system/"&&systemPassword" 
@?/sqlplus/admin/pupbld.sql 
exit

Running Unix Shell Scripts from the PeopleSoft Process Scheduler

David Kurtz - Sun, 2017-02-26 04:42

It is nearly 10 years since I first wrote about how to call Unix shell scripts from the Process Scheduler.  Although very little has changed, I have had a number of questions recently, so I thought it was time I checked the script and updated the posting.  I have used PeopleTools 8.54 in the preparation of this note.
The Process Scheduler is essentially just a mechanism for initiating processes on another server.  Mostly those are other PeopleSoft delivered executables.  The exception is the Application Engine Tuxedo Server process (PSAESRV) where the Process Scheduler submits a service request message that is picked up by one of the server processes that are already running.
NB: although the PSAESRV server is configured by default in the Process Scheduler domain, Oracle recommend that you should only use this when you have lots of very short-lived (runtime less than 30 seconds) application engine processes.  Typically, this only occurs in CRM.
Process Type Definition
First you need to create a new process type. I chose to call it ‘Shell Script’.  It runs a named shell wrapper script, psft.sh.  The wrapper script calls the script that is to be executed. Note that the command line in the process type definition includes the fully qualified path.


Wrapper Script
This is the wrapper script, psft.sh, that will be called by the process scheduler.
#!/bin/ksh
# (c) David Kurtz 2007
# Script: psft.sh
#
# Syntax: psft.sh DBNAME ACCESSID ACCESSPSWD PRCSINSTANCE command
# where
# DBNAME is the name of the PeopleSoft database with a corresponding TNS entry
# ACCESSID is the schema containing the PeopleSoft database
# ACCESSPSWD is the password to ACCESSID
# PRCSINSTANCE is the process instance number supplied by PeopleSoft
#
# Purpose: To start Standard UNIX Shell Script from Process Scheduler, and interface with the PeopleSoft Process Scheduler
# 07.09.2007 Initial Version
# 23.02.2017 Remove unnecessary logfiles section
#set -x

if [ $# -lt 4 ]; then
echo "Usage $0: <DBNAME> <ACCESSID> <ACCESSPSWD> <PRCSINSTANCE> <command>"
exit 1
fi

CONNECT=$2/$3@$1
PRCSINSTANCE=$4
shift 4

#
# Function to set status of API aware process instance
#
function prcsapi
{
if [ $# -lt 2 ]; then
echo "Parameter Error in function $0"
exit 1
fi

TIMESTAMPCOL=${1}
STATUS=${2}

if [ ${PRCSINSTANCE} -gt 0 ];then
echo "Setting process request ${PRCSINSTANCE} to status ${STATUS}"
sqlplus -S /nolog <<!
set termout off echo off feedback off verify off
connect ${CONNECT}
UPDATE psprcsque
SET runstatus = ${STATUS}
, sessionidnum = $$
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
UPDATE psprcsrqst
SET runstatus = ${STATUS}
, prcsrtncd = ${PRCSRTNCD}
, continuejob = DECODE(${STATUS},2,1,7,1,9,1,0)
, ${TIMESTAMPCOL} = SYSTIMESTAMP
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
COMMIT;
exit
!

RET=$?
if [ $RET -ne 0 ];then
echo "SQL*Plus Error Return Code: $RET"
fi
fi
}

#
# Main Execution Starts Here
#

echo $0:$*
date
uname -a
echo "Current Directory: `pwd`"
echo "Process log files in: ${PSPRCSLOGDIR}"
PRCSRTNCD=0
prcsapi begindttm 7
#set
#Run the command
$*
PRCSRTNCD=$?

if [ ${PRCSRTNCD} -ne 0 ]; then
prcsapi enddttm 3 # failure
else
prcsapi enddttm 9 # success
fi

date
    Notes:
    1. The Oracle user ID, password and TNS name for the PeopleSoft database are supplied in the first three parameters to the wrapper script. The PeopleSoft Process Instance number is the 4th command line parameter. These parameters are then removed with the shift command leaving any other parameters that have been specified.
    2. Function prcsapi sets the status on the process request row and updates the appropriate timestamp columns in the Process Scheduler tables.
    3. PSPRCSQUE.SESSIONIDNUM holds the operating system process ID of the shell executing the wrapper script.
    4. When the process completes and an end of process status is set (either 9 for success, 3 for failure or 2 for delete) CONTINUEJOB is set to 1, otherwise it is set to 0.
    5. When the wrapper script starts, it sets the process status on the process request record to 7 to indicate that it is processing.  This can be seen in the Process Monitor.
    6. The return code of the executed script is captured. Later it will be recorded on PSPRCSRQST.PRCSRTNCD. A non-zero return code indicates an error and the process status will be set to error.
    Process Definition
    Now I can create a Process Definition that will use the process type's wrapper script to execute another command or script.
    The first four parameters passed to the wrapper script are the name of the database, the access ID and password, and the process instance. A string of further parameters will be appended in the individual Process Definition that is the specific command and parameters to be executed.
    It is important that this new process type is defined as being API aware.  That means the process interacts with the Process Scheduler by updating the process status; you can see how the interaction should be done by looking at procedure Update-Process-Status in the delivered SQR library prcsapi.sqc.  Otherwise, the Process Scheduler cannot determine the status of the process.  Consequently, all API-unaware processes have a run status of Success, indicating only that they were started successfully.
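
    The status changes made by the wrapper can be verified with a simple query; a sketch, where the connection details and the process instance number are placeholders:

    echo "select prcsinstance, runstatus, prcsrtncd, begindttm, enddttm
    from psprcsrqst where prcsinstance = ${PRCSINSTANCE};" | sqlplus -s ${ACCESSID}/${ACCESSPSWD}@${DBNAME}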

    I have written a silly test script, mybanner.sh, that I want to be executed by the Process Scheduler.  It just prints out the command line parameters as banner text to both standard output and a file called mybanner.log.  This script will be called by psft.sh.
    The Process Scheduler creates a working directory for each process request.  It sets the variable $PSPRCSLOGDIR to the fully qualified location of this directory. Note that mybanner.sh changes the current directory to the location of this variable so that it writes mybanner.log there, and thus it is picked up by the distribution agent and made available via the report repository.  You may wish to do this in your scripts.
    Current working directory can be specified at Process Type or Process definition.  However, during my testing, I found that these settings had no effect.  The working directory of the script did not change, and the value was not found in any environmental variable.
    #!/bin/ksh
    #A silly script to test psft.sh
    #(c) David Kurtz 2017
    #banner function from http://stackoverflow.com/questions/652517/whats-the-deal-with-the-banner-command

    if [ "$PSPRCSLOGDIR" ] ; then
    cd $PSPRCSLOGDIR
    fi

    (
    while [ $# -gt 0 ]
    do
    /opt/oracle/psft/ptdb/custhome/banner $1
    shift
    done
    ) | tee mybanner.log
    exit $?
    I can now create a Process Definition that uses the Shell Script process type that will execute mybanner.sh.  Note that this command line is appended to the existing command line specified in the Process Type definition.

    You can't quite see it in the screen shot, but the parameter list includes the process instance number:
    /opt/oracle/psft/ptdb/custhome/mybanner.sh "Hello World" %%INSTANCE%%
    Process Scheduler System Settings
    During my testing, I found that it was necessary to specify output type settings for process type Other in the Process Scheduler System Settings; otherwise the output files were not posted to the report repository.


    The newly defined Process can be run just as any other process is usually run. Any output from the script on the standard output channel is captured by the Process Scheduler and written to a log file that can then be viewed from the View Log/Trace facility within Process Monitor.
    In this case the standard output was written to OTH_DMKTEST_39950.log, and I also get the mybanner.log that was written to $PSPRCSLOGDIR in the list of available files.

    mybanner.log contains just the three words passed as parameters
    (ASCII-art banner renderings of "Hello", "World" and "39950"; the alignment was lost in this copy)
    OTH_DMKTEST_39950.log contains the standard output of the entire command, including the additional messages emitted by psft.sh.
    Note that the current directory is reported as being the location of the Process Scheduler Tuxedo domain. 
    /opt/oracle/psft/ptdb/custhome/psft.sh:/opt/oracle/psft/ptdb/custhome/mybanner.sh Hello World 39950
    Tue Sep 1 21:59:46 UTC 2015
    Linux hcm.london.go-faster.co.uk 2.6.39-400.215.10.el5uek #1 SMP Tue Sep 9 22:51:46 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
    Current Directory: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM
    Process log files in: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM/log_output/OTH_DMKTEST_39950
    Setting process request 39950 to status 7

    (the same Hello / World / 39950 banner output as above)

    Setting process request 39950 to status 9
    Tue Sep 1 21:59:46 UTC 2015
    A more detailed version of this document can be found at http://www.go-faster.co.uk/docs/process_scheduler_shell_scripts.pdf.

    Data Guard

    Tom Kyte - Sat, 2017-02-25 21:26
    Hello, Tom Can you explain guaranteed protection mode of the Data Guard? What happens with primary database instance in this mode if standby database is unavailiable? Documentation says in this case all changes to primary is prohibited. If it i...
    Categories: DBA Blogs

    Load JSON data file into Oracle 12c

    Tom Kyte - Sat, 2017-02-25 21:26
    Hi Ask Tom Team, I have a requirement to load JSON file into a table in Oracle database. Have seen some examples using External Table and filling it from a JSON File & then extracting fields through JSON functions to load data into relational ...
    Categories: DBA Blogs

    Update query

    Tom Kyte - Sat, 2017-02-25 21:26
    CREATE TABLE DB_FV_WG ( STORE_ID NUMBER(3,0) Not Null, FV_02 NUMBER, Y_02 NUMBER, YEAR VARCHAR(6), Month_NO NUMBER(2,0), L_Code NUMBER(2,0) ) Insert into DB_FV_WG (STORE_ID,FV_02,Y_02,YEAR,Month_NO,L_Code) values ('111',null,null,2016,...
    Categories: DBA Blogs

    Logminer

    Tom Kyte - Sat, 2017-02-25 21:26
    The v$logmnr_contents view shows the transactions from some tables as "unsupported". It seems to not like some of the column data types... I need to know how is it that in normal RMAN recovery, these transactions are successfully applied to the datab...
    Categories: DBA Blogs

    Procedure help - privileges

    Tom Kyte - Sat, 2017-02-25 03:06
    I am creating a procedure within a schema that has dba access and delete any table privilege. I am deleting from other schemas using this procedure. It gives a compilation error saying the table doesn't exist. Do I need to have direct delete privile...
    Categories: DBA Blogs

    Best approach / best practices for application integration at the database level

    Tom Kyte - Sat, 2017-02-25 03:06
    Hi there, There are two applications, which I'll call Application A and Application P. Both applications are third party products with limited ability to modify the application. However, it's possible to make changes to the database, such as addin...
    Categories: DBA Blogs

    parameterized view revisited

    Tom Kyte - Sat, 2017-02-25 03:06
    Hi Connor and Chris (and welcome Maria), I've read a few entries regarding stored procedures and ref cursor to support parameterized views. However my situation is slightly different. In my case I'm given n tables (TBL_1, TBL_2, ...) each defin...
    Categories: DBA Blogs

    millions of dup rec on join multiple tables

    Tom Kyte - Sat, 2017-02-25 03:06
    Hi Tom, I have one SELECT query , which has join around 10 tables, out of 10 tables 2 of them are big tables and remaining are small tables.These 2 big tables together has 90 laks records. When I execute this query it is throwing me error saying t...
    Categories: DBA Blogs

    cost linked to DBLink

    Tom Kyte - Sat, 2017-02-25 03:06
    Hi Connor/Chris/Maria, My question is I have 2 schemas on 2 different servers Schema A1 and Schema B1. There is a DBlink on Schema B1 to access A1. Now I have a query where I need to access 2 huge tables (each contains 10 million records). My c...
    Categories: DBA Blogs

    How can I track the execution of PL/SQL and SQL?

    Tom Kyte - Sat, 2017-02-25 03:06
    Hi Tom, How can I know what a given user is executing if his status is ACTIVE ? How can I know which PL/SQL blocks or SQL statements are being run by him ? As to SQL statemets, I can join v$session.user# with v$sqlarea.parsing_user_id ( am I reall...
    Categories: DBA Blogs

    migrating data from non-partitioned to partitioned table with exchange then...?

    Tom Kyte - Sat, 2017-02-25 03:06
    Hello, I have to migrating data from one non-partitioned to table into a new range-list composite partitioned. I started with exchange subpartition into P_DEFAULT_MODULES (wich holds all the literal values in subpartition key) without validation...
    Categories: DBA Blogs

    March 15: Fairmont Hotels International―Oracle HCM Cloud Customer Forum

    Linda Fishman Hoyle - Fri, 2017-02-24 15:59

    Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, March 15, 2017, at 9:00 a.m. PDT.

    FRHI Hotels and Resorts is the owner of three major hotel brands―Fairmont, Raffles and Swissôtel―each with 50,000 employees in 30+ countries. In this call, Joël Günter, Director, Talent & Culture Systems Architecture, will discuss why FRHI decided to transform its HR systems and processes in a phased approach to the Oracle HCM Cloud.

    Register now to attend the live forum and learn more about Fairmont Hotels International's experience with Oracle HCM Cloud.

    ksplice

    Wim Coekaerts - Fri, 2017-02-24 15:36
    As many of you probably know by now, a few days ago there was a report of an old, long-standing Linux bug, going back to kernels even down to 2.6.18 and possibly earlier. This bug was recently fixed, see here.

    Now, distribution vendors, including us, have released kernel updates that customers/users can download and install but as always a regular kernel upgrade requires a reboot. We have had ksplice as a service for Oracle Linux support customers for quite a few years now and we also support Ubuntu and Fedora for free for anyone (see here).

    One thing that is not often talked about but, I believe, is very powerful and that I wanted to point out here, is the following:

    Typically the distribution vendors (including us) will release an updated kernel that's the 'latest' version with these CVEs fixed, but many customers run older versions of both the distribution and kernels. We now see some other vendors trying to provide the basics for some online patching, but by and large it's based on one-offs and for specific kernels. A big part of the ksplice service is the backend infrastructure to easily build updates for literally a few thousand kernels. This gives customers great flexibility. You can be on one of many dot-releases of the OS and you can use ksplice. Here is a list of example kernel versions for Oracle Linux that you could be running today, and for which we provide ksplice updates for, for instance, this DCCP bug. That's a big difference with what other folks have been trying to mimic now that online patching has become more and more important for availability.

    Here is an example kernel: 2.6.32-573.7.1.el6.x86_64 #1 SMP Tue Sep 22 08:34:17 PDT 2015. That's a kernel built back in September 2015, a random 'dot release' I run on one of my machines, and there's a ksplice patch available for these recent CVEs. I don't have to worry about having to install the 'latest' kernel, nor about doing a reboot.

    # uptrack-upgrade 
    The following steps will be taken:
    Install [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
    Install [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.
    
    Go ahead [y/N]? y
    Installing [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
    Installing [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.
    Your kernel is fully up to date.
    Effective kernel version is 2.6.32-642.15.1.el6
    

    and done. That easy. My old 2.6.32-573.7.1 kernel looks like 2.6.32-642.15.1 in terms of critical fixes and CVEs.
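
    The effective version can also be queried directly: ksplice's uptrack-uname mirrors uname. With this machine's versions, it looks roughly like this:

    # uname -r
    2.6.32-573.7.1.el6.x86_64
    # uptrack-uname -r
    2.6.32-642.15.1.el6.x86_64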

    # uptrack-show
    Installed updates:
    [cct5dnbf] Clear garbage data on the kernel stack when handling signals.
    [ektd95cj] Reduce usage of reserved percpu memory.
    [uuhgbl3e] Remote denial-of-service in Brocade Ethernet driver.
    [kg3f16ii] CVE-2015-7872: Denial-of-service when garbage collecting uninstantiated keyring.
    [36ng2h1l] CVE-2015-7613: Privilege escalation in IPC object initialization.
    [33jwvtbb] CVE-2015-5307: KVM host denial-of-service in alignment check.
    [38gzh9gl] CVE-2015-8104: KVM host denial-of-service in debug exception.
    [6wvrdj93] CVE-2015-2925: Privilege escalation in bind mounts inside namespaces.
    [1l4i9dfh] CVE-2016-0774: Information leak in the pipe system call on failed atomic read.
    [xu4auj49] CVE-2015-5157: Disable modification of LDT by userspace processes.
    [554ck5nl] CVE-2015-8767: Denial-of-service in SCTP heartbeat timeout.
    [adgeye5p] CVE-2015-8543: Denial-of-service on out of range protocol for raw sockets.
    [5ojkw9lv] CVE-2015-7550: Denial-of-service when reading and revoking a key concurrently.
    [gfr93o7j] CVE-2015-8324: NULL pointer dereference in ext4 on mount error.
    [ft01zrkg] CVE-2013-2015, CVE-2015-7509: Possible privilege escalation when mounting an non-journaled ext4 filesystem.
    [87lw5yyy] CVE-2015-8215: Remote denial-of-service of network traffic when changing the MTU.
    [2bby9cuy] CVE-2010-5313, CVE-2014-7842: Denial of service in KVM L1 guest from L2 guest.
    [orjsp65y] CVE-2015-5156: Denial-of-service in Virtio network device.
    [5j4hp0ot] Device Mapper logic error when reloading the block multi-queue.
    [a1e5kxp6] CVE-2016-4565: Privilege escalation in Infiniband ioctl.
    [gfpg64bh] CVE-2016-5696: Session hijacking in TCP connections.
    [b4ljcwin] Message corruption in pseudo terminal output.
    [prijjgt5] CVE-2016-4470: Denial-of-service in the keyring subsystem.
    [4y2f30ch] CVE-2016-5829: Memory corruption in unknown USB HID devices.
    [j1mivn4f] Denial-of-service when resetting a Fibre Channel over Ethernet interface.
    [nawv8jdu] CVE-2016-5195: Privilege escalation when handling private mapping copy-on-write.
    [97fe0h7s] CVE-2016-1583: Privilege escalation in eCryptfs.
    [fdztfgcv] Denial-of-service when sending a TCP reset from the netfilter.
    [gm4ldjjf] CVE-2016-6828: Use after free during TCP transmission.
    [s8pymcf8] CVE-2016-7117: Denial-of-service in recvmmsg() error handling.
    [1ktf7029] CVE-2016-4997, CVE-2016-4998: Privilege escalation in the Netfilter driver.
    [f4muxalm] CVE-2017-6074: Denial-of-service when using IPV6_RECVPKTINFO socket option.
    [5ncctcgz] CVE-2016-9555: Remote denial-of-service due to SCTP state machine memory corruption.
    
    Effective kernel version is 2.6.32-642.15.1.el6
    

    Here is the list of kernels we build modules for as part of Oracle Linux customers' kernel choices:

    oracle-2.6.18-238.0.0.0.1.el5
    oracle-2.6.18-238.1.1.0.1.el5
    oracle-2.6.18-238.5.1.0.1.el5
    oracle-2.6.18-238.9.1.0.1.el5
    oracle-2.6.18-238.12.1.0.1.el5
    oracle-2.6.18-238.19.1.0.1.el5
    oracle-2.6.18-274.0.0.0.1.el5
    oracle-2.6.18-274.3.1.0.1.el5
    oracle-2.6.18-274.7.1.0.1.el5
    oracle-2.6.18-274.12.1.0.1.el5
    oracle-2.6.18-274.17.1.0.1.el5
    oracle-2.6.18-274.18.1.0.1.el5
    oracle-2.6.18-308.0.0.0.1.el5
    oracle-2.6.18-308.1.1.0.1.el5
    oracle-2.6.18-308.4.1.0.1.el5
    oracle-2.6.18-308.8.1.0.1.el5
    oracle-2.6.18-308.8.2.0.1.el5
    oracle-2.6.18-308.11.1.0.1.el5
    oracle-2.6.18-308.13.1.0.1.el5
    oracle-2.6.18-308.16.1.0.1.el5
    oracle-2.6.18-308.20.1.0.1.el5
    oracle-2.6.18-308.24.1.0.1.el5
    oracle-2.6.18-348.0.0.0.1.el5
    oracle-2.6.18-348.1.1.0.1.el5
    oracle-2.6.18-348.2.1.0.1.el5
    oracle-2.6.18-348.3.1.0.1.el5
    oracle-2.6.18-348.4.1.0.1.el5
    oracle-2.6.18-348.6.1.0.1.el5
    oracle-2.6.18-348.12.1.0.1.el5
    oracle-2.6.18-348.16.1.0.1.el5
    oracle-2.6.18-348.18.1.0.1.el5
    oracle-2.6.18-371.0.0.0.1.el5
    oracle-2.6.18-371.1.2.0.1.el5
    oracle-2.6.18-371.3.1.0.1.el5
    oracle-2.6.18-371.4.1.0.1.el5
    oracle-2.6.18-371.6.1.0.1.el5
    oracle-2.6.18-371.8.1.0.1.el5
    oracle-2.6.18-371.9.1.0.1.el5
    oracle-2.6.18-371.11.1.0.1.el5
    oracle-2.6.18-371.12.1.0.1.el5
    oracle-2.6.18-398.0.0.0.1.el5
    oracle-2.6.18-400.0.0.0.1.el5
    oracle-2.6.18-400.1.1.0.1.el5
    oracle-2.6.18-402.0.0.0.1.el5
    oracle-2.6.18-404.0.0.0.1.el5
    oracle-2.6.18-406.0.0.0.1.el5
    oracle-2.6.18-407.0.0.0.1.el5
    oracle-2.6.18-408.0.0.0.1.el5
    oracle-2.6.18-409.0.0.0.1.el5
    oracle-2.6.18-410.0.0.0.1.el5
    oracle-2.6.18-411.0.0.0.1.el5
    oracle-2.6.18-412.0.0.0.1.el5
    oracle-2.6.18-416.0.0.0.1.el5
    oracle-2.6.18-417.0.0.0.1.el5
    oracle-2.6.18-418.0.0.0.1.el5
    oracle-2.6.32-642.0.0.0.1.el6
    oracle-3.10.0-514.6.1.0.1.el7
    oracle-3.10.0-514.6.2.0.1.el7
    oracle-uek-2.6.39-100.5.1
    oracle-uek-2.6.39-100.6.1
    oracle-uek-2.6.39-100.7.1
    oracle-uek-2.6.39-100.10.1
    oracle-uek-2.6.39-200.24.1
    oracle-uek-2.6.39-200.29.1
    oracle-uek-2.6.39-200.29.2
    oracle-uek-2.6.39-200.29.3
    oracle-uek-2.6.39-200.31.1
    oracle-uek-2.6.39-200.32.1
    oracle-uek-2.6.39-200.33.1
    oracle-uek-2.6.39-200.34.1
    oracle-uek-2.6.39-300.17.1
    oracle-uek-2.6.39-300.17.2
    oracle-uek-2.6.39-300.17.3
    oracle-uek-2.6.39-300.26.1
    oracle-uek-2.6.39-300.28.1
    oracle-uek-2.6.39-300.32.4
    oracle-uek-2.6.39-400.17.1
    oracle-uek-2.6.39-400.17.2
    oracle-uek-2.6.39-400.21.1
    oracle-uek-2.6.39-400.21.2
    oracle-uek-2.6.39-400.23.1
    oracle-uek-2.6.39-400.24.1
    oracle-uek-2.6.39-400.109.1
    oracle-uek-2.6.39-400.109.3
    oracle-uek-2.6.39-400.109.4
    oracle-uek-2.6.39-400.109.5
    oracle-uek-2.6.39-400.109.6
    oracle-uek-2.6.39-400.209.1
    oracle-uek-2.6.39-400.209.2
    oracle-uek-2.6.39-400.210.2
    oracle-uek-2.6.39-400.211.1
    oracle-uek-2.6.39-400.211.2
    oracle-uek-2.6.39-400.211.3
    oracle-uek-2.6.39-400.212.1
    oracle-uek-2.6.39-400.214.1
    oracle-uek-2.6.39-400.214.3
    oracle-uek-2.6.39-400.214.4
    oracle-uek-2.6.39-400.214.5
    oracle-uek-2.6.39-400.214.6
    oracle-uek-2.6.39-400.215.1
    oracle-uek-2.6.39-400.215.2
    oracle-uek-2.6.39-400.215.3
    oracle-uek-2.6.39-400.215.4
    oracle-uek-2.6.39-400.215.6
    oracle-uek-2.6.39-400.215.7
    oracle-uek-2.6.39-400.215.10
    oracle-uek-2.6.39-400.215.11
    oracle-uek-2.6.39-400.215.12
    oracle-uek-2.6.39-400.215.13
    oracle-uek-2.6.39-400.215.14
    oracle-uek-2.6.39-400.215.15
    oracle-uek-2.6.39-400.243.1
    oracle-uek-2.6.39-400.245.1
    oracle-uek-2.6.39-400.246.2
    oracle-uek-2.6.39-400.247.1
    oracle-uek-2.6.39-400.248.3
    oracle-uek-2.6.39-400.249.1
    oracle-uek-2.6.39-400.249.3
    oracle-uek-2.6.39-400.249.4
    oracle-uek-2.6.39-400.250.2
    oracle-uek-2.6.39-400.250.4
    oracle-uek-2.6.39-400.250.5
    oracle-uek-2.6.39-400.250.6
    oracle-uek-2.6.39-400.250.7
    oracle-uek-2.6.39-400.250.9
    oracle-uek-2.6.39-400.250.10
    oracle-uek-2.6.39-400.250.11
    oracle-uek-2.6.39-400.264.1
    oracle-uek-2.6.39-400.264.4
    oracle-uek-2.6.39-400.264.5
    oracle-uek-2.6.39-400.264.6
    oracle-uek-2.6.39-400.264.13
    oracle-uek-2.6.39-400.276.1
    oracle-uek-2.6.39-400.277.1
    oracle-uek-2.6.39-400.278.1
    oracle-uek-2.6.39-400.278.2
    oracle-uek-2.6.39-400.278.3
    oracle-uek-2.6.39-400.280.1
    oracle-uek-2.6.39-400.281.1
    oracle-uek-2.6.39-400.282.1
    oracle-uek-2.6.39-400.283.1
    oracle-uek-2.6.39-400.283.2
    oracle-uek-2.6.39-400.284.1
    oracle-uek-2.6.39-400.284.2
    oracle-uek-2.6.39-400.286.2
    oracle-uek-2.6.39-400.286.3
    oracle-uek-2.6.39-400.290.1
    oracle-uek-2.6.39-400.290.2
    oracle-uek-2.6.39-400.293.1
    oracle-uek-2.6.39-400.293.2
    oracle-uek-2.6.39-400.294.1
    oracle-uek-2.6.39-400.294.2
    oracle-uek-2.6.39-400.128.21
    oracle-uek-3.8.13-16
    oracle-uek-3.8.13-16.1.1
    oracle-uek-3.8.13-16.2.1
    oracle-uek-3.8.13-16.2.2
    oracle-uek-3.8.13-16.2.3
    oracle-uek-3.8.13-16.3.1
    oracle-uek-3.8.13-26
    oracle-uek-3.8.13-26.1.1
    oracle-uek-3.8.13-26.2.1
    oracle-uek-3.8.13-26.2.2
    oracle-uek-3.8.13-26.2.3
    oracle-uek-3.8.13-26.2.4
    oracle-uek-3.8.13-35
    oracle-uek-3.8.13-35.1.1
    oracle-uek-3.8.13-35.1.2
    oracle-uek-3.8.13-35.1.3
    oracle-uek-3.8.13-35.3.1
    oracle-uek-3.8.13-35.3.2
    oracle-uek-3.8.13-35.3.3
    oracle-uek-3.8.13-35.3.4
    oracle-uek-3.8.13-35.3.5
    oracle-uek-3.8.13-44
    oracle-uek-3.8.13-44.1.1
    oracle-uek-3.8.13-44.1.3
    oracle-uek-3.8.13-44.1.4
    oracle-uek-3.8.13-44.1.5
    oracle-uek-3.8.13-55
    oracle-uek-3.8.13-55.1.1
    oracle-uek-3.8.13-55.1.2
    oracle-uek-3.8.13-55.1.5
    oracle-uek-3.8.13-55.1.6
    oracle-uek-3.8.13-55.1.8
    oracle-uek-3.8.13-55.2.1
    oracle-uek-3.8.13-68
    oracle-uek-3.8.13-68.1.2
    oracle-uek-3.8.13-68.1.3
    oracle-uek-3.8.13-68.2.2
    oracle-uek-3.8.13-68.2.2.1
    oracle-uek-3.8.13-68.2.2.2
    oracle-uek-3.8.13-68.3.1
    oracle-uek-3.8.13-68.3.2
    oracle-uek-3.8.13-68.3.3
    oracle-uek-3.8.13-68.3.4
    oracle-uek-3.8.13-68.3.5
    oracle-uek-3.8.13-98
    oracle-uek-3.8.13-98.1.1
    oracle-uek-3.8.13-98.1.2
    oracle-uek-3.8.13-98.2.1
    oracle-uek-3.8.13-98.2.2
    oracle-uek-3.8.13-98.4.1
    oracle-uek-3.8.13-98.5.2
    oracle-uek-3.8.13-98.6.1
    oracle-uek-3.8.13-98.7.1
    oracle-uek-3.8.13-98.8.1
    oracle-uek-3.8.13-118
    oracle-uek-3.8.13-118.2.1
    oracle-uek-3.8.13-118.2.2
    oracle-uek-3.8.13-118.2.4
    oracle-uek-3.8.13-118.2.5
    oracle-uek-3.8.13-118.3.1
    oracle-uek-3.8.13-118.3.2
    oracle-uek-3.8.13-118.4.1
    oracle-uek-3.8.13-118.4.2
    oracle-uek-3.8.13-118.6.1
    oracle-uek-3.8.13-118.6.2
    oracle-uek-3.8.13-118.7.1
    oracle-uek-3.8.13-118.8.1
    oracle-uek-3.8.13-118.9.1
    oracle-uek-3.8.13-118.9.2
    oracle-uek-3.8.13-118.10.2
    oracle-uek-3.8.13-118.11.2
    oracle-uek-3.8.13-118.13.2
    oracle-uek-3.8.13-118.13.3
    oracle-uek-3.8.13-118.14.1
    oracle-uek-3.8.13-118.14.2
    oracle-uek-3.8.13-118.15.1
    oracle-uek-3.8.13-118.15.2
    oracle-uek-3.8.13-118.15.3
    oracle-uek-3.8.13-118.16.2
    oracle-uek-3.8.13-118.16.3
    oracle-uek-4.1.12-32
    oracle-uek-4.1.12-32.1.2
    oracle-uek-4.1.12-32.1.3
    oracle-uek-4.1.12-32.2.1
    oracle-uek-4.1.12-32.2.3
    oracle-uek-4.1.12-37.2.1
    oracle-uek-4.1.12-37.2.2
    oracle-uek-4.1.12-37.3.1
    oracle-uek-4.1.12-37.4.1
    oracle-uek-4.1.12-37.5.1
    oracle-uek-4.1.12-37.6.1
    oracle-uek-4.1.12-37.6.2
    oracle-uek-4.1.12-37.6.3
    oracle-uek-4.1.12-61.1.6
    oracle-uek-4.1.12-61.1.9
    oracle-uek-4.1.12-61.1.10
    oracle-uek-4.1.12-61.1.13
    oracle-uek-4.1.12-61.1.14
    oracle-uek-4.1.12-61.1.16
    oracle-uek-4.1.12-61.1.17
    oracle-uek-4.1.12-61.1.18
    oracle-uek-4.1.12-61.1.19
    oracle-uek-4.1.12-61.1.21
    oracle-uek-4.1.12-61.1.22
    oracle-uek-4.1.12-61.1.23
    oracle-uek-4.1.12-61.1.24
    oracle-uek-4.1.12-61.1.25
    oracle-uek-4.1.12-61.1.27
    rhel-2.6.32-71.el6
    rhel-2.6.32-71.7.1.el6
    rhel-2.6.32-71.14.1.el6
    rhel-2.6.32-71.18.1.el6
    rhel-2.6.32-71.18.2.el6
    rhel-2.6.32-71.24.1.el6
    rhel-2.6.32-71.29.1.el6
    rhel-2.6.32-131.0.15.el6
    rhel-2.6.32-131.2.1.el6
    rhel-2.6.32-131.4.1.el6
    rhel-2.6.32-131.6.1.el6
    rhel-2.6.32-131.12.1.el6
    rhel-2.6.32-131.17.1.el6
    rhel-2.6.32-131.21.1.el6
    rhel-2.6.32-220.el6
    rhel-2.6.32-220.2.1.el6
    rhel-2.6.32-220.4.1.el6
    rhel-2.6.32-220.4.2.el6
    rhel-2.6.32-220.7.1.el6
    rhel-2.6.32-220.13.1.el6
    rhel-2.6.32-220.17.1.el6
    rhel-2.6.32-220.23.1.el6
    rhel-2.6.32-279.el6
    rhel-2.6.32-279.1.1.el6
    rhel-2.6.32-279.2.1.el6
    rhel-2.6.32-279.5.1.el6
    rhel-2.6.32-279.5.2.el6
    rhel-2.6.32-279.9.1.el6
    rhel-2.6.32-279.11.1.el6
    rhel-2.6.32-279.14.1.el6
    rhel-2.6.32-279.19.1.el6
    rhel-2.6.32-279.22.1.el6
    rhel-2.6.32-358.el6
    rhel-2.6.32-358.0.1.el6
    rhel-2.6.32-358.2.1.el6
    rhel-2.6.32-358.6.1.el6
    rhel-2.6.32-358.6.2.el6
    rhel-2.6.32-358.6.2.el6.x86_64.crt1
    rhel-2.6.32-358.11.1.el6
    rhel-2.6.32-358.14.1.el6
    rhel-2.6.32-358.18.1.el6
    rhel-2.6.32-358.23.2.el6
    rhel-2.6.32-431.el6
    rhel-2.6.32-431.1.2.el6
    rhel-2.6.32-431.3.1.el6
    rhel-2.6.32-431.5.1.el6
    rhel-2.6.32-431.11.2.el6
    rhel-2.6.32-431.17.1.el6
    rhel-2.6.32-431.20.3.el6
    rhel-2.6.32-431.20.5.el6
    rhel-2.6.32-431.23.3.el6
    rhel-2.6.32-431.29.2.el6
    rhel-2.6.32-504.el6
    rhel-2.6.32-504.1.3.el6
    rhel-2.6.32-504.3.3.el6
    rhel-2.6.32-504.8.1.el6
    rhel-2.6.32-504.12.2.el6
    rhel-2.6.32-504.16.2.el6
    rhel-2.6.32-504.23.4.el6
    rhel-2.6.32-504.30.3.el6
    rhel-2.6.32-573.el6
    rhel-2.6.32-573.1.1.el6
    rhel-2.6.32-573.3.1.el6
    rhel-2.6.32-573.7.1.el6
    rhel-2.6.32-573.8.1.el6
    rhel-2.6.32-573.12.1.el6
    rhel-2.6.32-573.18.1.el6
    rhel-2.6.32-573.22.1.el6
    rhel-2.6.32-573.26.1.el6
    rhel-2.6.32-642.el6
    rhel-2.6.32-642.1.1.el6
    rhel-2.6.32-642.3.1.el6
    rhel-2.6.32-642.4.2.el6
    rhel-2.6.32-642.6.1.el6
    rhel-2.6.32-642.6.2.el6
    rhel-2.6.32-642.11.1.el6
    rhel-2.6.32-642.13.1.el6
    rhel-2.6.32-642.13.2.el6
    rhel-3.10.0-123.el7
    rhel-3.10.0-123.1.2.el7
    rhel-3.10.0-123.4.2.el7
    rhel-3.10.0-123.4.4.el7
    rhel-3.10.0-123.6.3.el7
    rhel-3.10.0-123.8.1.el7
    rhel-3.10.0-123.9.2.el7
    rhel-3.10.0-123.9.3.el7
    rhel-3.10.0-123.13.1.el7
    rhel-3.10.0-123.13.2.el7
    rhel-3.10.0-123.20.1.el7
    rhel-3.10.0-229.el7
    rhel-3.10.0-229.1.2.el7
    rhel-3.10.0-229.4.2.el7
    rhel-3.10.0-229.7.2.el7
    rhel-3.10.0-229.11.1.el7
    rhel-3.10.0-229.14.1.el7
    rhel-3.10.0-229.20.1.el6.x86_64.knl2
    rhel-3.10.0-229.20.1.el7
    rhel-3.10.0-327.el7
    rhel-3.10.0-327.3.1.el7
    rhel-3.10.0-327.4.4.el7
    rhel-3.10.0-327.4.5.el7
    rhel-3.10.0-327.10.1.el7
    rhel-3.10.0-327.13.1.el7
    rhel-3.10.0-327.18.2.el7
    rhel-3.10.0-327.22.2.el7
    rhel-3.10.0-327.28.2.el7
    rhel-3.10.0-327.28.3.el7
    rhel-3.10.0-327.36.1.el7
    rhel-3.10.0-327.36.2.el7
    rhel-3.10.0-327.36.3.el7
    rhel-3.10.0-514.el7
    rhel-3.10.0-514.2.2.el7
    rhel-3.10.0-514.6.1.el7
    rhel-3.10.0-514.6.2.el7
    rhel-2.6.18-92.1.10.el5
    rhel-2.6.18-92.1.13.el5
    rhel-2.6.18-92.1.17.el5
    rhel-2.6.18-92.1.18.el5
    rhel-2.6.18-92.1.22.el5
    rhel-2.6.18-128.el5
    rhel-2.6.18-128.1.1.el5
    rhel-2.6.18-128.1.6.el5
    rhel-2.6.18-128.1.10.el5
    rhel-2.6.18-128.1.14.el5
    rhel-2.6.18-128.1.16.el5
    rhel-2.6.18-128.2.1.el5
    rhel-2.6.18-128.4.1.el5
    rhel-2.6.18-128.7.1.el5
    rhel-2.6.18-149.el5
    rhel-2.6.18-164.el5
    rhel-2.6.18-164.2.1.el5
    rhel-2.6.18-164.6.1.el5
    rhel-2.6.18-164.9.1.el5
    rhel-2.6.18-164.10.1.el5
    rhel-2.6.18-164.11.1.el5
    rhel-2.6.18-164.15.1.el5
    rhel-2.6.18-194.el5
    rhel-2.6.18-194.3.1.el5
    rhel-2.6.18-194.8.1.el5
    rhel-2.6.18-194.11.1.el5
    rhel-2.6.18-194.11.3.el5
    rhel-2.6.18-194.11.4.el5
    rhel-2.6.18-194.17.1.el5
    rhel-2.6.18-194.17.4.el5
    rhel-2.6.18-194.26.1.el5
    rhel-2.6.18-194.32.1.el5
    rhel-2.6.18-238.el5
    rhel-2.6.18-238.1.1.el5
    rhel-2.6.18-238.5.1.el5
    rhel-2.6.18-238.9.1.el5
    rhel-2.6.18-238.12.1.el5
    rhel-2.6.18-238.19.1.el5
    rhel-2.6.18-274.el5
    rhel-2.6.18-274.3.1.el5
    rhel-2.6.18-274.7.1.el5
    rhel-2.6.18-274.12.1.el5
    rhel-2.6.18-274.17.1.el5
    rhel-2.6.18-274.18.1.el5
    rhel-2.6.18-308.el5
    rhel-2.6.18-308.1.1.el5
    rhel-2.6.18-308.4.1.el5
    rhel-2.6.18-308.8.1.el5
    rhel-2.6.18-308.8.2.el5
    rhel-2.6.18-308.11.1.el5
    rhel-2.6.18-308.13.1.el5
    rhel-2.6.18-308.16.1.el5
    rhel-2.6.18-308.20.1.el5
    rhel-2.6.18-308.24.1.el5
    rhel-2.6.18-348.el5
    rhel-2.6.18-348.1.1.el5
    rhel-2.6.18-348.2.1.el5
    rhel-2.6.18-348.3.1.el5
    rhel-2.6.18-348.4.1.el5
    rhel-2.6.18-348.6.1.el5
    rhel-2.6.18-348.12.1.el5
    rhel-2.6.18-348.16.1.el5
    rhel-2.6.18-348.18.1.el5
    rhel-2.6.18-371.el5
    rhel-2.6.18-371.1.2.el5
    rhel-2.6.18-371.3.1.el5
    rhel-2.6.18-371.4.1.el5
    rhel-2.6.18-371.6.1.el5
    rhel-2.6.18-371.8.1.el5
    rhel-2.6.18-371.9.1.el5
    rhel-2.6.18-371.11.1.el5
    rhel-2.6.18-371.12.1.el5
    rhel-2.6.18-398.el5
    rhel-2.6.18-400.el5
    rhel-2.6.18-400.1.1.el5
    rhel-2.6.18-402.el5
    rhel-2.6.18-404.el5
    rhel-2.6.18-406.el5
    rhel-2.6.18-407.el5
    rhel-2.6.18-408.el5
    rhel-2.6.18-409.el5
    rhel-2.6.18-410.el5
    rhel-2.6.18-411.el5
    rhel-2.6.18-412.el5
    rhel-2.6.18-416.el5
    rhel-2.6.18-417.el5
    rhel-2.6.18-418.el5
    

    Compare that to kpatch or kgraft.

    Spotlight Series Video on Elasticsearch and PeopleSoft

    PeopleSoft Technology Blog - Fri, 2017-02-24 09:58

    We've just posted a new Spotlight Series video on using Elasticsearch with PeopleSoft.  This video is the first in a series, and covers the following topics:

    • What is Elasticsearch and why did PeopleSoft move to it for our search engine?
    • What releases of PeopleTools and PeopleSoft applications are needed to deploy Elasticsearch?
    • How do customers obtain it?  Are there any licensing implications?
    • What are the advantages of Elasticsearch?

    In addition to these points, the main purpose of the video is to provide an overview of the process for deploying Elasticsearch with your PeopleSoft environments.  We cover the overall process, and describe the scenarios that may apply to your enterprise.

    We will be releasing additional videos on Elasticsearch with PeopleSoft in short order.  These videos will cover the following topics in greater detail:

    • Transitioning to Elasticsearch from Oracle's Secure Enterprise Search
    • Initial installation of Elasticsearch
    • Administration and maintenance of Elasticsearch

    All these videos will be posted on the PeopleSoft YouTube channel.

    Data Preparation for "Analyse Elections with Oracle Data Visualisation Desktop"

    Rittman Mead Consulting - Fri, 2017-02-24 09:13

    In the previous post I wrote about some interesting insights we can get from the elections data, and this post concentrates on the data preparation for it. Like the first one, it is not intended as a guide; it aims to show some data manipulation that an end user can do without an IT department and without coding. Data preparation is not strictly needed for understanding the visual part, but it is still a piece of the story, so it was extracted into a separate post.

    The Data

    From the logical point of view I have two datasets.

    1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
    2. Turnout figures at given times throughout the day.

    From the physical point of view, both datasets are big files (about 500 MB combined), each line of which is a single record. These records represent information at various levels of aggregation. I will use this fact to show some of the Data flows features (great introduction from Francesco Tisiot).

    Generally, each record is a JSON which looks like this:

    {  
     [...]
     "Data":{  
      "1 | Number of voters included in the voters list at the time of the end of voting":"102",
      "2 | The number of ballots received by the precinct election commission":"102",
      "3 | The number of ballots issued to voters who voted early":"0",
      [...]
      "19 | 1 Russian political party 'Rodina'":"0",
      "20 | 2 Political party 'COMMUNISTS COMMUNIST PARTY OF RUSSIA'":"3",
      "21 | 3 Political party 'Russian Party of Pensioners for Justice'":"0",
      [...]
      },
      "Loc":[  
       "Nizhny Novgorod Region",
       "129 Nizhny Novgorod Region - Nizhny Novgorod",
       "28 Krasnooktyabrskaya",
       "PEC #1148"
       ]
    }
    

    You can find a full sample of the JSON here.

    Data part is a set of 32 measures and Loc is an array of one to four elements. The more detailed the record, the more elements in Loc array.

    The whole votes dataset is slightly smaller than 100,000 records. As I said, records have various levels of aggregation, and this sample shows the most detailed data (its Loc has four elements). Nizhny Novgorod Region is obviously the region, 129 Nizhny Novgorod Region - Nizhny Novgorod is a concatenation of the district electoral commission number (129), the region name (Nizhny Novgorod Region) and the district electoral commission name (Nizhny Novgorod). 28 Krasnooktyabrskaya is the number and name of a territorial election commission, and PEC #1148 is a precinct electoral commission. For the most aggregated records Loc looks like:

    "Loc":[  
          "DEC #129"
       ]
    

    This is an aggregated data for a district election commission #129.

    Unfortunately, DVD doesn't work with JSON format directly. As one of the possibilities, we can use Apache Drill (for more information see previous posts by Francesco Tisiot and Robin Moffatt). Apache Drill is a supported option for Oracle DV, but it isn't an end-user tool and the story is not about building a complete and reusable system. It is about the storytelling. So I did a one-time manual conversion using Excel magic and got a plain table.
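
    For a repeatable alternative to that one-time Excel conversion, a tool like jq could flatten the records; a sketch, where protocols.json is a hypothetical name for the one-record-per-line votes file:

    # keep only the most detailed records (Loc has four elements) and emit CSV
    jq -r 'select(.Loc | length == 4) | (.Loc + [.Data[]]) | @csv' protocols.json > protocols.csv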

    The second part of the data is turnout. In JSON form it's basically the same: one record per line, a Data part with four values and a Loc part based on the same principles. The data is significantly smaller and has only four values - turnout at four fixed moments. But here I need it in a more dimensional form: I want time as a dimension rather than pivoted.

    In my opinion, pivot/unpivot is one more highly desired feature DVD needs. We can live without it in tools like OBIEE because its models aren't built on rapidly changing user-generated data structures, but for desktop data exploration tools like DVD powerful data transformation capabilities are crucial.

    But while it isn't implemented, I did the unpivot in Excel and got this.
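
    The same unpivot can also be scripted; a sketch in awk, where the five-column CSV layout (commission plus four turnout readings) and the times themselves are hypothetical:

    # emit one (commission, time, turnout) row per reading
    awk -F, 'BEGIN { OFS = ","; split("10:00 12:00 15:00 18:00", t, " ") }
             { for (i = 2; i <= 5; i++) print $1, t[i-1], $i }' turnout.csv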

    It wasn't obvious at the beginning of the data preparation stage that I would need it in this particular form, but the good news is that both visualisation and data manipulation live in one product, and in reality I was able to build charts, understand that I needed more data/attributes/etc., add them, and go back to the analysis. Very smooth and clean.

    I Thought We Weren’t Supposed to Use Excel?

    While DVD's Data flows feature is powerful and very useful, there are some data manipulation tasks that Excel just does easier or better than DVD. I hope that Oracle will improve Data flows over time, but right now it's not perfect.

    I used Excel for the following:

    1. Splitting long strings into fields. I could do it using locate+substring+left+right functions in DVD, but that would produce unreadable code. Excel's Text to Columns tool is the weapon of choice so far.
    2. UNION. When I need to combine two datasets into one, Excel’s copy and paste are beyond competition. Adding set operations (not only union) will greatly improve Oracle’s tool.
    3. A few technical tasks like translating the data. Initially, the data was not in English and I had to translate it. I could use both Excel and DVD for this task but chose Excel in order to simplify the post. But if I did it in DVD, it would be just a few more joins. Can't say what is simpler, so no clear winner here.

    Note: I'm talking about static data sets. In case of a changing data that may be not true.

    Data Transformations in DVD

    While I was able to (had to) do some of the data preparation in Excel, it's better and easier to do some tasks in DVD using the relatively new Data flows feature. I will start with building a clean dictionary of electoral commissions.

    When I finished my Excel part, I got this:

    It isn't so bad, but I can make it better in a few minutes. I want to rename Location1 into Subject, split Location2 into DEC #129 and Nizhny Novgorod, remove leading digits from Location3 and rename it to TEC name, and also rename Location4 into PEC #.

    Here is the data flow which does it. It may not look like the simplest thing in the world, but I spent about the same time building it as you spent reading what it does.

    The second thing I want to do is to split my wide file with 32 measures into two: one for party results, and another one for other, let's say "technical", measures like the number of voters, ballots and so on. It's not absolutely necessary, but it will please me and anyone else who sees it. Both data flows for this task use the Commissions dictionary I made earlier and the same source file. I filter the source to get only the most detailed data, join with commissions, select only the interesting columns and save the result. Simple and straightforward. The only difference is the sets of selected columns.

    I could start building visualisations with this data, but I'd prefer to calculate some relative measures first. Absolute numbers won't help me because every commission has its own number of voters. We can't compare them directly, but relative measures will do the trick.
    The data flow for these calculations is the simplest of all.
    The only part that is not immediately obvious is why I placed two consecutive Add columns steps. The first one calculates the Total votes column as the sum of votes for all parties, and the second one uses this column to calculate percentages.

    And it is really cool how we can use the results of different flows. I took Protocols, made Commissions, joined Protocols and Commissions to get the Votes data, and later made calculations based on the Votes data. That was very cool.

    And here I want to show how the autocomplete feature in the formulas editor works (because I really like it). I hope to see it in OBIEE one day.

    I have one more data flow for the turnout data, but it simply joins source file with the dictionary of commissions.

    Summary

    I believe Oracle DVD is a very powerful tool for a data analyst. While many users prefer to stick with Excel exclusively, DVD can significantly increase their data preparation and transformation capabilities. Even now, at the very beginning of its journey, DVD makes many data manipulation operations fast and easy. It is obviously worth giving it a try and including it in a data analysis process. Look how the data from this post may be used for analysis and for drawing interesting conclusions: Analyse Elections with Oracle Data Visualisation Desktop.

    Categories: BI & Warehousing

    IMPDP over network link ORA-3916 Schema XXXXX not found error

    Tom Kyte - Fri, 2017-02-24 08:46
    Hi there. I'm trying ot migrate database objects between 2 instances via network link. Source database is 11.2.0.3 version Target database is 12.1.0.2 version On both systems exists a schema named TAGDATI with DATAPUMP_EXP_FULL_DATABASE and DATAP...
    Categories: DBA Blogs

    Alternative to Oracle Wrap Utility

    Tom Kyte - Fri, 2017-02-24 08:46
    Hello, We have requirement to release our product at customer location(usually we just deliver processed data to the customer) and allow them to use it for sometime before they are satisfied, hence to secure code/algorithm we need to wrap the code...
    Categories: DBA Blogs
