Feed aggregator

Using opatch report feature

Michael Dinh - Sat, 2016-11-05 13:51

I have been patching a lot lately and started to wonder if there is a simpler way to check for patch conflicts.

Reviewing the log from running opatch auto shows that opatch auto does perform the conflict check; so why do we need to check this manually?

What about creating scripts to do this? I have done that as well, but the scripts require modification whenever the requirements change.

If the post name did not give it away, or if you want more details, then take a look at opatch auto report.

If you are still interested, here is the script I use to check for conflicts.
#!/bin/sh -ex
export PATCH_TOP_DIR=/media/sf_Linux/11.2.0.4/Oct2106PSU
export GRID_HOME=`grep ':' /etc/oratab|grep -v '^#'|awk -F ":" '{print $2}'|grep 'grid'|uniq`
export ORACLE_HOME=`grep ':' /etc/oratab|grep -v '^#'|awk -F ":" '{print $2}'|grep 'db'|uniq`
export AGENT_HOME=
env|egrep 'PATCH|GRID_HOME|ORACLE_HOME|AGENT_HOME'
#
ls -lrt $PATCH_TOP_DIR
ls -ld $GRID_HOME
ls -ld $ORACLE_HOME
#
ps -ef|grep pmon
ps -ef|grep tns
ps -ef|grep agent_inst
#
grep ':' /etc/oratab|grep -v '^#'
grep ':' /etc/oratab|grep -v '^#'|awk -F ":" '{print $2}'|uniq
which make ar ld nm
#
echo "Check OPatch Version"
$GRID_HOME/OPatch/opatch version -oh $GRID_HOME -v2c 11.2.0.3.6
$ORACLE_HOME/OPatch/opatch version -oh $ORACLE_HOME -v2c 11.2.0.3.6
#
echo "Check Conflict for 24436338"
$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $GRID_HOME -phBaseDir $PATCH_TOP_DIR/24436338
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $ORACLE_HOME -phBaseDir $PATCH_TOP_DIR/24436338
#
echo "Check Conflict for 24315821"
$GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $GRID_HOME -phBaseDir $PATCH_TOP_DIR/24315821
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $ORACLE_HOME -phBaseDir $PATCH_TOP_DIR/24315821
#
echo "Check lsinventory > /tmp/gi_inv.txt"
$GRID_HOME/OPatch/opatch lspatches -oh $GRID_HOME
$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME > /tmp/gi_inv.txt
#
echo "Check lsinventory > /tmp/db_inv.txt"
$ORACLE_HOME/OPatch/opatch lspatches -oh $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME > /tmp/db_inv.txt
#
head -50 /tmp/gi_inv.txt
tail -50 /tmp/gi_inv.txt
head -50 /tmp/db_inv.txt
tail -50 /tmp/db_inv.txt
echo "******** COMPLETED ********"
exit
Here are the results from running the script.
[oracle@rac01:emu1:/media/sf_Linux/scripts]
$ ./check_psu.sh
+ export PATCH_TOP_DIR=/media/sf_Linux/11.2.0.4/Oct2106PSU
+ PATCH_TOP_DIR=/media/sf_Linux/11.2.0.4/Oct2106PSU
++ grep : /etc/oratab
++ grep -v '^#'
++ awk -F : '{print $2}'
++ grep grid
++ uniq
+ export GRID_HOME=/u01/app/11.2.0.4/grid
+ GRID_HOME=/u01/app/11.2.0.4/grid
++ grep : /etc/oratab
++ grep db
++ awk -F : '{print $2}'
++ grep -v '^#'
++ uniq
+ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
+ ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
+ export AGENT_HOME=
+ AGENT_HOME=
+ env
+ egrep 'PATCH|GRID_HOME|ORACLE_HOME|AGENT_HOME'
AGENT_HOME=
GRID_HOME=/u01/app/11.2.0.4/grid
PATCH_TOP_DIR=/media/sf_Linux/11.2.0.4/Oct2106PSU
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
+ ls -lrt /media/sf_Linux/11.2.0.4/Oct2106PSU
total 736824
drwxrwx---. 1 root vboxsf      4096 Sep 14 04:40 24436338
drwxrwx---. 1 root vboxsf      4096 Sep 15 09:00 24315821
-rwxrwx---. 1 root vboxsf    112021 Oct 18 07:09 PatchSearch.xml
-rwxrwx---. 1 root vboxsf  42351279 Nov  5 11:25 p24315821_112040_Linux-x86-64.zip
-rwxrwx---. 1 root vboxsf 712035213 Nov  5 11:28 p24436338_112040_Linux-x86-64.zip
+ ls -ld /u01/app/11.2.0.4/grid
drwxr-xr-x. 72 root oinstall 4096 Nov  5 06:40 /u01/app/11.2.0.4/grid
+ ls -ld /u01/app/oracle/product/11.2.0.4/db_1
drwxrwxr-x. 79 oracle oinstall 4096 Nov  5 11:39 /u01/app/oracle/product/11.2.0.4/db_1
+ ps -ef
+ grep pmon
oracle    1969     1  0 10:12 ?        00:00:01 ora_pmon_emu1_1
grid      3654     1  0 06:42 ?        00:00:03 asm_pmon_+ASM1
oracle    9458  9439  0 11:47 pts/0    00:00:00 grep pmon
+ ps -ef
+ grep tns
root        19     2  0 Nov04 ?        00:00:00 [netns]
grid      4234     1  0 06:43 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
grid      4255     1  0 06:43 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle    9460  9439  0 11:47 pts/0    00:00:00 grep tns
+ ps -ef
+ grep agent_inst
oracle    9462  9439  0 11:47 pts/0    00:00:00 grep agent_inst
+ grep : /etc/oratab
+ grep -v '^#'
+ASM1:/u01/app/11.2.0.4/grid:N          # line added by Agent
db01:/u01/app/oracle/product/11.2.0.4/db_1:N
emu1:/u01/app/oracle/product/11.2.0.4/db_1:N
emu1_1:/u01/app/oracle/product/11.2.0.4/db_1:N
emu1_2:/u01/app/oracle/product/11.2.0.4/db_1:N
emu:/u01/app/oracle/product/11.2.0.4/db_1:N             # line added by Agent
+ grep : /etc/oratab
+ grep -v '^#'
+ uniq
+ awk -F : '{print $2}'
/u01/app/11.2.0.4/grid
/u01/app/oracle/product/11.2.0.4/db_1
+ which make ar ld nm
/usr/bin/make
/usr/bin/ar
/usr/bin/ld
/usr/bin/nm
+ echo 'Check OPatch Version'
Check OPatch Version
+ /u01/app/11.2.0.4/grid/OPatch/opatch version -oh /u01/app/11.2.0.4/grid -v2c 11.2.0.3.6
OPatch could not create/open history file for writing.

Oracle Interim Patch Installer version 11.2.0.3.15
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0.4/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0.4/grid/oraInst.loc
OPatch version    : 11.2.0.3.15
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0.4/grid/cfgtoollogs/opatch/opatch2016-11-05_11-47-31AM_1.log


OPatch version for all Oracle Homes registered in Central Inventory:
Location                   Version
/u01/app/11.2.0.4/grid     11.2.0.3.15


All of the Oracle Home(s) pass the Minimum OPatch Version requirement.


OPatch succeeded.
+ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch version -oh /u01/app/oracle/product/11.2.0.4/db_1 -v2c 11.2.0.3.6
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0.4/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc
OPatch version    : 11.2.0.3.12
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/db_1/cfgtoollogs/opatch/opatch2016-11-05_11-47-32AM_1.log


OPatch version for all Oracle Homes registered in Central Inventory:
Location                                  Version
/u01/app/oracle/product/11.2.0.4/db_1     11.2.0.3.12


All of the Oracle Home(s) pass the Minimum OPatch Version requirement.


OPatch succeeded.
+ echo 'Check Conflict for 24436338'
Check Conflict for 24436338
+ /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/11.2.0.4/grid -phBaseDir /media/sf_Linux/11.2.0.4/Oct2106PSU/24436338
OPatch could not create/open history file for writing.

Oracle Interim Patch Installer version 11.2.0.3.15
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/11.2.0.4/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0.4/grid/oraInst.loc
OPatch version    : 11.2.0.3.15
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0.4/grid/cfgtoollogs/opatch/opatch2016-11-05_11-47-33AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
+ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/oracle/product/11.2.0.4/db_1 -phBaseDir /media/sf_Linux/11.2.0.4/Oct2106PSU/24436338
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/11.2.0.4/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc
OPatch version    : 11.2.0.3.12
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/db_1/cfgtoollogs/opatch/opatch2016-11-05_11-47-39AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
+ echo 'Check Conflict for 24315821'
Check Conflict for 24315821
+ /u01/app/11.2.0.4/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/11.2.0.4/grid -phBaseDir /media/sf_Linux/11.2.0.4/Oct2106PSU/24315821
OPatch could not create/open history file for writing.

Oracle Interim Patch Installer version 11.2.0.3.15
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/11.2.0.4/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0.4/grid/oraInst.loc
OPatch version    : 11.2.0.3.15
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0.4/grid/cfgtoollogs/opatch/opatch2016-11-05_11-47-47AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
+ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/oracle/product/11.2.0.4/db_1 -phBaseDir /media/sf_Linux/11.2.0.4/Oct2106PSU/24315821
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/11.2.0.4/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc
OPatch version    : 11.2.0.3.12
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/db_1/cfgtoollogs/opatch/opatch2016-11-05_11-47-48AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
+ echo 'Check lsinventory > /tmp/gi_inv.txt'
Check lsinventory > /tmp/gi_inv.txt
+ /u01/app/11.2.0.4/grid/OPatch/opatch lspatches -oh /u01/app/11.2.0.4/grid
OPatch could not create/open history file for writing.

22502505;ACFS Patch Set Update : 11.2.0.4.160419 (22502505)
22502549;OCW Patch Set Update : 11.2.0.4.160419 (22502549)
22502456;Database Patch Set Update : 11.2.0.4.160419 (22502456)

OPatch succeeded.
+ /u01/app/11.2.0.4/grid/OPatch/opatch lsinventory -detail -oh /u01/app/11.2.0.4/grid
+ echo 'Check lsinventory > /tmp/db_inv.txt'
Check lsinventory > /tmp/db_inv.txt
+ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch lspatches -oh /u01/app/oracle/product/11.2.0.4/db_1
22502549;OCW Patch Set Update : 11.2.0.4.160419 (22502549)
22502456;Database Patch Set Update : 11.2.0.4.160419 (22502456)

OPatch succeeded.
+ /u01/app/oracle/product/11.2.0.4/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/11.2.0.4/db_1
+ head -50 /tmp/gi_inv.txt
OPatch could not create/open history file for writing.

Oracle Interim Patch Installer version 11.2.0.3.15
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0.4/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0.4/grid/oraInst.loc
OPatch version    : 11.2.0.3.15
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0.4/grid/cfgtoollogs/opatch/opatch2016-11-05_11-47-52AM_1.log

Lsinventory Output file location : /u01/app/11.2.0.4/grid/cfgtoollogs/opatch/lsinv/lsinventory2016-11-05_11-47-52AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: rac01
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 11g                                       11.2.0.4.0
There are 1 products installed in this Oracle Home.


Installed Products (89):

Agent Required Support Files                                         10.2.0.4.5
Assistant Common Files                                               11.2.0.4.0
Automatic Storage Management Assistant                               11.2.0.4.0
Bali Share                                                           1.1.18.0.0
Buildtools Common Files                                              11.2.0.4.0
Character Set Migration Utility                                      11.2.0.4.0
Cluster Ready Services Files                                         11.2.0.4.0
Cluster Verification Utility Common Files                            11.2.0.4.0
Cluster Verification Utility Files                                   11.2.0.4.0
Database SQL Scripts                                                 11.2.0.4.0
Deinstallation Tool                                                  11.2.0.4.0
Enterprise Manager Common Core Files                                 10.2.0.4.5
Enterprise Manager Common Files                                      10.2.0.4.5
Enterprise Manager plugin Common Files                               11.2.0.4.0
Expat libraries                                                       2.0.1.0.1
HAS Common Files                                                     11.2.0.4.0
HAS Files for DB                                                     11.2.0.4.0
Installation Common Files                                            11.2.0.4.0
Installation Plugin Files                                            11.2.0.4.0
Installer SDK Component                                              11.2.0.4.0
Java Development Kit                                                1.5.0.51.10
+ tail -50 /tmp/gi_inv.txt
     oraus.msg --> ORACLE_HOME/rdbms/mesg/oraus.msg
     ordim.jar --> ORACLE_HOME/ord/jlib/ordim.jar
     ordimimg.jar --> ORACLE_HOME/ord/jlib/ordimimg.jar
     orion.o --> ORACLE_HOME/rdbms/lib/orion.o
     prvtadv.plb --> ORACLE_HOME/rdbms/admin/prvtadv.plb
     prvtblin.plb --> ORACLE_HOME/rdbms/admin/prvtblin.plb
     prvtbsdp.plb --> ORACLE_HOME/rdbms/admin/prvtbsdp.plb
     prvtbsmt.plb --> ORACLE_HOME/rdbms/admin/prvtbsmt.plb
     prvtbstr.plb --> ORACLE_HOME/rdbms/admin/prvtbstr.plb
     prvtdadv.plb --> ORACLE_HOME/rdbms/admin/prvtdadv.plb
     prvthsdp.plb --> ORACLE_HOME/rdbms/admin/prvthsdp.plb
     prvthsmt.plb --> ORACLE_HOME/rdbms/admin/prvthsmt.plb
     prvthstr.plb --> ORACLE_HOME/rdbms/admin/prvthstr.plb
     prvtlmcb.plb --> ORACLE_HOME/rdbms/admin/prvtlmcb.plb
     prvtlmd.plb --> ORACLE_HOME/rdbms/admin/prvtlmd.plb
     prvtpckl.plb --> ORACLE_HOME/rdbms/admin/prvtpckl.plb
     prvtredacta.plb --> ORACLE_HOME/rdbms/admin/prvtredacta.plb
     prvtutil.plb --> ORACLE_HOME/rdbms/admin/prvtutil.plb
     recover.bsq --> ORACLE_HOME/rdbms/admin/recover.bsq
     schema --> ORACLE_HOME/bin/schema
     sdodbmig.sql --> ORACLE_HOME/md/admin/sdodbmig.sql
     sdopatch.sql --> ORACLE_HOME/md/admin/sdopatch.sql
     sjsex.o --> ORACLE_HOME/rdbms/lib/sjsex.o
     spawrio.sql --> ORACLE_HOME/rdbms/admin/spawrio.sql
     utlu112i.sql --> ORACLE_HOME/rdbms/admin/utlu112i.sql
     utlu112x.sql --> ORACLE_HOME/rdbms/admin/utlu112x.sql
     xml --> ORACLE_HOME/bin/xml
   Patch Location in Inventory:
     /u01/app/11.2.0.4/grid/inventory/oneoffs/22502456
     /u01/app/11.2.0.4/grid/inventory/oneoffs/21948347
     /u01/app/11.2.0.4/grid/inventory/oneoffs/21352635
     /u01/app/11.2.0.4/grid/inventory/oneoffs/20760982
     /u01/app/11.2.0.4/grid/inventory/oneoffs/20299013
     /u01/app/11.2.0.4/grid/inventory/oneoffs/19769489
     /u01/app/11.2.0.4/grid/inventory/oneoffs/19121551
     /u01/app/11.2.0.4/grid/inventory/oneoffs/18522509
     /u01/app/11.2.0.4/grid/inventory/oneoffs/18031668
     /u01/app/11.2.0.4/grid/inventory/oneoffs/17478514
   Patch Location in Storage area:
     /u01/app/11.2.0.4/grid/.patch_storage/22502456_Mar_21_2016_11_49_22



Rac system comprising of multiple nodes
  Local node = rac01
  Remote node = rac02

--------------------------------------------------------------------------------

OPatch succeeded.
+ head -50 /tmp/db_inv.txt
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0.4/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.4/db_1/oraInst.loc
OPatch version    : 11.2.0.3.12
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/db_1/cfgtoollogs/opatch/opatch2016-11-05_11-47-55AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0.4/db_1/cfgtoollogs/opatch/lsinv/lsinventory2016-11-05_11-47-55AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: rac01
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home.


Installed Products (136):

Agent Required Support Files                                         10.2.0.4.5
Assistant Common Files                                               11.2.0.4.0
Bali Share                                                           1.1.18.0.0
Buildtools Common Files                                              11.2.0.4.0
Character Set Migration Utility                                      11.2.0.4.0
Cluster Verification Utility Common Files                            11.2.0.4.0
Database Configuration and Upgrade Assistants                        11.2.0.4.0
Database SQL Scripts                                                 11.2.0.4.0
Database Workspace Manager                                           11.2.0.4.0
Deinstallation Tool                                                  11.2.0.4.0
Enterprise Edition Options                                           11.2.0.4.0
Enterprise Manager Agent                                             10.2.0.4.5
Enterprise Manager Agent Core Files                                  10.2.0.4.5
Enterprise Manager Common Core Files                                 10.2.0.4.5
Enterprise Manager Common Files                                      10.2.0.4.5
Enterprise Manager Database Plugin -- Agent Support                  11.2.0.4.0
Enterprise Manager Database Plugin -- Repository Support             11.2.0.4.0
Enterprise Manager Grid Control Core Files                           10.2.0.4.5
Enterprise Manager plugin Common Files                               11.2.0.4.0
Enterprise Manager Repository Core Files                             10.2.0.4.5
Exadata Storage Server                                               11.2.0.1.0
Expat libraries                                                       2.0.1.0.1
Generic Connectivity Common Files                                    11.2.0.4.0
+ tail -50 /tmp/db_inv.txt
     prvtblin.plb --> ORACLE_HOME/rdbms/admin/prvtblin.plb
     prvtbsdp.plb --> ORACLE_HOME/rdbms/admin/prvtbsdp.plb
     prvtbsmt.plb --> ORACLE_HOME/rdbms/admin/prvtbsmt.plb
     prvtbstr.plb --> ORACLE_HOME/rdbms/admin/prvtbstr.plb
     prvtdadv.plb --> ORACLE_HOME/rdbms/admin/prvtdadv.plb
     prvthsdp.plb --> ORACLE_HOME/rdbms/admin/prvthsdp.plb
     prvthsmt.plb --> ORACLE_HOME/rdbms/admin/prvthsmt.plb
     prvthstr.plb --> ORACLE_HOME/rdbms/admin/prvthstr.plb
     prvtlmcb.plb --> ORACLE_HOME/rdbms/admin/prvtlmcb.plb
     prvtlmd.plb --> ORACLE_HOME/rdbms/admin/prvtlmd.plb
     prvtpckl.plb --> ORACLE_HOME/rdbms/admin/prvtpckl.plb
     prvtredacta.plb --> ORACLE_HOME/rdbms/admin/prvtredacta.plb
     prvtutil.plb --> ORACLE_HOME/rdbms/admin/prvtutil.plb
     recover.bsq --> ORACLE_HOME/rdbms/admin/recover.bsq
     schema --> ORACLE_HOME/bin/schema
     sdodbmig.sql --> ORACLE_HOME/md/admin/sdodbmig.sql
     sdopatch.sql --> ORACLE_HOME/md/admin/sdopatch.sql
     sjsex.o --> ORACLE_HOME/rdbms/lib/sjsex.o
     spawrio.sql --> ORACLE_HOME/rdbms/admin/spawrio.sql
     utlu112i.sql --> ORACLE_HOME/rdbms/admin/utlu112i.sql
     utlu112x.sql --> ORACLE_HOME/rdbms/admin/utlu112x.sql
     xml --> ORACLE_HOME/bin/xml
     xmlcg --> ORACLE_HOME/bin/xmlcg
     xmldiff --> ORACLE_HOME/bin/xmldiff
     xmlpatch --> ORACLE_HOME/bin/xmlpatch
     xsl --> ORACLE_HOME/bin/xsl
     xvm --> ORACLE_HOME/bin/xvm
   Patch Location in Inventory:
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/22502456
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/21948347
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/21352635
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/20760982
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/20299013
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/19769489
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/19121551
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/18522509
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/18031668
     /u01/app/oracle/product/11.2.0.4/db_1/inventory/oneoffs/17478514
   Patch Location in Storage area:
     /u01/app/oracle/product/11.2.0.4/db_1/.patch_storage/22502456_Mar_21_2016_11_49_22



Rac system comprising of multiple nodes
  Local node = rac01
  Remote node = rac02

--------------------------------------------------------------------------------

OPatch succeeded.
+ echo '******** COMPLETED ********'
******** COMPLETED ********
+ exit
[oracle@rac01:emu1:/media/sf_Linux/scripts]
$
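Since each new PSU means editing the script to add another pair of prereq calls, the per-patch blocks above could be generalized into a loop over whatever patch directories exist under PATCH_TOP_DIR. Here is one possible sketch (the check_conflicts helper and the DRYRUN variable are my own illustration, not part of the original script):

```shell
#!/bin/sh
# check_conflicts: run the OPatch conflict prereq for every patch directory
# under the patch top ($1) against each Oracle home passed as a later argument.
# Set DRYRUN=echo to print the commands instead of executing them.
check_conflicts() {
    patch_top=$1; shift
    for patch_dir in "$patch_top"/[0-9]*; do
        [ -d "$patch_dir" ] || continue
        echo "Check Conflict for $(basename "$patch_dir")"
        for oh in "$@"; do
            ${DRYRUN:-} "$oh/OPatch/opatch" prereq CheckConflictAgainstOHWithDetail \
                -oh "$oh" -phBaseDir "$patch_dir"
        done
    done
}

# Preview the commands against a scratch copy of the patch layout.
mkdir -p /tmp/psu_demo/24315821 /tmp/psu_demo/24436338
DRYRUN=echo check_conflicts /tmp/psu_demo /u01/app/11.2.0.4/grid
```

With DRYRUN unset the same call would actually execute opatch, so adding a new patch to the top directory needs no script changes.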

ORA-04031 Unable to Allocate Shared Memory

VitalSoftTech - Sat, 2016-11-05 02:38
When executing a procedure I am getting "ORA-04031: unable to allocate shared memory" error. How do I avoid this error?
Categories: DBA Blogs

Oracle Completes Tender Offer for Acquisition of NetSuite

Oracle Press Releases - Sat, 2016-11-05 00:00
Press Release
Oracle Completes Tender Offer for Acquisition of NetSuite
Deal to Close Monday

Redwood Shores, Calif.—Nov 5, 2016

Oracle Corporation (NYSE: ORCL) announced that a majority of the unaffiliated shares of NetSuite Inc. (NYSE: N) were validly tendered and the other conditions to the tender offer have been satisfied.  The acquisition of NetSuite will be consummated by Oracle on Monday, November 7, 2016.  American Stock Transfer & Trust Company LLC, the depositary for the tender offer, has indicated that as of 12:00 Midnight, Eastern time, at the end of November 4, 2016, approximately 21,775,553 unaffiliated Shares, or 53.21% of the total unaffiliated Shares, and 62,330,605 total Shares, or 76.39% of the total Shares issued and outstanding, have been tendered into and not properly withdrawn from the tender offer. Both figures do not include 1,037,279 unaffiliated Shares tendered pursuant to the guaranteed delivery procedures set forth in the Offer to Purchase.  When the 1,037,279 unaffiliated Shares tendered pursuant to the guaranteed delivery procedures are included in the results, approximately 22,812,832 unaffiliated Shares, or 55.76% of the total unaffiliated Shares, and 63,367,884 total Shares, or 77.65% of the total shares issued and outstanding, will have been tendered into and not properly withdrawn from the tender offer.  

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.506.4073
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Additional Information

This communication does not constitute an offer to buy or solicitation of an offer to sell any securities. This communication is for informational purposes only. The tender offer is not being made to, nor will tenders be accepted from, or on behalf of, holders of shares in any jurisdiction in which the making of the tender offer or the acceptance thereof would not comply with the laws of that jurisdiction. The tender offer is being made pursuant to a tender offer statement on Schedule TO (including the Offer to Purchase, a related Letter of Transmittal and other offer materials) filed by Napa Acquisition Corporation with the U.S. Securities and Exchange Commission ("SEC") on August 18, 2016, as amended from time to time. In addition, on August 18, 2016, NetSuite Inc. filed a Solicitation/Recommendation statement on Schedule 14D-9 with the SEC related to the tender offer. Stockholders of NetSuite Inc. are urged to read these documents, all amendments thereto and other documents filed with the SEC carefully in their entirety because they contain important information about the tender offer. The tender offer statement and certain other offer documents, along with the Solicitation/Recommendation statement, will be made available to all stockholders of NetSuite Inc. at no expense to them. These documents are available at no charge through the web site maintained by the SEC at http://www.sec.gov. The Offer to Purchase, related Letter of Transmittal, the Solicitation/Recommendation statement and other offering documents may also be obtained for free by contacting the Information Agent for the tender offer, Innisfree, M&A Incorporated, toll-free at 888-750-5834.

Cautionary Statement Regarding Forward-Looking Statements

This document contains certain forward-looking statements about Oracle and NetSuite Inc., including statements that involve risks and uncertainties concerning Oracle's proposed acquisition of NetSuite Inc., anticipated customer benefits and general business outlook. When used in this document, the words "anticipates", "can", "will", "look forward to", "expected" and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or NetSuite Inc., that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business, and the possibility that Oracle or NetSuite Inc. may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or NetSuite Inc. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor NetSuite Inc. is under any duty to update any of the information in this document.

In addition, please refer to the documents that Oracle and NetSuite Inc., respectively, file with the SEC on Forms 10-K, 10-Q and 8-K. These filings identify and address other important factors that could cause Oracle's and NetSuite Inc.'s respective operational and other results to differ materially from those contained in the forward-looking statements set forth in this document. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor NetSuite Inc. is under any duty to update any of the information in this document.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.506.4073

Mechanism to migrate and two-way sync database

Tom Kyte - Fri, 2016-11-04 22:46
Hi Tom I have two schemas in different physical servers. The data structures and data are different because of poor data design. I want to migrate data from one server to another server and then two-way sync databases. In two-way sync, I pla...
Categories: DBA Blogs

Listener working without listener.ora

Tom Kyte - Fri, 2016-11-04 22:46
I have an Oracle DB working in production for more than a year. I have my listener working but NO listener.ora file. <code>[oracle@base admin]$ lsnrctl status LSNRCTL for Linux: Version 11.2.0.4.0 Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST...
Categories: DBA Blogs

ORA-22288 while trying to write an XML file to a CLOB column

Tom Kyte - Fri, 2016-11-04 22:46
Hi Tom, I am trying to put the contents of an xml into a clob column of a table in database using a sql query and am not being able to do so. I am getting <ORA-22288: file or LOB operation FILEOPEN failed ORA-06512: at "SYS.XMLTYPE", line 296 OR...
Categories: DBA Blogs

Unsafe behavior when using functions results in SELECTs and Views

Tom Kyte - Fri, 2016-11-04 22:46
Hello, I post you my issue after Oracle support says there is no 'great' workaround. It seems to be non documented and only Oracle performance teams were informed of this 'expected' behavior (in their opinion). To summarize the problem : W...
Categories: DBA Blogs

bind variables

Tom Kyte - Fri, 2016-11-04 22:46
Hello, I created bind variables in a dataset and would like to reference them in a second dataset using "where 1=1". what do I need to put after each variable in the first for the reference to hold?
Categories: DBA Blogs

loop in plsql

Tom Kyte - Fri, 2016-11-04 22:46
Hi, Does a loop (for, while) create an implicit or explicit cursor? To me, it looks like an implicit cursor because it opens the memory area implicitly. But the definition of an implicit cursor is any SELECT and DML operation defined a implicit cursor. C...
Categories: DBA Blogs

Scope of Savepoint

Tom Kyte - Fri, 2016-11-04 22:46
1. How to access savepoint which is created in another procedure ? 2. Can we rollback only the records processed between two save points ?
Categories: DBA Blogs

Row Locking on Insert

Tom Kyte - Fri, 2016-11-04 22:46
Hello team, I want to discuss an issue of locking occurring on insert into my table. The issue is that I have a table A with a primary key on column 1, a foreign key on nullable column 2 without any index. Table B is having foreign key of Table A P.key w...
Categories: DBA Blogs

Partner Webcast – Build your Digital Business with Oracle Cloud

Becoming a digital business is now an imperative for all organizations. The key to this transformation is the productivity tools and services that employees rely on to deliver business value....

We share our skills to maximize your revenue!
Categories: DBA Blogs

Yes

Wim Coekaerts - Fri, 2016-11-04 16:22
More Linux work :)

Running SQL*Plus from a Python script

Bobby Durrett's DBA Blog - Fri, 2016-11-04 16:08

I needed to write a new script that would run on a Red Hat Linux 6 virtual machine and connect to databases using SQL*Plus. I was going to write a bash shell script but decided to use Python instead, to see if I could do it using the Python that came with this version of Linux. I won't paste the entire script here, but the key was to run SQL*Plus from Python instead of a shell script. Here is a simple example showing how I did it:

$ cat test.py
import subprocess

"""

Example of running a sqlplus script from python 2.6.6.

"""

def run_sqlplus(sqlplus_script):

    """

    Run a sql command or group of commands against
    a database using sqlplus.

    """

    p = subprocess.Popen(['sqlplus','/nolog'],stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,stderr=subprocess.PIPE)
    (stdout,stderr) = p.communicate(sqlplus_script)
    stdout_lines = stdout.split("\n")

    return stdout_lines

sqlplus_script="""
connect test/test
select * from dual;
exit

"""

sqlplus_output = run_sqlplus(sqlplus_script)

for line in sqlplus_output:
    print line

Here is the output:

$ python test.py

SQL*Plus: Release 12.1.0.2.0 Production on Fri Nov 4 15:44:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> SQL> Connected.
SQL>
D
-
X

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

The function run_sqlplus takes a multi-line string as input. This is the text of a SQL*Plus script. It runs sqlplus /nolog to get a SQL*Plus prompt and then passes the strings in as the lines typed at the prompt. The function returns SQL*Plus’s output as a list of strings.

Anyway, this is just an example. I built a real script using these concepts. In this case we are using Python 2.6.6 without cx_Oracle installed so I couldn’t connect directly to an Oracle database. Instead I just ran SQL*Plus from Python.
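One practical wrinkle with this approach is that SQL*Plus does not reliably report failures through its exit code, so a caller often scans the returned lines for error prefixes instead. Here is a minimal sketch of a helper that could sit alongside run_sqlplus; the ORA-/SP2- prefix list is my assumption, not part of the original script:

```python
def sqlplus_errors(output_lines):
    # Return lines that look like Oracle (ORA-) or SQL*Plus (SP2-) errors.
    # Assumed prefixes; extend the tuple for other error classes you care about.
    return [line for line in output_lines
            if line.lstrip().startswith(("ORA-", "SP2-"))]

sample = ["SQL> select * from missing_table;",
          "ORA-00942: table or view does not exist",
          "SQL> exit"]

for err in sqlplus_errors(sample):
    print(err)
```

Calling this on the list returned by run_sqlplus lets the script fail loudly instead of silently swallowing a bad connect string or a missing table.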

Bobby

Categories: DBA Blogs

Documentum story – Migrate repository cryptography from crypto key to the crypto keystore

Yann Neuhaus - Fri, 2016-11-04 11:45

In this blog post I will explain the migration of the crypto key (aek.key) to the crypto keystore (lockbox) for an upgraded repository. During customer activities, I was involved in the migration of a repository hosted on a Content Server 6.7 SP2 to a new infrastructure running Content Server 7.2 P05, where the lockbox was already used by other repositories. The prerequisite for this migration is an already configured repository using the lockbox, which will help to re-encrypt the password of the database owner stored later in the dbpasswd.txt file. The repository whose crypto key we wanted to migrate was properly upgraded to Content Server 7.2 P05 using the Content Server Configuration Program, keeping the existing aek.key from the old system. Our goal was to align the cryptography mechanism to use the lockbox.

Please note that the following instructions cannot be used if the file store is encrypted with Trusted Content Services.

We also recommend removing the LDAP objects and re-creating them later so the passwords can be properly re-encrypted and the DER and PEM files re-created under $DOCUMENTUM/dba/secure/ldapdb.

First of all, we need to check which encryption algorithm is currently used by retrieving r_crypto_mode from the docbase config:

1> select r_crypto_mode from dm_docbase_config
2> GO
r_crypto_mode
----------------------------------------------------------------
3DES_RSA1024_SHA256
(1 row affected)

Before shutting down the repository, we will retrieve a list of all users that are configured with an inline password. We will have to reset their passwords as the encryption mechanism will be changed.

Query to retrieve inline password users:

select user_login_name from dm_user where user_source = 'inline password';

We can now shutdown the repository:

$DOCUMENTUM/dba/dm_shutdown_repo1

It’s a best practice to back up the aek.key and the lockbox files located under the $DOCUMENTUM/dba/secure folder:

cd $DOCUMENTUM/dba
tar -czvf secure-bck-MigAek-$(date +%Y%m%d).tgz secure

It’s also a good practice to back up all files that will be updated by this procedure:

cd $DOCUMENTUM/dba/config/repo1
cp server.ini server.ini.bck-MigAek-$(date +%Y%m%d)
cp dbpasswd.txt dbpasswd.txt.bck-MigAek-$(date +%Y%m%d)

We will now connect to the Database schema to remove all encryption values. During the next startup of the repository, encryption values will be properly regenerated automatically.

sqlplus repo1/passwd@db1

Once connected you can issue the following SQL instructions :

update dm_docbase_config_s set i_crypto_key = ' ', i_ticket_crypto_key = ' ';

delete from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
delete from dmi_vstamp_s where i_application = 'dm_docbase_config_ticket_crypto_key_init';

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_public_key_certificate_s where key_type = 1;

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_cryptographic_key_s where key_type = 1;
commit;

The repository initialization file can now be updated to move from the crypto key (aek.key) to the crypto keystore (lockbox):

vi $DOCUMENTUM/dba/config/repo1/server.ini

Inside this file, you should comment out the following lines:

crypto_keystore=Local
crypto_mode=3DES_RSA1024_SHA256
crypto_keyname=aek.key

Then uncomment/add the following lines:

crypto_mode = AES256_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek

Please make sure that the crypto_mode in the first section is the same as the one retrieved before stopping the repository. The crypto_mode of the second section (the new one) can contain the value that you want. The strongest mode available is "AES256_RSA1024_SHA256", so that is probably the value you will want to set. In our case, we faced an issue because of the old repository and had to keep the same crypto_mode as the original, which was "3DES_RSA1024_SHA256". Therefore, in the end, our server.ini file contained the following:

crypto_mode = 3DES_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek
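Because a crypto_mode mismatch like the one described above is easy to get wrong, a small script can extract the active crypto settings from server.ini for comparison against the docbase's r_crypto_mode before starting the repository. A minimal sketch, assuming plain "key = value" lines as in the file above:

```python
def crypto_settings(server_ini_text):
    # Collect the active (non-commented) crypto_* key/value pairs.
    # Commented-out lines (leading '#'), like the old aek.key section,
    # are skipped.
    settings = {}
    for raw in server_ini_text.splitlines():
        line = raw.strip()
        if line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key.startswith("crypto_"):
            settings[key] = value
    return settings

ini = """
#crypto_keyname=aek.key
crypto_mode = 3DES_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek
"""

# The parsed crypto_mode should match r_crypto_mode from dm_docbase_config.
print(crypto_settings(ini)["crypto_mode"])
```

This is only a convenience check; the authoritative value remains the r_crypto_mode retrieved from dm_docbase_config earlier in this procedure.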

Once this is done, we now need to encrypt the database owner's password for the repository repo1. But how can we do this since the repository isn't started? Simple: we need an already available repository on this content server (it's a prerequisite, actually) that is already configured to use the lockbox. We used the global registry for that purpose:

> iapi gr_dbi

Please enter a user (dmadmin):
Please enter password for dmadmin:


EMC Documentum iapi - Interactive API interface
(c) Copyright EMC Corp., 1992 - 2015
All rights reserved.
Client Library Release 7.2.0050.0084


Connecting to Server using docbase gr_dbi
[DM_SESSION_I_SESSION_START]info: "Session 010f583d80095f8a started for user dmadmin."


Connected to Documentum Server running Release 7.2.0050.0214 Linux64.Oracle
Session id is s0
API> initcrypto,c
...
OK
API> encrypttext,c,xxxx
...
DM_ENCR_TEXT_V2=AAAAEJpJA5bVkJGghYFqrik3kxJ0gaWIRNvhVmZA586sBuFx7NqKnbKSpdUYf/BvJgn10OQpoZXL1T7Y2L+RmwZRUPkWqsv139zbU7u2vw7UYKX3

We can now set the encrypted password in dbpasswd.txt as follows:

vi /app/dctm/server/dba/config/repo1/dbpasswd.txt

Add the following content inside this file:

DM_ENCR_TEXT_V2=AAAAEPAcQcFYDlYJ26cqrQ17oC0yXek7E+1g83e069TNkYXjwoRd5zXpZmCm4NdrNYAJDLJ4iEKEzrzFlAuGUWw25KXs/98L0AI7VJx2DLUZNkjX

When this has been done, the repository is now ready to start:

$DOCUMENTUM/dba/dm_start_repo1

Check the log files and validate that the repository has been properly started. Something similar should be displayed:

2016-09-05T09:11:28.907641 21003[21003] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase repo1 opened"

2016-09-05T09:11:28.907790 21003[21003] 0000000000000000 [DM_SERVER_I_SERVER]info: "Setting exception handlers to catch all interrupts"

2016-09-05T09:11:28.907820 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Starting server using service name: repo1"

2016-09-05T09:11:29.402450 21003[21003] 0000000000000000 [DM_SERVER_I_LAUNCH_MTHDSVR]info: "Launching Method Server succeeded."

2016-09-05T09:11:29.420505 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1, Host Name: content_server_01 :V4 IP)"

2016-09-05T09:11:29.420579 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:29.426518 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1_s, Host Name: content_server_01:V4 IP)"

2016-09-05T09:11:29.429969 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:31.162134 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent master (pid : 21060, session 010095b980000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:31.163095 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21061, session 010095b98000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:32.165584 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21074, session 010095b98000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:33.167476 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21087, session 010095b98000000c) is started sucessfully."
2016-09-05T09:11:34.167673 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Sending Initial Docbroker check-point "

2016-09-05T09:11:34.188560 21003[21003] 0000000000000000 [DM_MQ_I_DAEMON_START]info: "Message queue daemon (pid : 21105, session 010095b980000456) is started sucessfully."
2016-09-05T09:11:34.488373 21102[21102] 010095b980000003 [DM_DOCBROKER_I_PROJECTING]info: "Sending information to Docbroker located on host (content_server_01) with port (1490). Information: (Config(repo1), Proximity(1), Status(Open), Dormancy Status(Active))."
Mon Sep 05 09:12:28 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Version: 7.2.0050.0214 Linux64
Mon Sep 05 09:12:32 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Agent Exec connected to server repo1: [DM_SESSION_I_SESSION_START]info: "Session 010095b980502d02 started for user dmadmin."

 

The migration of the encryption key to the encryption keystore is nearly complete. Two tasks remain: resetting the passwords of the internal Documentum users such as dm_bof_registry, dmc_wdk_preferences_owner and dmc_wdk_presets_owner, and resetting the passwords of the application-specific users (the list created just before stopping the repository), but these will not be described in this blog.

As said before, if the LDAP configuration is required, it is better to remove the LDAP objects before the migration and recreate them now that the repository is running properly with the new encryption mechanisms.

All the steps and information needed to migrate the encryption mechanism to the lockbox have been provided, and I hope this can be useful for you.

 

Cet article Documentum story – Migrate repository cryptography from crypto key to the crypto keystore est apparu en premier sur Blog dbi services.

SQL Server AlwaysOn availability groups and statistic issues on secondaries

Yann Neuhaus - Fri, 2016-11-04 11:23

I would like to share an interesting issue you may face while using SQL Server AlwaysOn availability groups and secondary read-only replicas. For those who use secondary read-only replicas as reporting servers, keep reading, because this post is about update statistics behavior on the secondary replicas, and as you may know, cardinality estimation accuracy is an important part of query performance in this case.

So a couple of days ago, I had an interesting discussion with one of my French MVP friends about an availability group issue he faced at a customer site, related to the above topic. Without going into details yet, he was surprised to see a difference between the primary replica and one secondary in the last update statistics dates as well as in the rows sampled value for specific statistics. The concerned environment runs SQL Server 2014 SP2.

First of all, let's say that having different update statistics dates between a primary and a secondary is part of a normal process. Indeed, changes related to statistics objects are replicated over time from the primary to the secondaries, but it is not uncommon to encounter situations where data on the primary is updated until the update statistics threshold is reached, and auto update statistics is then triggered by SQL Server on the secondary replica by reporting queries. What is more surprising is that this behavior was exacerbated in his specific context with large tables.

Let's demonstrate with an example. I was able to easily reproduce the issue on my environment, and I was also interested in testing the behavior of different versions of SQL Server in response to this specific issue. In fact, I tried to figure out whether the problem concerns only a specific build of SQL Server (SQL Server 2014 SP2 in this case) or all SQL Server versions.

 

blog 108 - 0 - demo archi

 

Let's use the AdventureWorks2012 database with the bigTransactionHistory table, which contains roughly 344 million rows (343,910,073 rows).

The statistics information you will see later in this blog post comes from each secondary, each running a specific SQL Server version (respectively 2012, 2014 and 2016), using the following T-SQL script on each secondary replica:

use AdventureWorks2012;

select 
	object_name(s.object_id) as table_name,
	s.name as stat_name,
	s.is_temporary,
	ds.last_updated,
	ds.modification_counter,
	ds.rows,
	ds.rows_sampled,
	CAST(ds.modification_counter * 100. / ds.rows AS DECIMAL(5,2)) AS modif_ratio,
	ds.steps
from sys.stats as s (nolock)
cross apply sys.dm_db_stats_properties(s.object_id, s.stats_id) as ds
where s.object_id = object_id('dbo.bigTransactionHistory');
go

 

Let's begin with the starting scenario, where I inserted approximately 20% of the initial data into the bigTransactionHistory table, as you may notice below. During the test we will focus only on the idx_cl_big_TransactionHistoryTransactionDate statistic, related to the clustered index on the bigTransactionHistory table.

blog 108 - 1 - stats after modif and before update fullscan

Let's then update the idx_cl_big_TransactionHistoryTransactionDate statistic with FULLSCAN from the primary replica. This operation may be part of a maintenance plan on a daily / monthly basis depending on your scenario. Here is a picture of the statistics information from each secondary:

blog 108 - 2 - stats after modif and after update fullscan

Regarding this first output, we may notice that the modification counter from the sys.dm_db_stats_properties DMF did not drop to zero. To be honest, I expected a value equal to 0 here. In addition, executing a reporting query from each secondary had no effect in this case. The reporting query is as follows; it is designed to specifically use the idx_cl_big_TransactionHistoryTransactionDate statistic.

use AdventureWorks2012;


select count(*)
from dbo.bigTransactionHistory
where TransactionDate between '20060101' and '20080101';
go

 

Let's keep going by inserting a new batch of data (approximately 10% more). After running a new update statistics operation with FULLSCAN from the primary (let's say we are again in a situation where a maintenance plan comes into play), here is the corresponding statistics information output from each secondary:

blog 108 - 3 - stats after insert data and after update fullscan

As expected, the row modification counter value increased, up to 24%, but once again we may notice that running update statistics with FULLSCAN on the primary doesn't correctly reset the row modification counter on the secondary, regardless of the SQL Server version. Let's run the reporting query from each secondary and have a look at the statistics information output:

blog 108 - 4 - stats after insert data and after update fullscan and read data from secondary

Well, it seems that some update statistics work came into play, but surprisingly we get different results between versions. Let's focus first on SQL Server 2012 (first line), where an auto update statistics operation was triggered by SQL Server. Thus the last_updated column value was updated, the concerned statistic is now defined as temporary on the concerned secondary replica, and the rows sampled value is different from the previous step (423928 vs 45774517). This is because SQL Server used a default sampling algorithm in this case, which does not correspond to the one used in the previous step with the FULLSCAN method.

If we move quickly to the last two lines (respectively SQL Server 2014 and 2016), we may notice only one change, which concerns the is_temporary column, and no changes concerning the last update statistics date, the modification counter value or the sampled rows. At this point, I'm not sure I understand the reason. Is it a metadata issue? Is it normal behavior? Well, I will come back and update this section if I get further information.

Let's continue by performing the same tasks as before (insert a batch of data and then update statistics with FULLSCAN from the primary). The statistics output from each secondary is as follows:

 

blog 108 - 7 - stats after update fullscan and 2 insert

I'm sure you are beginning to understand what's happening here. The update statistics with FULLSCAN from the primary replica seems to never reset the modification counter on the secondaries. This is a big problem, because if we execute the reporting query on each secondary we now have a good chance of quickly invalidating what was done by the update statistics operation with FULLSCAN from the primary. In our context, the main concern is the sampled rows value, which can lead to inaccurate cardinality estimations. Let's run the reporting query again from each secondary and have a look at the corresponding statistics information output:

blog 108 - 8 - stats after update fullscan and 2 insert and read data from secondary

Got it! This time, each concerned statistic switched to temporary and the modification counter was reset to 0. Note also that a default sampling was used in this case, in contrast to the previous picture, which concerned update statistics with FULLSCAN. And if I continue this way over and over, I will face the same behavior every time, for all versions of SQL Server since 2012.

Let's summarize the situation: in common scenarios, we would probably not pay attention to this specific issue, because cardinality estimation will not be affected and the statistics touched by reporting queries on the secondaries are orders of magnitude fewer. But my friend encountered a specific scenario with large tables where the issue is magnified.

As a workaround, you may consider excluding the concerned statistic(s) from the auto update statistics algorithm (NORECOMPUTE option). You may also vote for the connect item opened by my friend if it makes sense for you!
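The NORECOMPUTE workaround boils down to rebuilding the statistic with FULLSCAN while opting it out of auto update. A small helper that generates the T-SQL (the table and statistic names below are the ones from this demo; adapt them to your own):

```python
def norecompute_statement(table, statistic):
    # Build the T-SQL that rebuilds one statistic with FULLSCAN and
    # excludes it from the auto update statistics algorithm.
    return ("UPDATE STATISTICS {0} ({1}) "
            "WITH FULLSCAN, NORECOMPUTE;").format(table, statistic)

print(norecompute_statement(
    "dbo.bigTransactionHistory",
    "idx_cl_big_TransactionHistoryTransactionDate"))
```

Keep in mind that NORECOMPUTE shifts all responsibility for freshness onto your maintenance plan, so only apply it to statistics you rebuild regularly.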

Hope it helps!

Cet article SQL Server AlwaysOn availability groups and statistic issues on secondaries est apparu en premier sur Blog dbi services.

Salesforce Says Einstein, I Say Frankenstein

Linda Fishman Hoyle - Fri, 2016-11-04 08:53

A Guest Post by VP Des Cahill, head evangelist for Oracle's CX Cloud Suite

After completing (and recovering from!) my first CX Central at Oracle OpenWorld, I made my way to Moscone Center in San Francisco one more time to check out Dreamforce 2016.

This time I wouldn’t be worried about my slides or prepping for analyst briefings or a video interview with the Cube. Instead, it would be an opportunity to see how the competition runs its event and to listen for the key messages it is sending to the market.

Salesforce always does a good job with event production.

And this year was no different. The DJ playing funk on the first floor of Moscone set a positive tone from the moment I walked in to register. The conference followed a National Parks theme and key customers were identified as “trailblazers.” This theme was carried out to the nth degree in signage, props, t-shirts, cutesy mascots, slide templates, and other production elements.

Salesforce galvanized its presenters around a single, overarching technology message: AI.

I’m always impressed by Salesforce's ability to prominently weave an overall message into the fabric of the event. The key message this year was artificial intelligence, in the form of Einstein, the umbrella brand for Salesforce AI efforts. And yes, there was even an Einstein mascot roaming around during the keynotes.

Cute as the Einstein mascot was, he couldn’t make the message compelling for me.

The AI message fell flat for three reasons:

1.  In some cases, Einstein was simply a re-brand of an old feature. If any feature even remotely smacked of AI, Salesforce slapped the Einstein brand on it.

As a CMO, it’s hard for me to remember a time when marketing automation software wasn’t capable of lead scoring. But it must be new to Salesforce Marketing Cloud, because Salesforce suddenly branded it as Einstein. What? It didn’t have lead scoring before?

Same with Salesforce Commerce Cloud: Salesforce just bought DemandWare in June, but apparently it was built with Einstein even before it was acquired. Because yes, the shopping recommendation engine is powered by Einstein. (And the shopping demo I saw was nothing new.) This transparent attempt to elevate the Einstein brand only served to diminish its value.

You can’t fool customers, Salesforce.

2. There isn’t a lot of intelligence behind Einstein. Intelligence is the ability to acquire and apply knowledge, and Salesforce isn’t providing any new knowledge to customers. Unlike Oracle, Salesforce can only apply its AI to the data you’ve collected in Salesforce apps. What if you just started using its apps? What if you’re a smaller customer without a lot of data?

With Oracle Data Cloud and Adaptive Intelligent Apps, customers can augment existing customer data with rich, third-party data, in order to drive really intelligent offers, actions, and decisions. Einstein cannot do this.

3. Einstein faces a bit of a hype backlash in achieving widespread adoption by Salesforce customers. One analyst I spoke with noted that most Salesforce customers are still uneasy about embracing last year’s Dreamforce hype—Lightning. That same feeling applies to Einstein, and Salesforce didn’t do itself any favors with “everything but the kitchen sink” positioning.

So, what do you get when you take a bunch of existing parts from across your platform, slap them together, and relabel them as something new?

Frankenstein—not Einstein. And it is really scary.

Creating Security Profiles in ODI 12c

Rittman Mead Consulting - Fri, 2016-11-04 05:00
Creating Security Profiles in ODI 12c

As a newcomer to ODI, I enjoy hearing from more seasoned veterans about common situations they encounter on projects. One of these recurring situations (especially if the company has a very small dev team) is a lack of security. I will not discuss how Oracle improved security by using public/private key pairs for Cloud services, external hackers, or any of the buzzwords the media likes to toss about. But I will share with you an easy way to create profiles in ODI to set up a more secure work environment.

Generally speaking, security is neglected because admins, operators or users are not aware of how to set it up, or they find it too limiting and tedious to deal with. Other times you might see the exact opposite, where someone has it so locked down you have to request project permissions by the hour just to get work done. (Pro tip: never let control freaks set up or manage security! Just kidding. Maybe.)

Prior to starting any security profile setup, make sure to sit down and really put some thought into the types of profiles you want to create. Think about the different types of work being done in ODI (developer, operator, etc.) and what level of permission someone may require. Review the built-in generic profiles here. Keep in mind that you will need to set up security in each environment (Dev, Test, QA, Prod and any others you might use) that you want to connect to. No security setup automatically transfers over to other environments, and not all users require access to each environment.

In this tutorial we will take into consideration the following users:

  • DI Projects Developer - Level I
  • Senior BI Project Manager
  • Consultant

We will setup the Security Profile (access) for each user and connect it to the appropriate User for the DEV environment.

NOTE: This tutorial is specific to ODI internal password storage and authentication, not external authentication.

The first step is to decide what type of security profile (access) each user will need.

  • DI Projects Developer - Level I: Entry level DI developer. Should be able to develop in select projects only. Should also have 'view-all' access across the environment
  • Senior BI Project Manager: Full access to all related tasks in Designer, Operator and Topology. Might also have ability to edit or create new users.
  • Consultant: Brought in to assist in developing mappings and to load new data from a recently acquired company
  Ok, now we can begin the setups.

    1. In a work environment you will login using an ADMIN or Security Profile that has the credentials to create security profiles. In our example you are assumed to be logged in as Admin, Training or Supervisor and have the correct access to set the profiles up.
    2. Navigate to the Security tab in ODI.
    3. Creating Security Profiles in ODI 12c
      If you do not see it, go to the very top menu and click Window > ODI Security Navigator (seen below)
      Creating Security Profiles in ODI 12c

      Now we will create the User logins that will be linked to each profile.

      Most of you will already have User logins, just double-click the specific 'User Profile' when logged in under the ADMIN or full access account to edit permissions for the User.

    4. Expand the 'Users' accordion and click 'New User' to open up the properties window
      Creating Security Profiles in ODI 12c
    5. Create a login for Jane Maine, our Level I ETL Developer. Assign her the profiles that will allow strictly regulated access in the Designer Navigator to projects and models but allow her view access for everything. (Review the profile descriptions)

      We see that in order to limit her development access in DEV to specific projects and models in the Designer Navigator, we must use the non-generic profiles NG_DESIGNER and NG_VERSION_ADMIN. We also must include CONNECT so she has the option to connect to DEV.

      Fast Review: An object is a representation of a design-time or run-time artifact handled through Oracle Data Integrator. Examples of objects include agents, projects, models, data stores, scenarios, mappings, and even repositories. An instance is a particular occurrence of an object. For example, the Datawarehouse project is an instance of the Project object. A method is an action that can be performed on an object, such as edit or delete.

      Generic profiles allow access to all methods of all instances of an object.

      Non-generic profiles are not authorized for all methods on the instances, an admin must grant rights on the methods for each instance.

    6. Input Jane's information and assign her a password (abc123) by clicking 'Enter Password'. Make sure that Jane's password will expire in 6 months, forcing her to change it for security purposes. Click 'OK'
    7. Best Practice: Always go in to your account and change any temporary password. See the video on how to do that, here.

      Creating Security Profiles in ODI 12c
      Your screen should now look like this (description is optional - I always add them in):
      Creating Security Profiles in ODI 12c

    8. On the same properties window, click the 'Authorization' tab on the top left side. We are granting very limited access because her projects, as an entry-level developer, are limited. Click 'Projects' and allow her access to all methods within 'Projects'. Select all methods (use the checkmark in the top left of each object) from the following, clicking Save after each object group's methods have been selected:
      • Column
      • Condition
      • Diagram
      • Folder
      • Interface
      • Load Plan
      • Mapping
      • Package
      • Procedure
      • Procedure Command
      • Scenario
      • Scenario Variable

      Your Security Navigator should look similar to this:
      Creating Security Profiles in ODI 12c

    9. Now we create the User and Profile for a recently hired Senior BI Manager named Will Doe. Following the same steps, create the User by expanding (or locating) the Users accordion and clicking New User. Make sure to set the password to expire in 6 months.

    10. Creating Security Profiles in ODI 12c
      Creating Security Profiles in ODI 12c

    11. Unlike the entry level employee, Will Doe needs full access as Senior Manager but he does not need Supervisor access. Check each generic profile (do not check any that start with NG) and click save. Your screen should look similar to the image below.
      Creating Security Profiles in ODI 12c
      Pro Tip: If you aren't sure your security settings are correct, after your new user/profile is saved, expand the 'Objects' and/or 'Instances' (orange boxes on the screenshots above) under the Users name and see what is available.
    12. Now we need to create the consultant's general User and profile. The consultant's password does not need to expire, since we will let the account expire after a month.

    13. Create a new User under the Users accordion. Use the name: 'Consultant', Password: abc123, Notes: Temp consultant for ETL DEV work only.
    14. In this situation, the consultant will need nearly full access, but not total access to everything. Check all of the generic profiles EXCEPT the version admin profile. Select NG VERSION ADMIN to allow selective version access. Your screen should look similar to the image below.
      Creating Security Profiles in ODI 12c

    15. Click on the 'Authorizations' tab on the top left, scroll down in the objects list, select 'Version' and check only Compare, Restore and View. Click Save. Your screen should look similar to the image below.

    16. Creating Security Profiles in ODI 12c

      Now we test our user settings. Disconnect from ODI and log in using each User you created. Look at the limitations for each user.

    17. This screenshot shows how Jane Maine can only access Projects and Load Plans, but not any of the models. What are differences you see for your profiles?

    18. Creating Security Profiles in ODI 12c

      There are so many options for creating secure Users and Profiles within ODI that allow the appropriate amount of access with maximum security, and in fact it's fairly easy. You can block out high-level access, such as the entire Operator Navigator or Designer Navigator, all the way down to granular security, where you can block out very specific Methods and the Objects associated with them.

      A word to the wise: it is strongly suggested that you use a generic SUPERVISOR or ADMIN account with full ODI access only for creating users and profiles, changing passwords, etc. Create your own personal user/profile to perform daily work. The reason for this is to know who specifically is doing what. If a user is assigned ADMIN (or something generic) then there is no way to tell who used the login.

      Other suggested settings to try out: you can create Users and Profiles for admin purposes, including 'Designer Navigator only' access, 'Topology Navigator only' access, 'Operator Navigator only' access, and variations where you can access only the Designer Navigator and Topology Navigator, but not the Operator tab.
      ------------
      Make sure to check out these videos and subscribe.

Categories: BI & Warehousing

Sql scripts

Tom Kyte - Fri, 2016-11-04 04:26
Hi, can we call SQL scripts containing procedures and packages without SQL*Plus? As of now we are running some scripts through the SQL*Plus command line. So if we want to run these scripts from a system where they don't have SQL*Plus installed, is t...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator