Feed aggregator

5 reasons you should use dashboards in the marketing industry

Nilesh Jethwa - Tue, 2017-03-28 13:20

What's the first thing you look at when you flip on your computer screen in the morning? What's the item you peek at on your phone even when your dinner partner wishes you'd leave work behind for the evening? For most people that addictive checking-tendency applies to their emails, but the truly business-savvy marketers are addicted to looking at their dashboards!

And why is that? It's because dashboards provide a true and comprehensive look at everything important that is happening in your business.

With a well-constructed dashboard, you can see exactly where things stand with your business, literally at a glance. Let us just break down the benefits for you!

Read more at http://www.infocaptor.com/dashboard/5-benefits-of-dashboards-in-the-marketing-industry

How to Patch an Exadata (Part 1) – Introduction and Prerequisites

Pythian Group - Tue, 2017-03-28 12:30

Once you have installed your new Exadata machine, a time will come when you'll be asked:

"Shouldn't we patch the Exadata?"

And the answer is "yes, definitely".

 
Indeed, Oracle releases huge (~10 GB) "Quarterly Full Stack" patches (aka Bundles) every quarter (for example: Patch 24436624 – Quarterly Full Stack Download For Oracle Exadata (Oct 2016 – 12.1.0.2)); these Bundles contain all the patches for all the components that make up an Exadata. You will need (almost :)) nothing else to patch your whole Exadata.
 
Even if it looks like a tough operation at first sight, it is not that bad, and this blog's aim is to clearly describe every step to make it easier for all of us. Let's start with a preview of the order we will proceed in and the tools we will use.
As it is quite a long odyssey, I will split this blog into different parts, which also reflect the logical order in which to patch all the components:
 
0/ An advice

1/ General Information

2/ Some prerequisites it is worth doing before the maintenance

3/ The patching procedure

3.1/ Patching the cells (aka Storage servers)
3.2/ Patching the IB switches
3.3/ Patching the Database servers (aka Compute Nodes)
3.4/ Patching the Grid Infrastructure
3.5/ Patching the databases ORACLE_HOMEs

4/ The Rollback procedure

4.1/ Cell Rollback
4.2/ DB nodes Rollback
4.3/ IB Switches Rollback

5/ Troubleshooting

5.1/ Cell patching issue
5.2/ CRS does not restart issue
5.3/ A procedure to add instances to a database
5.4/ OPatch resume

6/ Timing

0/ An advice

First of all, please keep this advice firmly in mind:

Do NOT continue to the next step before a failed step is properly resolved.

Indeed, everything that needs to be redundant is redundant, and it is supported to run different versions between servers. In the MOS note "Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)", we can read that:
 

It is supported to run different Exadata versions between servers. For example, some storage servers may run 11.2.2.4.2 while others run 11.2.3.1.1, or all storage servers may run 11.2.3.1.1 while database servers run 11.2.2.4.2. However, it is highly recommended that this be only a temporary configuration that exists for the purpose and duration of rolling upgrade.

 
So if, when patching your cells, one cell does not reboot, stop here: do not continue and do not force patch the next one. Everything will still work fine and in a supported manner with one cell down (I have had this happen in production and no user noticed anything), but that will most likely not be the case with 2 cells down. If this kind of issue happens, have a look at the troubleshooting section of this blog and open a MOS Sev 1.

1/ General Information

Some information you need to know before starting to patch your Exadata:

  • It is better to have a basic understanding of what an Exadata is before jumping into this patching procedure
  • This procedure does not apply to an ODA (Oracle Database Appliance)
  • I will use the /patches/OCT2016_bundle_patch FS to store the Bundle in the examples of this blog
  • I use the term "DB node" here; it means "database node", aka "Compute node": the nodes where the Grid Infrastructure and the databases run. I will also use db01 for database node number 1, usually named db01
  • I use the word "cell", aka "storage server", for the servers that manage your storage. I will also use cel01 for storage server number 1, usually named cel01
  • It is good to have the screen utility installed; if not, use nohup (see the sketch after this list)
  • Almost all of the procedure will be executed as root
  • I will patch the IB Switches from the DB node 1 server
  • I will patch the cells from the DB node 1 server
  • I will patch the DB nodes from the cel01 server
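A quick sketch of both options (the session name, the patchmgr invocation and the output file are just examples):

# Start a named screen session so a dropped SSH connection does not kill a running patchmgr
screen -S exa_patching
# ... and reattach to it after a disconnection
screen -r exa_patching

# Alternative if screen is not available: run patchmgr under nohup
nohup ./patchmgr -cells ~/cell_group -patch -rolling > /tmp/patchmgr.out 2>&1 &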

 
 
 

2/ Some prerequisites it is worth doing before the maintenance

2.1/ Download and unzip the Bundle

Review the Exadata general note (Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)) to find the latest Bundle, then download and unzip it; be sure that every directory is owned by oracle:dba to avoid any issues later:

/patches/OCT2016_bundle_patch >
oracle@myclusterdb01) ls -ltr
total 9609228
-rw-r--r-- 1 oracle oinstall 560430690 Nov 16 18:24 p24436624_121020_Linux-x86-64_10of10.zip
-rw-r--r-- 1 oracle oinstall 1030496554 Nov 16 18:26 p24436624_121020_Linux-x86-64_1of10.zip
-rw-r--r-- 1 oracle oinstall 1032681260 Nov 16 18:27 p24436624_121020_Linux-x86-64_2of10.zip
-rw-r--r-- 1 oracle oinstall 1037111138 Nov 16 18:29 p24436624_121020_Linux-x86-64_3of10.zip
-rw-r--r-- 1 oracle oinstall 1037009057 Nov 16 18:31 p24436624_121020_Linux-x86-64_4of10.zip
-rw-r--r-- 1 oracle oinstall 1037185003 Nov 16 18:33 p24436624_121020_Linux-x86-64_5of10.zip
-rw-r--r-- 1 oracle oinstall 1026218494 Nov 16 18:35 p24436624_121020_Linux-x86-64_6of10.zip
-rw-r--r-- 1 oracle oinstall 1026514887 Nov 16 18:36 p24436624_121020_Linux-x86-64_7of10.zip
-rw-r--r-- 1 oracle oinstall 1026523343 Nov 16 18:39 p24436624_121020_Linux-x86-64_8of10.zip
-rw-r--r-- 1 oracle oinstall 1025677014 Nov 16 18:41 p24436624_121020_Linux-x86-64_9of10.zip

/patches/OCT2016_bundle_patch >
oracle@myclusterdb01) for I in `ls p24436624_121020_Linux-x86-64*f10.zip`
do
unzip $I
done
Archive: p24436624_121020_Linux-x86-64_10of10.zip
 inflating: 24436624.tar.splitaj
...
Archive: p24436624_121020_Linux-x86-64_9of10.zip
 inflating: 24436624.tar.splitai

/patches/OCT2016_bundle_patch >
oracle@myclusterdb01) cat *.tar.* | tar -xvf -
24436624/
24436624/automation/
24436624/automation/bp1-out-of-place-switchback.xml
24436624/automation/bp1-auto-inplace-rolling-automation.xml

...
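If some directories end up owned by another user after the download and unzip, a recursive chown fixes the ownership; this is a sketch using the path from this example:

# Make sure the whole Bundle tree is owned by oracle:dba
chown -R oracle:dba /patches/OCT2016_bundle_patch
ls -ld /patches/OCT2016_bundle_patch/24436624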

 
 

2.2/ SSH keys

For this step, if you are not familiar with the dbs_group, cell_group, etc. files, here is how to create them, as described in this post (look for "dbs_group" in the post).

[root@myclusterdb01 ~]# ibhosts | sed s'/"//' | grep db | awk '{print $6}' | sort > /root/dbs_group
[root@myclusterdb01 ~]# ibhosts | sed s'/"//' | grep cel | awk '{print $6}' | sort > /root/cell_group
[root@myclusterdb01 ~]# cat /root/dbs_group ~/cell_group > /root/all_group
[root@myclusterdb01 ~]# ibswitches | awk '{print $10}' | sort > /root/ib_group
[root@myclusterdb01 ~]#

We need a few SSH keys deployed in order to ease the patch application:

  • root ssh keys deployed from the db01 server to the IB Switches (you will have to enter the root password once for each IB Switch)
[root@myclusterdb01 ~]# cat ~/ib_group
myclustersw-ib2
myclustersw-ib3
[root@myclusterdb01 ~]# dcli -g ~/ib_group -l root -k -s '-o StrictHostKeyChecking=no'
root@myclustersw-ib3's password:
root@myclustersw-ib2's password:
myclustersw-ib2: ssh key added
myclustersw-ib3: ssh key added
[root@myclusterdb01 ~]#
  • root ssh keys deployed from the cel01 server to all the database nodes (you will have to enter the root password once for each database server)
[root@myclustercel01 ~]# cat ~/dbs_group
myclusterdb01
myclusterdb02
myclusterdb03
myclusterdb04
[root@myclustercel01 ~]# dcli -g ~/dbs_group -l root -k -s '-o StrictHostKeyChecking=no'
root@myclusterdb01's password:
root@myclusterdb03's password:
root@myclusterdb04's password:
root@myclusterdb02's password:
myclusterdb01: ssh key added
myclusterdb02: ssh key added
myclusterdb03: ssh key added
myclusterdb04: ssh key added
[root@myclustercel01 ~]#
  • root ssh keys deployed from the db01 server to all the cells (you will have to enter the root password once for each cell)
[root@myclusterdb01 ~]# dcli -g ~/cell_group -l root hostname
myclustercel01: myclustercel01.mydomain.com
myclustercel02: myclustercel02.mydomain.com
myclustercel03: myclustercel03.mydomain.com
myclustercel04: myclustercel04.mydomain.com
[root@myclusterdb01 ~]# dcli -g ~/cell_group -l root -k -s '-o StrictHostKeyChecking=no'
root@myclustercel04's password:
...
root@myclustercel03's password:
myclustercel01: ssh key added
...
myclustercel04: ssh key added
[root@myclusterdb01 ~]#
2.3/ Upgrade opatch

It is highly recommended to upgrade opatch before any patching activity and this Bundle is not an exception. Please find the procedure to quickly upgrade opatch with dcli in this post.

Please note that upgrading opatch will also allow you to be ocm.rsp-free!
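The idea looks like the following sketch (the OPatch zip name and the ORACLE_HOME path are assumptions to adapt to your environment):

# Copy the latest OPatch (patch 6880880) to every DB node
dcli -g ~/dbs_group -l root -f /tmp/p6880880_121010_Linux-x86-64.zip -d /tmp
# Unzip it over the existing OPatch directory of the target ORACLE_HOME as the oracle user
dcli -g ~/dbs_group -l root 'su - oracle -c "unzip -qo /tmp/p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0.2/dbhome_1"'
# Verify the new opatch version on all nodes
dcli -g ~/dbs_group -l root 'su - oracle -c "/u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch version"'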

 
 

2.4/ Run the prechecks

It is very important to run these prechecks and review their outputs carefully. They have to be 100% successful to ensure a smooth application of the patches.

  • Cell patching prechecks (launch them from the DB Node 1 as you will patch them from here)
[root@myclusterdb01 ~]# cd /patches/OCT2016_bundle_patch/24436624/Infrastructure/12.1.2.3.3/ExadataStorageServer_InfiniBandSwitch/patch_12.1.2.3.3.161013/
[root@myclusterdb01 ~]# ./patchmgr -cells ~/cell_group -patch_check_prereq -rolling

 
 

  • DB Nodes prechecks (launch them from the cel01 server as you will patch them from here)

As we will use the cel01 server to patch the database servers, we first need to copy patchmgr and the ISO file to this server:

[root@myclusterdb01 ~]#  scp /patches/OCT2016_bundle_patch/24436624/Infrastructure/SoftwareMaintenanceTools/DBServerPatch/5.161014/p21634633_121233_Linux-x86-64.zip root@myclustercel01:/tmp/.                    # This is patchmgr
[root@myclusterdb01 ~]#  scp /patches/OCT2016_bundle_patch/24436624/Infrastructure/12.1.2.3.3/ExadataDatabaseServer_OL6/p24669306_121233_Linux-x86-64.zip root@myclustercel01:/tmp/.                               # This is the ISO file, do NOT unzip it
[root@myclusterdb01 ~]#  ssh root@myclustercel01
[root@myclustercel01 ~]#  cd /tmp
[root@myclustercel01 ~]#  nohup unzip p21634633_121233_Linux-x86-64.zip &
[root@myclustercel01 ~]# cd /tmp/dbserver_patch_5.161014
[root@myclustercel01 dbserver_patch_5.161014]# ./patchmgr -dbnodes ~/dbs_group -precheck -iso_repo /tmp/p24669306_121233_Linux-x86-64.zip -target_version 12.1.2.3.3.161013
[root@myclustercel01 dbserver_patch_5.161014]#

Note: if you have some NFS shares mounted, you will get some error messages; you can ignore them at this stage, as we will unmount the NFS shares before patching the DB nodes (a sketch follows below).
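To spot and unmount those NFS shares on every DB node beforehand, something like this sketch works (the mount point is a placeholder):

# List the NFS mounts on every DB node
dcli -g ~/dbs_group -l root "mount -t nfs"
# Unmount a given share everywhere before patching the DB nodes
dcli -g ~/dbs_group -l root "umount /mnt/my_nfs_share"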
 
 

  • IB Switches prechecks (launch them from the DB Node 1 as you will patch them from here)
[root@myclusterdb01]# cd /patches/OCT2016_bundle_patch/24436624/Infrastructure/12.1.2.3.3/ExadataStorageServer_InfiniBandSwitch/patch_12.1.2.3.3.161013/
[root@myclusterdb01 patch_12.1.2.3.3.161013]# ./patchmgr -ibswitches ~/ib_group -upgrade -ibswitch_precheck

 
 

  • Grid Infrastructure prechecks
[root@myclusterdb01]# . oraenv <<< +ASM1
[root@myclusterdb01]# $ORACLE_HOME/OPatch/opatchauto apply /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103 -oh /u01/app/12.1.0.2/grid -analyze
[root@myclusterdb01]#

Notes :

  • You will most likely see some warnings here; check the logfiles. The warnings will probably be due to some patches being rolled back because they are no longer needed.

 
 

[root@myclusterdb01]# $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

With the following /tmp/patch_list_gihome.txt file (check the README, as the patch numbers change with the versions):

[root@myclusterdb01]#cat /tmp/patch_list_gihome.txt
/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/21436941
/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24007012
/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24846605
/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24340679
[root@myclusterdb01]#

 
 

  • Database patch prechecks
[oracle@myclusterdb01]$ . oraenv <<< A_DATABASE_WITH_THE_ORACLE_HOME_YOU_WANT_TO_PATCH
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24340679
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24846605
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt

The file /tmp/patch_list_dbhome.txt contains (check the README, as the patch numbers change depending on the versions):

/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24340679
/patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018DBBP/24448103/24846605

 
 

  • OJVM prechecks
[oracle@myclusterdb01]$ cd /patches/OCT2016_bundle_patch/24436624/Database/12.1.0.2.0/12.1.0.2.161018OJVMPSU/24315824
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
[oracle@myclusterdb01]$

  • Do an lsinventory -all_nodes before patching and save the output somewhere:

[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch lsinventory -all_nodes

 
 

  • Check disk_repair_time and set it to 24h

Oracle recommends setting this parameter to 8h. As we have had issues in the past with a very long cell patching, we now set this parameter to 24h, as Oracle recommended to us.
Please note that this prerequisite is only needed for a rolling patch application.

SQL> select dg.name as diskgroup, a.name as attribute, a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number=a.group_number and (a.name like '%repair_time' or a.name = 'compatible.asm');

DISKGROUP            ATTRIBUTE                                VALUE
-------------------- ---------------------------------------- ----------------------------------------
DATA                 disk_repair_time                         3.6h
DATA                 compatible.asm                           11.2.0.2.0
DBFS_DG              disk_repair_time                         3.6h
DBFS_DG              compatible.asm                           11.2.0.2.0
RECO                 disk_repair_time                         3.6h
RECO                 compatible.asm                           11.2.0.2.0

6 rows selected.

SQL> connect / as sysasm
Connected.
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '24h' ;

Diskgroup altered.

SQL> ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'disk_repair_time' = '24h' ;

Diskgroup altered.

SQL> ALTER DISKGROUP RECO SET ATTRIBUTE 'disk_repair_time' = '24h' ;

Diskgroup altered.

SQL>

 
If any of these prechecks reports a problem, resolve it before heading to the next steps.

Now that everything is downloaded, unzipped and updated, we can safely jump to the patching procedure in part 2!

Quick links to Part 1 / Part 2 / Part 3 / Part 4 / Part 5 / Part 6

Categories: DBA Blogs

Oracle RAC 12.1 – lsnodes exited with code 9

Amardeep Sidhu - Tue, 2017-03-28 11:31

I was trying to do a 2 node RAC setup on Solaris 11.3 where Oracle Solaris Cluster 4.3 was already configured. The installer was running, but the Cluster Node Information screen was not listing the cluster nodes.

The install log shows this:

INFO: Checking cluster configuration details

INFO: Found Vendor Clusterware. Fetching Cluster Configuration

INFO: Executing [/tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes]

with environment variables {TERM=xterm, LC_COLLATE=, SHLVL=3, JAVA_HOME=, XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt, SSH_CLIENT=172.16.64.55 56370 22, LC_NUMERIC=, LC_MESSAGES=, MAIL=/var/mail/oracle, PWD=/export/software/grid/grid, XTERM_VERSION=XTerm(320), WINDOWID=2097165, LOGNAME=oracle, _=*50727*/export/software/grid/grid/install/.oui, NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat, SSH_CONNECTION=172.16.64.55 56370 172.16.72.18 22, OLDPWD=/export/oracle, LC_CTYPE=, CLASSPATH=, PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install, LC_ALL=, DISPLAY=localhost:10.0, LC_MONETARY=, USER=oracle, HOME=/export/oracle, XTERM_SHELL=/bin/bash, XAUTHORITY=/tmp/ssh-xauth-mlq21a/xauthfile, A__z=”*SHLVL, XTERM_LOCALE=en_US.UTF-8, TZ=localtime, LC_TIME=, LANG=en_US.UTF-8}

INFO: Starting Output Reader Threads for process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes

INFO: The process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes exited with code 9

So we can see the problem. lsnodes is not able to list the nodes. Let us try to run that command manually.

-bash-4.1$ export PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install

-bash-4.1$ /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes

ld.so.1: lsnodes: fatal: libskgxn2.so: open failed: No such file or directory

Killed

-bash-4.1$

So it looks like it is not able to find the library libskgxn2.so. If we do a find for this file name, we can see that it is present at /usr/cluster/lib/sparcv9/libskgxn2.so.

Some googling and MOS searches revealed that the installer expects the library to be present under /opt/ORCLcluster/lib, and this directory doesn't exist here. As a workaround we can create the directory manually and create a symbolic link to libskgxn2.so, as shown below.
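The workaround looks like this sketch, based on the paths found above:

# Locate the library shipped with Solaris Cluster
find /usr/cluster -name libskgxn2.so
# Create the directory the installer expects and link the library into it
mkdir -p /opt/ORCLcluster/lib
ln -s /usr/cluster/lib/sparcv9/libskgxn2.so /opt/ORCLcluster/lib/libskgxn2.so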

The lsnodes command worked fine after this workaround, and the installer also showed both nodes listed.

Categories: BI & Warehousing

Group by Rollup

Tom Kyte - Tue, 2017-03-28 10:46
Good afternoon Tom! I had a difficulty with the query with intermediate totals (group would rollup). You can tell how to apply the having condition to the intermediate results. having sum(Proc)<7 <code>with query1 as ( select '123' schet,...
Categories: DBA Blogs

ALTER SESSION ENABLE PARALLEL DML

Tom Kyte - Tue, 2017-03-28 10:46
Hi, I need to execute MERGE statement on certain tables on Archival DB., statement looks like MERGE /*+ PARALLEL(tb_test,pin_max_process) */ INTO tb_test@db_link tgt USING ( SELECT /*+ PARALLEL(tb_test,pin_max_process) */ * FROM tb_test WHERE ...
Categories: DBA Blogs

Using SPM

Tom Kyte - Tue, 2017-03-28 10:46
Hi, It would be very helpful for me if you share the detailed steps for forcing a query to select a particular hash plan value whenever the same is executed using SPM(Sequel Performance Management). I would like to know the tables and query which ...
Categories: DBA Blogs

Find out which SQL or PL/SQL caused row lock contention days ago

Tom Kyte - Tue, 2017-03-28 10:46
Hi Team, As we known 'enq: TX - row lock contention' often happened during a DBA lifetime, and Tom used to say that 'The number one cause of deadlocks in the Oracle database, in my experience, is unindexed foreign keys. The number two cause is bitm...
Categories: DBA Blogs

How are dirty buffers written to the datafiles?

Tom Kyte - Tue, 2017-03-28 10:46
Hello, I have a question regarding how dirty buffers are written to the datafiles on disk. Most of the cases the db blocks are 8K in size, so suppose we execute a update which just modifies only few bytes in the entire 8K block. Ex will be a colum...
Categories: DBA Blogs

Physical writes for delete vs update statement.

Tom Kyte - Tue, 2017-03-28 10:46
Hello, The DBWR will write out the whole block even if only few bytes have changed following an DML statement. Considering the db block size of 8K and I have a table of just 7 columns with row size small enough to fit into a single block. If I exe...
Categories: DBA Blogs

golden gate

Tom Kyte - Tue, 2017-03-28 10:46
Hi , As we all aware that to implement golden gate replication ,minimal supplemental logging should be enabled at source db end. 1)Why we need to enable supplemental logging as redo contains both undo and redo vector details. 2)What is the u...
Categories: DBA Blogs

Desert Island DBA

Tom Kyte - Tue, 2017-03-28 10:46
Hi Guys, I've just re-read oracle insights (Tales of the oak table) and want to get your take on something in the book. I believe it's in the section on Compulsive Tuning Disorder (CTD) where the author says that DBA's are taken to a desert islan...
Categories: DBA Blogs

Monitor audit_file_dest !

Laurent Schneider - Tue, 2017-03-28 09:25

Until 11.2, audit_file_dest used to remain small with default settings and a reasonably sized and active database. Suddenly, in 12c, you will sooner or later get ORA-09925: Unable to create audit trail file.

At that point, no more connections to the database are possible; it is a complete loss of service.

Why suddenly in 12c? This is because the default for audit_sys_operations changed to true. In 11g, you used to get a 1K file each time you connected as sysdba. So a few thousand sysdba connections a week, a few megabytes, no worries.


Mon Mar 27 14:08:01 2017 +02:00
LENGTH : '155'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'

Suddenly in 12c, you get plenty of files that are many Mb in size. For instance, for AUTOTASK jobs, every single select is dumped to the filesystem. A single weekend of a quiet database may generate 1Gb of *.aud files from DBMS_SCHEDULER.

Those DB001_j000_12345_20170327140802123456789.aud files are highly useless and annoying.


LENGTH : '641'
ACTION :[490] 'select /*+  no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl no_substrb_pad  */ substrb(dump("PERIOD_END_TIME",16,0
,64),1,240) val,
                      rowidtochar(rowid) rwid from "SYS"."WRP$_REPORTS_TIME_BANDS" t where rowid in (chartorowid('AABJ58AADAAAMsrAAA'),chartorowid('AABJ58AADAAAMsrAAB'),chartorowid('AABJ58AADAAAMsrAAC'),chartorowid('AABJ58A
ADAAAMssAAA')) order by "PERIOD_END_TIME"'
DATABASE USER:[3] 'SYS'
PRIVILEGE :[4] 'NONE'
CLIENT USER:[0] ''
CLIENT TERMINAL:[7] 'UNKNOWN'
STATUS:[1] '0'

Once your audit_file_dest gets full, your database stops, so better delete those *_j00*_* and *_m00*_* files quickly enough!
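A simple cleanup sketch, to be adapted and scheduled (the adump path and the 7-day retention are assumptions):

# Purge the scheduler-job and MMON-slave audit files older than 7 days
find /u01/app/oracle/admin/DB001/adump -name "*_j00*_*.aud" -mtime +7 -delete
find /u01/app/oracle/admin/DB001/adump -name "*_m00*_*.aud" -mtime +7 -delete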

Gartner Peer Insight: Please Write a Review!

WebCenter Team - Tue, 2017-03-28 08:55

Have you deployed an Oracle solution in the past 2 - 3 years? 

If so, why not share your experience and knowledge with others and write a review!

Did you know B2B technology purchase decisions are highly influenced by peers? 

  • In fact, peer input was ranked #1 when making a technology purchase decision by 85% of the respondents to an independent study.
  • By sharing your experience with Oracle products and services, not only can you provide valuable feedback to us, you can also help your peers in their technology decisions. 
So please go to the website below and write a review.  Thank you.


Vertically scale your PostgreSQL infrastructure with pgpool – 3 – Adding another standby

Yann Neuhaus - Tue, 2017-03-28 08:52

In the first and second posts of this series we did the basic pgpool setup, including the watchdog configuration, and then built a simple setup to automatically fail over from a PostgreSQL master instance in case it goes down for any reason. In addition, we told pgpool how an old master instance can be reconfigured as a new standby instance that follows the new master. In this post we'll add another standby instance and then teach pgpool how a standby can be made aware of a new master when the master fails. Sounds interesting? Let's go…

As a reminder, this is how the system looks right now:

pgpool-architecture

What we want to have is:

pgpool-extended

The idea behind a third node is that we will always have at least one standby server up and running in case the master node goes down. What do we need to do to bring in another instance? Once the operating system is up and running and PostgreSQL is installed, it is actually quite easy. As a first step, let's create the new standby database on the third node using exactly the same layout as on the other nodes:

postgres@pgpool3:/home/postgres/ [pg962] mkdir -p /u02/pgdata/PG1
postgres@pgpool3:/home/postgres/ [pg962] cd /u02/pgdata/PG1
postgres@pgpool3:/u02/pgdata/PG1/ [pg962] pg_basebackup -h 192.168.22.34 -x -D /u02/pgdata/PG1/
postgres@pgpool3:/u02/pgdata/PG1/ [pg962] echo "standby_mode = 'on'
primary_conninfo = 'host=pgpool1 user=postgres'
primary_slot_name = 'standby2'
recovery_target_timeline = 'latest'
trigger_file='/u02/pgdata/PG1/failover_trigger'" > recovery.conf
postgres@pgpool3:/u02/pgdata/PG1/ [pg962] psql -h pgpool1 -c "select * from pg_create_physical_replication_slot('standby2')" postgres
postgres@pgpool3:/u02/pgdata/PG1/ [PG1] pg_ctl -D /u02/pgdata/PG1/ start

Now we have one master instance with two standby instances attached. Let's configure the third instance into the pool (the configuration change needs to be done on both pgpool nodes, of course). The lines we need to add to pgpool.conf are:

backend_hostname2 = '192.168.22.40'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/u02/pgdata/PG1'
backend_flag2 = 'ALLOW_TO_FAILOVER'

Reload pgpool (a stop is not necessary) and check the current status:

[postgres@centos7_pgpool_m1 etc]$ pgpool reload
[postgres@centos7_pgpool_m1 etc]$ psql -h 192.168.22.38 -c "show pool_nodes" postgres
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.22.34 | 5432 | up     | 0.333333  | primary | 0          | true              | 0
 1       | 192.168.22.35 | 5432 | up     | 0.333333  | standby | 0          | false             | 0
 2       | 192.168.22.40 | 5432 | up     | 0.333333  | standby | 0          | false             | 0
(3 rows)

We have a second standby database which is used to load balance read requests. In case the master fails, what we now want is for one of the standby instances to be promoted and for the remaining standby instance to be reconfigured to follow the new master. What do we need to do?

As we are using replication slots in this setup, we need a way to make the failover scripts independent of the name of the replication slot. The first script we need to change is "promote.sh" on all the PostgreSQL nodes, because currently there is a hard-coded request to create the replication slot:

#!/bin/bash
PGDATA="/u02/pgdata/PG1"
PATH="/u01/app/postgres/product/96/db_2/bin/:$PATH"
export PATH PGDATA
pg_ctl promote -D ${PGDATA} >> /var/tmp/failover.log
psql -c "select * from pg_create_physical_replication_slot('standby1')" postgres >> /var/tmp/failover.log

The easiest way to do this is to create as many replication slots as you plan to add standby instances, e.g.:

#!/bin/bash
PGDATA="/u02/pgdata/PG1"
PATH="/u01/app/postgres/product/96/db_2/bin/:$PATH"
export PATH PGDATA
pg_ctl promote -D ${PGDATA} >> /var/tmp/failover.log
psql -c "select * from pg_create_physical_replication_slot('standby1')" postgres >> /var/tmp/failover.log
psql -c "select * from pg_create_physical_replication_slot('standby2')" postgres >> /var/tmp/failover.log
psql -c "select * from pg_create_physical_replication_slot('standby3')" postgres >> /var/tmp/failover.log
psql -c "select * from pg_create_physical_replication_slot('standby4')" postgres >> /var/tmp/failover.log

Of course this is not a good way to do it, as you would need to adjust the script every time the number of standby instances changes. A better way is to centrally manage the number of standby instances and the mapping of the standby instances to their replication slots in a configuration file in the $PGDATA directory of each PostgreSQL node and in the HOME directory of the postgres user on each pgpool node:

postgres@pgpool1:/u02/pgdata/PG1/ [PG1] cat pgpool_local.conf 
# the total amount of instances that
# participate in this configuration
INSTANCE_COUNT=3
# the mapping of the hostname to
# to the replication slot it uses for
# the PostgreSQL instance it is running
# in recovery mode
192.168.22.34=standby1
192.168.22.35=standby2
192.168.22.40=standby3

Having this, we can adjust the promote.sh script.
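Here is a minimal sketch of what the adjusted promote.sh could look like (a reconstruction based on the description below, assuming each node can look up its own entry in pgpool_local.conf via its IP address):

#!/bin/bash
PGDATA="/u02/pgdata/PG1"
PATH="/u01/app/postgres/product/96/db_2/bin/:$PATH"
export PATH PGDATA
CONF="/home/postgres/pgpool_local.conf"
# assumption: the node finds its own replication slot via its IP address
MY_ADDR=$(hostname -i)
MY_SLOT=$(grep "^${MY_ADDR}=" ${CONF} | awk -F '=' '{print $2}')
# promote this instance to be the new master
pg_ctl promote -D ${PGDATA} >> /var/tmp/failover.log
# create a replication slot for every node listed in the configuration ...
for SLOT in $(grep "^[0-9]" ${CONF} | awk -F '=' '{print $2}'); do
    psql -c "select * from pg_create_physical_replication_slot('${SLOT}')" postgres >> /var/tmp/failover.log 2>&1
done
# ... and drop the slot this node used while it was a standby
psql -c "select pg_drop_replication_slot('${MY_SLOT}')" postgres >> /var/tmp/failover.log 2>&1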

The script will now create all the replication slots on a promoted instance and, in addition, drop the replication slot it used before being promoted. What else do we need? As we now have a third instance in the configuration, there is another point to take care of: when the master fails, a new standby is promoted, so far so good. But in addition we want the second standby to follow the new master automatically, don't we? For this we need to tell pgpool to call another script which is executed on the active pgpool node after failover:

[postgres@centos7_pgpool_m1 ~]$ grep follow /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
follow_master_command = '/home/postgres/follow_new_master.sh "%h" "%H"' 

This script will be executed when there is a failover (all pgpool nodes need to have it):

#!/bin/sh
set -x
master_node_host_name=$2
detached_node_host_name=$1
tmp=/tmp/mytemp$$
trap "rm -f $tmp" 0 1 2 3 15
PGDATA="/u02/pgdata/PG1"
PATH="/u01/app/postgres/product/96/db_2/bin/:$PATH"
RECOVERY_NODE_REPLICATION_SLOT=`cat /home/postgres/pgpool_local.conf | grep ${detached_node_host_name} | awk -F '=' '{print $2}'`
export PATH PGDATA
# make sure the instance is down
ssh -T $detached_node_host_name /home/postgres/stop_instance.sh
cat > $tmp <<EOF
standby_mode = 'on'
primary_conninfo = 'host=$master_node_host_name user=postgres'
primary_slot_name = '${RECOVERY_NODE_REPLICATION_SLOT}'
recovery_target_timeline = 'latest'
trigger_file='/u02/pgdata/PG1/failover_trigger'
EOF
scp $tmp $detached_node_host_name:$PGDATA/recovery.conf
ssh ${detached_node_host_name} /home/postgres/start_instance.sh
psql -c "select 'done'" postgres

So, this is the status now:

[postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38 -c "show pool_nodes" postgres
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.22.34 | 5432 | up     | 0.333333  | primary | 4          | true              | 0
 1       | 192.168.22.35 | 5432 | up     | 0.333333  | standby | 2          | false             | 0
 2       | 192.168.22.40 | 5432 | up     | 0.333333  | standby | 0          | false             | 0
(3 rows)

Lets shutdown the primary and see what happens:

postgres@pgpool1:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/PG1/ stop -m immediate
waiting for server to shut down.... done
server stopped

Pgpool is telling this:

[postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38 -c "show pool_nodes" postgres
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.22.34 | 5432 | down   | 0.333333  | standby | 4          | false             | 0
 1       | 192.168.22.35 | 5432 | up     | 0.333333  | primary | 4          | true              | 0
 2       | 192.168.22.40 | 5432 | down   | 0.333333  | standby | 0          | false             | 0
(3 rows)

Re-attach:

[postgres@centos7_pgpool_m1 ~]$ pcp_attach_node -w -n 0
pcp_attach_node -- Command Successful
[postgres@centos7_pgpool_m1 ~]$ pcp_attach_node -w -n 2
pcp_attach_node -- Command Successful
[postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38 -c "show pool_nodes" postgres
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.22.34 | 5432 | up     | 0.333333  | standby | 4          | false             | 0
 1       | 192.168.22.35 | 5432 | up     | 0.333333  | primary | 4          | true              | 0
 2       | 192.168.22.40 | 5432 | up     | 0.333333  | standby | 0          | false             | 0
(3 rows)

Perfect. The only pain point is that we need to manually re-attach the nodes; everything else is automated. Luckily, there is a way to get around this: as we are on the pgpool nodes when the script is executed, we can just use pcp_attach_node at the end of the follow_new_master.sh script (and pass the node id %d into the script):

[postgres@centos7_pgpool_m1 ~]$ grep follow /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
follow_master_command = '/home/postgres/follow_new_master.sh "%h" "%H" %d' 
[postgres@centos7_pgpool_m1 ~]$ cat follow_new_master.sh 
#!/bin/sh
set -x
master_node_host_name=$2
detached_node_host_name=$1
detached_node_id=$3
tmp=/tmp/mytemp$$
trap "rm -f $tmp" 0 1 2 3 15
PGDATA="/u02/pgdata/PG1"
PATH="/u01/app/postgres/product/96/db_2/bin/:$PATH"
RECOVERY_NODE_REPLICATION_SLOT=`cat /home/postgres/pgpool_local.conf | grep ${detached_node_host_name} | awk -F '=' '{print $2}'`
export PATH PGDATA
# make sure the old master is down
ssh -T $detached_node_host_name /home/postgres/stop_instance.sh
cat > $tmp <<EOF
standby_mode = 'on'
primary_conninfo = 'host=$master_node_host_name user=postgres'
primary_slot_name = '${RECOVERY_NODE_REPLICATION_SLOT}'
recovery_target_timeline = 'latest'
trigger_file='/u02/pgdata/PG1/failover_trigger'
EOF
scp $tmp $detached_node_host_name:$PGDATA/recovery.conf
ssh ${detached_node_host_name} /home/postgres/start_instance.sh
psql -c "select 'done'" postgres
pcp_attach_node -w -n ${detached_node_id}

And now, when you shut down the master, everything is automatic. Hope this helps.

 

Cet article Vertically scale your PostgreSQL infrastructure with pgpool – 3 – Adding another standby est apparu en premier sur Blog dbi services.

RMAN-06820: WARNING: failed to archive current log at primary database

Michael Dinh - Tue, 2017-03-28 08:18

The best solution is a simple one.

Standard Edition Oracle with a manual standby generates error RMAN-06820.

From RMAN-06820 ORA-17629 During Backup at Standby Site (Doc ID 1616074.1):
As of 11.2.0.4, we now include the current standby redo log as part of an RMAN archivelog backup at the standby site.
This is achieved by forcing a log switch at the primary site.

Some options to remedy the issue, which seem too complicated:
RMAN-06820 ORA-17629 ORA-12154 During Backup of a Standby Database (Doc ID 2025142.1)
OERR: RMAN-6613 “Connect identifier for DB_UNIQUE_NAME %s not configured” Reference Note (Doc ID 2050646.1)
RMAN-06613: Connect identifier for DB_UNIQUE_NAME not configured (Doc ID 1598653.1)

DEMO:
oracle@arrow2:HAWKB:/home/oracle
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Dec 1 14:09:07 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3187737370, not open)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name HAWKB are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0.4/db_1/dbs/snapcf_HAWKB.f'; # default

RMAN> CONFIGURE DB_UNIQUE_NAME 'HAWKB' CONNECT IDENTIFIER 'HAWKB';

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of configure command at 12/01/2016 14:09:26
RMAN-05021: this configuration cannot be changed for a BACKUP or STANDBY control file

RMAN> backup archivelog all not backed up 2 times tag='ALL';

Starting backup at 2016-DEC-01 14:09:53
RMAN-06820: WARNING: failed to archive current log at primary database
ORACLE error from target database:
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-00942: table or view does not exist

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=342 RECID=274 STAMP=929450567
input archived log thread=1 sequence=343 RECID=275 STAMP=929451468
input archived log thread=1 sequence=344 RECID=276 STAMP=929452365
input archived log thread=1 sequence=345 RECID=277 STAMP=929453265
input archived log thread=1 sequence=346 RECID=278 STAMP=929454166
input archived log thread=1 sequence=347 RECID=279 STAMP=929455067
channel ORA_DISK_1: starting piece 1 at 2016-DEC-01 14:09:54
channel ORA_DISK_1: finished piece 1 at 2016-DEC-01 14:09:55
piece handle=/fra/HAWKB/backupset/2016_12_01/o1_mf_annnn_ALL_d417xlg1_.bkp tag=ALL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-DEC-01 14:09:55

RMAN> backup archivelog until time 'sysdate' not backed up 2 times tag='UNTIL';

Starting backup at 2016-DEC-01 14:10:22
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=342 RECID=274 STAMP=929450567
input archived log thread=1 sequence=343 RECID=275 STAMP=929451468
input archived log thread=1 sequence=344 RECID=276 STAMP=929452365
input archived log thread=1 sequence=345 RECID=277 STAMP=929453265
input archived log thread=1 sequence=346 RECID=278 STAMP=929454166
input archived log thread=1 sequence=347 RECID=279 STAMP=929455067
channel ORA_DISK_1: starting piece 1 at 2016-DEC-01 14:10:22
channel ORA_DISK_1: finished piece 1 at 2016-DEC-01 14:10:23
piece handle=/fra/HAWKB/backupset/2016_12_01/o1_mf_annnn_UNTIL_d417ygxb_.bkp tag=UNTIL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-DEC-01 14:10:23

RMAN> backup archivelog all not backed up 2 times tag='ALL';

Starting backup at 2016-DEC-01 14:10:26
RMAN-06820: WARNING: failed to archive current log at primary database
ORACLE error from target database:
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-00942: table or view does not exist

using channel ORA_DISK_1
skipping archived logs of thread 1 from sequence 342 to 347; already backed up
Finished backup at 2016-DEC-01 14:10:26

RMAN> backup archivelog until time 'sysdate' not backed up 2 times tag='UNTIL';

Starting backup at 2016-DEC-01 14:10:32
using channel ORA_DISK_1
skipping archived logs of thread 1 from sequence 342 to 347; already backed up
Finished backup at 2016-DEC-01 14:10:32

RMAN> exit


Recovery Manager complete.
oracle@arrow2:HAWKB:/home/oracle
$

Database Backup and Recovery User’s Guide (11.2)
Backing Up Archived Redo Logs with RMAN
https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV81399

An online redo log switch will occur when the backup command contains one of the following clauses:
PLUS ARCHIVELOG
ARCHIVELOG ALL
ARCHIVELOG FROM ..

The simple solutions to avoid switching the redo log at the primary:

Use backup archivelog until time ‘sysdate’ instead of backup archivelog all.
Use backup until sequence instead of from sequence.
Use backup database followed by backup archivelog until time ‘sysdate’.
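For example, the third option as a scripted backup could look like this sketch:

rman target / <<EOF
backup database tag='DB';
backup archivelog until time 'sysdate' not backed up 2 times tag='ARCH';
EOF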


Real World OBIEE: Demystification of Variables Pt. 3

Rittman Mead Consulting - Tue, 2017-03-28 08:00

In part two of this blog series, I went over using Repository, System and Presentation Variables to make reports dynamic for any series of time. In part three, I am going to talk about making reports dynamic for periods of time using built in functions within Answers itself.

Real World

While it's a lot more efficient to create Repository Variables to use in filters and prompts for time dimensions, sometimes it is simply not possible. Perhaps you are a front-end developer for OBIEE and have no access to the RPD or the database. Perhaps you have no communication with the person in your organization who handles all of the RPD development and therefore cannot submit any change requests.

Don't worry. We've got you covered.

There are several functions and tricks you can use within Answers itself to make reports dynamic and eliminate having to hardcode dates.

The Scenario

I am going to use the same scenario I used in part two of this series for example one. To recap, here are the requirements:

I have been asked to create a report that is going to reside on a products dashboard. It needs to have the same product grouping as the report I used in part one of this series, needs to contain 'Gross Rev $', 'Net Rev $' and '# of Orders' and have a prompt that can select between the first and current day of the month and every day in-between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.

In part two, I used a custom SQL statement with Repository Variables I created to populate all the date values from the first of every month to the current date for Variable Prompts. There is a gap in the data loads for '# of Orders' in which data does not update until the 2nd or 3rd of each new month. The person who requested the report wanted a summary of the previous month's '# of Orders' to be shown until the data is updated for the current month. I used a Repository Variable that returned the value of the previous month with the current year, and a CASE statement with Filter Expressions to switch between the Filter Expression using the Repository Variable (Prev_Month) if the day of the month was <=2 or '# of Orders' was null, and the Filter Expression containing the Start Date and End Date Presentation Variable placeholders defined in my Variable Prompts.

Example One

In this example, I have to figure out a way to make the report dynamic with only the functions available within Answers. There are two parts to this example. First, I need a function that returns the previous month's value for the Calendar Year Month column to use with the '# of Orders' column; this will replace the Repository Variable Prev_Month I used in part two of this series. Second, I need to write a new SQL statement for the Start Date and End Date prompts I created in part two and also define a new SQL statement for the default values in those prompts.

Part 1

I am going to start by creating a new statement to return the previous month's value for the Calendar Year Month column. I can use the TIMESTAMPADD function in conjunction with the extraction functions YEAR and MONTH to return the desired results. Let's take a look at the entire statement and then I will break it down.

CAST(YEAR(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))*100+MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE)) AS VARCHAR(6))

1. TIMESTAMPADD - This is the function that adds a given interval of time to a date.

2. SQL_TSI_MONTH - The first argument in the function. It defines the interval of time the function will work with (months in this case).

3. -1 - The number of intervals applied to the third argument (one month back).

4. CURRENT_DATE - The third argument in the function. This is the date the interval is applied to.

5. YEAR and MONTH - The extraction functions that return only the year and the month, respectively.

Also notice that I have used VARCHAR(6) in the CAST. With VARCHAR, I can specify the exact number of characters I want returned.

Now I need to copy my column formula and paste it into the column formula that I created for '# of Orders' in part two of this series.

I am going to replace the Prev_Month Repository Variable with my statement, which will look like this.

CASE WHEN DAY(CURRENT_DATE)<=2 OR "Sales - Fact Sales"."Measures"."# of Orders" IS NULL THEN FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Sales - Fact Sales"."Periods"."Calendar Year Month" = CAST(YEAR(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))*100+MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE)) AS VARCHAR(6)))) ELSE FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'})) END


If I run the report, my results return as expected.


Part 2

Now I need to write a new SQL statement for my Start Date and End Date prompts. In order to do this, I am going to use two functions: TIMESTAMPADD and CURRENT_DATE. First, let's take a look at the TIMESTAMPADD function.

I am going to use the TIMESTAMPADD function to filter the Day Date column for the first day of the month. To demonstrate, I am going to create a new analysis and use the TIMESTAMPADD function in a column formula. My column formula looks like the following:

TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)

This formula can be broken down into four parts:

1. TIMESTAMPADD - This is the function that adds a given interval of time to a date.

2. SQL_TSI_DAY - The first argument in the function. It defines what interval of time the function will work with (in this case days).

3. -DAYOFMONTH(CURRENT_DATE)+1 - The number of intervals applied to the third argument. In this case I take the negative value of the day of the month, add 1, and apply the result to the current date, which always returns the first day of the month (e.g., on March 28, -28 + 1 = -27, and March 28 minus 27 days is March 1).

4. CURRENT_DATE - The third argument in the function. This is the date the interval is applied to.

This is only scratching the surface of what you can do with the TIMESTAMP function. If you would like more information, check out the blog on TIMESTAMPS written by Brian Hall.

I am going to add an additional column to the Criteria and use the CURRENT_DATE function in a column formula.


Now I am going to click on Results to show the results of the TIMESTAMPADD function and the CURRENT_DATE function.


From the results you can see that I have both the first day of the month and the current date. Now I need to convert this into a filter for the Day Date column so that I can get the logical SQL query for my Start Date and End Date prompts.


In the New Filter window, I need to change the operator to is between and click on Add More Options to add a SQL Expression.


In the SQL Expression box, I need to put the TIMESTAMPADD function for the first day of the month from the previous example. In addition, I need to add another SQL Expression for the CURRENT_DATE function.


When I return to my Criteria, I can see the filter I created in the Filter window.


I can click on Results to run the report. The results for the Day Date column return as expected.


Now I can click on the Advanced tab and copy the logical SQL statement to use for my Start Date and End Date prompts.


Now I am going to paste the following into my Start Date Variable Prompt:

Choice List Values > SQL Results

SELECT "Sales - Fact Sales"."Periods"."Day Date" 
FROM "Sales - Fact Sales" WHERE ("Periods"."Day Date" BETWEEN TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE), CURRENT_DATE)  AND CURRENT_DATE) 
ORDER BY "Periods"."Day Date"

Default Selection > SQL Results

SELECT
TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)
FROM "Sales - Fact Sales"
FETCH FIRST 65001 ROWS ONLY


For the default selection, I am using a SQL statement that selects the first day of the month, using the same TIMESTAMPADD function as in the query above, from my subject area "Sales - Fact Sales".

Now I need to change the SQL query for both the Choice List Values and Default Selection for my End Date Variable Prompt.

I am going to use the same SQL query for the Choice List Values in my End Date prompt as I did in my Start Date prompt. I am going to change the default selection to the following:

SELECT
CURRENT_DATE
FROM "Sales - Fact Sales"
FETCH FIRST 65001 ROWS ONLY


If I go to the Display window, I can view the results of my changes.


Notice that the results are exactly the same as the results in part two of this series.

I can save the dashboard prompt and go to my dashboard and test the prompt.


In Conclusion

In part one of this series, we looked at using Bins and CASE statements to create custom groupings for values and at switching between those groups and values using Presentation Variables.

In part two of this series, we looked at creating Repository Variables to make reports dynamic using those Repository Variables in Variable Prompts and passing them into column formulas using Presentation Variables.

In the third and final part of this series, we looked at making reports dynamic by using built-in functions within Answers such as TIMESTAMPADD and CURRENT_DATE.

My hope is that you can take these examples and apply them in your own OBIEE development. If you would like to know more about front end or RPD development, please check out the variety of training courses we offer at Rittman Mead. Until next time.

Categories: BI & Warehousing

Oracle’s HCM World and Modern Finance Experience 2017 Deliver Foundation for Modern Business Success

Oracle Press Releases - Tue, 2017-03-28 07:00
Press Release
Oracle’s HCM World and Modern Finance Experience 2017 Deliver Foundation for Modern Business Success Complementary events for HR and finance professionals showcase the building blocks, tools and strategies companies need to drive digital business transformation

Redwood Shores, Calif.—Mar 28, 2017

Helping businesses embrace modern best practices, Oracle today announced HCM World 2017 and Modern Finance Experience 2017, taking place in Boston, April 11-13. Organizations across every industry are examining how they can use technology to innovate and gain a competitive advantage, all while enhancing employee engagement and performance. HCM World and Modern Finance Experience will bring together leading visionaries and business professionals to share real-world examples and strategies for modernizing business with technology to fuel digital growth strategies, engage and retain top talent, and maximize organizational efficiency and effectiveness.

Taking place concurrently, HCM World is being held at the Boston Marriott Copley Place and Modern Finance Experience at The Westin Copley Place.

“The need for digital transformation is challenging today’s companies to look ahead and redefine how they operate. Forward-looking companies are exploring new and emerging technologies, such as machine learning, predictive analytics, big data, and the Internet of Things (IoT) to optimize business processes. They also are looking for new ways that the cloud and data analytics can help them stay ahead of their competitors,” said Steve Miranda, executive vice president of applications development, Oracle. “The HCM World and Modern Finance Experience events are designed to deliver the practical roadmap to help companies future proof their business systems, while showcasing examples of those who have successfully navigated the transition.”

HCM World

HCM World is geared to provide HR professionals with the multi-faceted strategies they need to build and grow a modern, digital-based HR practice. More than 60 sessions throughout the three-day event are tailored to help elevate HR as a strategic partner in advancing business goals by:

  • Engaging employees with modern, customized, and easy-to-use systems and processes that help them be productive and happy at work.
  • Informing through predictive analytics to allow organizations to take action pro-actively and give HR the foresight to plan ahead.
  • Adapting and expanding an organization’s capabilities, both as a business and as a digital leader, by enabling the flexibility to personalize, brand, and extend the experience at every layer of the cloud in a secure and safe way.

To register for Oracle HCM World 2017, visit here and find more information on the event on the Oracle Modern HR in the Cloud blog, Facebook or Twitter.

Modern Finance Experience

The premier event for CFOs and finance professionals, Modern Finance Experience is designed to provide attendees with guidance on how to build a modern operating model for finance that supports digital business processes and new ways of working. Over 25 visionary keynotes and thought-leadership sessions, including an exclusive CFO Summit, during the three-day event will inspire, educate, and empower the finance function to assume a larger role in driving business success. Attendees will be able to:

  • Learn from hands-on sessions with practitioners and industry experts who will give finance leaders a roadmap to finance modernization and offer advice on how new technologies can reshape finance to be more predictive and extend their value across other business functions (such as HR).
  • Share how emerging technologies, such as big data analytics, artificial intelligence (AI), and machine learning, powered by the cloud, can optimize both the transactions and analytical capabilities required by finance to become more agile.
  • Craft high-impact strategies for improving the non-traditional finance skill sets now required of finance professionals to provide proactive guidance to the business.

Register for Modern Finance Experience 2017 here. For additional information on Oracle Cloud solutions for Finance, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook, and Twitter and Oracle Enterprise Performance Management (EPM) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

connect / as sysoper generates ORA-1017

Laurent Schneider - Tue, 2017-03-28 06:42

Today I had the issue that I could not log in as sysoper on one database, despite being in the dba and oper groups.

The problem is that the osoper group was (probably) not selected during installation.

e.g. in 12c on AIX (it may be called differently on your system):


$ sqlplus -L -s / as sysoper
ERROR:
ORA-01017: invalid username/password; logon denied
$ awk '/H.17.*, 3/,/End/{print}' $ORACLE_HOME/rdbms/lib/config.s
        .csect  H.17.NO_SYMBOL{RO}, 3
        .string ""
# End   csect   H.17.NO_SYMBOL{RO}

The string is empty. Let’s change this to oper or dba. Stopping any processes using that home is strongly recommended before relink.


$ vi $ORACLE_HOME/rdbms/lib/config.s
        .string "dba"
$ relink
$ sqlplus -s -L / as sysoper
sho user
USER is "PUBLIC"
quit
$

Cloning EBS 12.1.3 Environments Integrated with Oracle Access Manager

Steven Chan - Tue, 2017-03-28 02:05

We have documented procedures for cloning EBS 12.1.3 environments. We also have documented procedures for integrating EBS 12.1.3 environments with Oracle Access Manager (OAM) and Oracle Internet Directory (OID). The next logical question would be: do we have documented procedures for cloning EBS 12.1.3 environments that have been integrated with OAM and OID?

Yes, we have published this here:


This Note provides a certified process and detailed steps to:

  • Clone EBS using Rapid Clone
  • Deregister the cloned EBS instance from OAM and remove AccessGate
  • Remove OID from the cloned EBS instance
  • Integrate the cloned EBS instance with OID
  • Integrate the cloned EBS instance with OAM
  • Reconfigure SSL


Categories: APPS Blogs
