Feed aggregator

Partner Webcast – Oracle Application Development Cloud Platform: Proven Way for Supporting SDLC

As you build cloud first solutions, Oracle Cloud provides a platform to develop and deploy nearly any type of application, including enterprise apps, lightweight container apps, web apps, mobile...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Quarterly EBS Upgrade Recommendations: May 2017 Edition

Steven Chan - Tue, 2017-05-09 02:00

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, the latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for May 2017

Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to September 30, 2023.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.6 (Sept. 2016)
  • EBS 12.1: 12.1.3 RPC5 (Aug. 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb. 2016)
  • EBS 12.1: StartCD 13 (Aug. 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 9 (Apr. 2017); FND (Aug. 2016); EBS 12.2.5 OAF Update 12 (May 2017); EBS 12.2.4 OAF Update 15 (Mar. 2017); ETCC (May 2017); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv28 (Nov. 2016)
  • EBS 12.1: 12.1.3 RPC5; OAF Bundle 5 (Jun. 2016); JTT Update 4 (Oct. 2016); Daylight Savings Time DSTv28 (Nov. 2016)

Apply the latest security updates

  • EBS 12.2 and 12.1: Apr. 2017 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager; migrate from SSL or TLS 1.0 to TLS 1.2; sign JAR files
  • EBS 12.0: Oct. 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components (EBS 12.2 and 12.1)

  • Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements
  • Switch to Java Web Start
  • Upgrade to IE 11
  • Upgrade to Firefox ESR 52
  • Upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016)
  • Upgrade Windows XP, Vista, and Windows 10 v1507 to later versions (e.g. Windows 10 v1607)

Upgrade to the latest database

  • All releases: Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9
  • EBS 12.1: Migrate from Oracle SSO to OAM 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9

If you're using Oracle Discoverer

  • EBS 12.2 and 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA). Discoverer 11.1.1.7 reaches End of Life in June 2017.

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9 or upgrade to Portal 11.1.1.6 (End of Life Jun. 2017)

Categories: APPS Blogs

Grid Infrastructure Installation on SLES 12 SP1

Yann Neuhaus - Tue, 2017-05-09 01:43

This week I needed to install Oracle Grid Infrastructure 12c release 1 in a SLES 12 SP1 environment. Everything worked fine until I ran the root.sh at the end of the installation. Here’s a quick description of the problem and the workaround.

The root.sh script ran into errors and the installation was completely unsuccessful:
oracle:/u00/app/grid/12.1.0.2 # /u00/app/grid/12.1.0.2/root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/12.1.0.2
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 39: [: -eq: unary operator expected
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying dbhome to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying oraenv to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying coraenv to /usr/local/bin ...
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/12.1.0.2/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/03/31 09:56:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
PRCR-1006 : Failed to add resource ora.ons for ons
PRCR-1115 : Failed to find entities of type resource type that match filters (TYPE_NAME ends .type) and contain attributes
CRS-0184 : Cannot communicate with the CRS daemon.
2017/03/31 09:57:04 CLSRSC-180: An error occurred while executing the command 'srvctl add ons' (error code 0)
 
2017/03/31 09:57:55 CLSRSC-115: Start of resource 'ora.evmd' failed
 
2017/03/31 09:57:55 CLSRSC-202: Failed to start EVM daemon
 
The command '/u00/app/grid/12.1.0.2/perl/bin/perl -I/u00/app/grid/12.1.0.2/perl/lib -I/u00/app/grid/12.1.0.2/crs/install /u00/app/grid/12.1.0.2/crs/install/roothas.pl ' execution failed

When we run crsctl stat res -t:

grid@oracle_grid:/u00/app/grid/product/12.1.0.2/grid/bin> ./crsctl stat res -t 
 -------------------------------------------------------------------------------- 
 Name Target State Server State details 
 -------------------------------------------------------------------------------- 
 Cluster Resources 
 -------------------------------------------------------------------------------- 
 ora.cssd 
 1 OFFLINE OFFLINE STABLE 
 ora.diskmon 
 1 OFFLINE OFFLINE STABLE 
 ora.evmd 
 1 OFFLINE OFFLINE STABLE 
 --------------------------------------------------------------------------------

After trying multiple times with other Oracle Grid Infrastructure versions from 11.2.0.4 to 12.2.0.1, I had to open a service request with Oracle, and they provided the following workaround:

Once root.sh has failed, do not close the GUI installer window, because we will use it to complete the installation once root.sh succeeds. First, we have to deconfigure the failed installation:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./roothas.pl -verbose -deconfig -force
oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./rootcrs.sh -deconfig -force

Then we modify the /etc/ld.so.conf file by adding /lib64/noelision as the first entry. The file should look like:

oracle@oracle_grid:/u00/app/oracle/product/12.1.0.2/dbhome_1/dbs/ [DORSTREA] cat /etc/ld.so.conf
/lib64/noelision
/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

Finally, we create a symbolic link from $GI_HOME/lib/libpthread.so.0 to /lib64/noelision/libpthread-2.19.so:

lrwxrwxrwx 1 root root            35 Apr 11 15:56 libpthread.so.0 -> /lib64/noelision/libpthread-2.19.so
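
For reference, here is roughly what that looks like as commands (a sketch; $GI_HOME stands for your Grid Infrastructure home, and the glibc version in the library name may differ on your system):

ln -sf /lib64/noelision/libpthread-2.19.so $GI_HOME/lib/libpthread.so.0
ldconfig    # refresh the dynamic linker cache after the /etc/ld.so.conf change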

Now we simply re-run root.sh, and this time it completes fine:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid # . root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/product/12.1.0.2/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0.2/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/04/11 15:56:37 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
oracle_grid    /u00/app/grid/product/12.1.0.2/grid/cdata/oracle_grid/backup_20170411_155653.olr     0    
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracle_grid'
CRS-2673: Attempting to stop 'ora.evmd' on 'oracle_grid'
CRS-2677: Stop of 'ora.evmd' on 'oracle_grid' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracle_grid' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/11 15:57:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After the root.sh is successfully completed, we continue with the Oracle Installer, and everything is correctly configured for the Oracle Grid Infrastructure.

I will keep you informed as the bug evolves, and I will test the Oracle Grid Infrastructure installation under SLES 12 SP2 as soon as possible…


The article Grid Infrastructure Installation on SLES 12 SP1 appeared first on Blog dbi services.

DBFS and XAG for Goldengate P3

Michael Dinh - Mon, 2017-05-08 19:47

Starting the Pump Extract at the source failed as shown below.

INFO OGG-00993 Oracle GoldenGate Capture for Oracle, p_test.prm: EXTRACT P_TEST started.
ERROR OGG-01224 Oracle GoldenGate Capture for Oracle, p_test.prm: TCP/IP error 111 (Connection refused), endpoint: ggvip_hawk.local:7809.
ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, p_test.prm: PROCESS ABENDING.

The hypothesis is that the ports are not open from source to target, so let's verify the configuration.

VIP name and address from DNS.

Name:    ggvip_hawk.local
Address: 10.10.10.201
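
A quick way to reproduce the refused connection outside of GoldenGate is to probe the manager port from the source host (a sketch, assuming nc is available there):

nc -zv ggvip_hawk.local 7809

If nothing is listening on the port, or a firewall rejects the connection, this fails just like OGG-01224 does.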

Recall that this is how the GoldenGate instance was created:

# $GRID_HOME/bin/agctl add goldengate gg_xx \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/db_1 \
--nodes hawk1,hawk2 \
--network 1 --ip 10.10.10.201 \
--user ggsuser --group dba \
--filesystems dbfs_mount \
--gg_home /u03/gg/12.2.0 

From GoldenGate instance gg_xx, the Application VIP (gg_xx-vip) is created using address 10.10.10.201.

Check for xag resource.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -t|grep -A2 xag
xag.gg_xx-vip.vip
      1        ONLINE  ONLINE       hawk1                STABLE
xag.gg_xx.goldengate
      1        ONLINE  ONLINE       hawk1                STABLE

Verify Application VIP address assigned.

ggsuser@hawk1 /u03/app/gg/12.2.0 
$ $GRID_HOME/bin/crsctl stat res xag.gg_xx-vip.vip -p|grep USR_ORA_VIP
GEN_USR_ORA_VIP=
USR_ORA_VIP=10.10.10.201
ggsuser@hawk1 /u03/app/gg/12.2.0 
$ nslookup 10.10.10.201
Server:		10.80.107.101
Address:	10.80.107.101#53

name = ggvip_hawk.local
ggsuser@hawk1 /u03/app/gg/12.2.0 $ 
$GRID_HOME/bin/crsctl stat res xag.gg_xx-vip.vip -p
NAME=xag.gg_xx-vip.vip
TYPE=app.appvipx.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:dba:r-x,user:ggsuser:r-x
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
APPSVIP_FAILBACK=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Application VIP
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=7d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=10.10.10.201
VERSION=12.1.0.1.0

Passing Business Object Values to Custom UI Components in ABCS

Shay Shmeltzer - Mon, 2017-05-08 18:54

This quick one is based on a customer question about Oracle Application Builder Cloud Service. The scenario is that we have a business object that has a field that contains the URL to an image. We want to be able to show that image on a page in Oracle Application Builder Cloud Service.


To do that, I added a custom UI component object to the details (or edit) page of a record, then switched the HTML of that object to be: <img id="logoimg"/>


I then added a button to the page and added a bit of custom JavaScript code in its action, as follows:

var img = document.getElementById('logoimg');
img.src = $Company.getValue('Logo');
resolve();

This code simply locates the custom object on the page using the object id and then sets the src property of the img html tag to match the value of the field in the business object.


Categories: Development

Having a mid-life crisis on top-of-the-range hardware

The Anti-Kyte - Mon, 2017-05-08 17:31

I’ve recently begun to realise that I’m not going to live forever.
“Surely not”, you’re thinking, “look at that young scamp in the profile picture, he’s not old enough to be having a mid-life crisis”.

Well, five minutes ago, that was a recent picture. Suddenly, it’s more than 10 years old. As Terry Pratchett once observed, “Inside every old person is a young person wondering what happened”.

Fortunately, with age comes wisdom…or a sufficiently good credit rating with which to be properly self-indulgent.
Now, from what I’ve observed, men who get to my stage in life seem to seek some rather fast machinery as a cure for the onset of morbid reflections on the nature of their own mortality.
In this case however, it's not the lure of a fast car that I've succumbed to. First and foremost, I am a geek. And right now, I'm a geek with a budget.

Time then to draw up the wish list for my new notebook. It will need to…

  • be bigger than my 10-inch netbook but small enough to still be reasonably portable
  • have a fast, cutting-edge processor
  • have an SSD with sufficient storage for all my needs
  • have large quantities of RAM
  • come with a Linux Operating System pre-installed

For any non-technical readers who’ve wandered down this far, the rough translation is that I want something with more silicon in it than one of those hour-glasses for measuring the time left before Brexit that have been on the telly recently.
It’s going to have to be so fast that it will, at the very least, offer Scotty the prospect of changing the Laws of Physics.
Oh, and I should still be able to use it on the train.

The requirement for a pre-installed Linux OS may be a factor which limits my choices.
Usually, I’m happy enough to purchase a machine with Windows pre-installed and then replace it with a Linux Distro of my choice.
Yes, this may involve some messing about with drivers and – in some cases – a kernel upgrade, but the process is generally fairly painless.
This time though, I’m going to be demanding. However much of a design classic a Mac may be, OSX just isn’t going to cut it. Linux is my OS of choice.
Furthermore, if I’m going to be paying top dollar for top-of-the range then I want everything to work out of the box.
Why? (pause to flick non-existent hair) Because I’m worth it.

Oh, as a beneficial side-effect it does also mean that I’ll save myself a few quid because I won’t have to fork out for a Windows License.

In the end, a combination of my exacting requirements and the advice and guidance of my son, who knows far more about this sort of thing, led me to my final choice – the Dell XPS13.

What follows is in the style of an Apple fanboy/fangirl handling their latest iThing…

Upon delivery, the package was carried to the kitchen table where it lay with all its promise of untold joy…

Yea, and there followed careful unwrapping…

It’s a….box

…Russian doll-like…

…before finally…

If Geekgasm isn’t a thing, it jolly well should be.

Now to set up the OS…

…before finally re-starting.

The first re-boot of a machine usually takes a little while as it sorts itself out so I’ll go and make a cof… oh, it’s back.
Yep, Ubuntu plus SSD (512GB capacity) plus a quad-core i7-7560 CPU equals “are you sure you actually pressed the button?”

Ubuntu itself wasn’t necessarily my Linux distro of choice. That doesn’t matter too much however.
First of all, I’m quite happy to get familiar with Unity if it means I can still access all of that Linux loveliness.
Secondly, with the insane amount of system resources available (16GB RAM to go with that CPU), I can simply spin up virtual environments with different Linux distros, all sufficiently fast to act as they would if run natively.
For example…

Right, now I’ve got that out of my system, I can wipe the drool off the keyboard and start to do something constructive…like search for cat videos.


Filed under: Uncategorized Tagged: xps13

Issue Frequent COMMIT Statements

Tom Kyte - Mon, 2017-05-08 16:06
Tom: I've recently read an article about performance on Dbasupport.com and I couldn't believe what I was reading about commits. I usually read your answers on this site and I've read your book as well, and you always suggest not to commit frequently....
Categories: DBA Blogs

Performance tuning of delete from many tables

Tom Kyte - Mon, 2017-05-08 16:06
Hello, these are my tables: -- This is the main container CREATE TABLE CONTAINER( Id VARCHAR2(18 CHAR) NOT NULL, ContainerName VARCHAR2(100 CHAR) NOT NULL, ContainerDescription VARCHAR2(500 CHAR) NULL ) NOCACHE PARAL...
Categories: DBA Blogs

Oracle Security 12cR2 and Oracle Security Training Dates

Pete Finnigan - Mon, 2017-05-08 16:06
I am going to be teaching my two-day class "How to perform a security audit of an Oracle database" in Athens, Greece on the 30th and 31st of May 2017. This is advertised on the Oracle University website and you can....[Read More]

Posted by Pete On 08/05/17 At 03:51 PM

Categories: Security Blogs

What is in a transportable tablespace dumpfile?

Yann Neuhaus - Mon, 2017-05-08 15:20

On 31st of May in Düsseldorf, at DOAG Datenbank, I'll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically (as with TTS) but faster when transported physically (as with PDBs). I have a lot of demos to show transportable tablespaces with RMAN and the different cloning features available in 12cR2. If I have time, I'll show what is inside the dumpfile when using Data Pump to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.


expdp system/oracle@//localhost/PDB1 directory=VAR_TMP dumpfile=expdat.dmp transport_tablespaces=USERS exclude=table_statistics,index_statistics
 
...
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oradata/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u01/oradata/CDB1/PDB1/users01.dbf

The metadata is exported into expdat.dmp and the data resides in the original datafile. The dumpfile is a binary file, but there is a way to extract the metadata as DDL using impdp.

impdp sqlfile

Here I run impdp with sqlfile to generate all the DDL into this file. Nothing is imported and the datafiles are not read, which is why I've just put something bogus for transport_datafiles:


impdp system/oracle@//localhost/PDB1 directory=VAR_TMP transport_datafiles=blahblahblah sqlfile=sqlfile.sql ;

No error. Only the dumpfile has been read, and here is an extract of the DDL in sqlfile.sql concerning the PK_DEPT and PK_EMP indexes:


-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_DEPT" ON "SCOTT"."DEPT" ("DEPTNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 138 OBJNO_REUSE 73197
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_EMP" ON "SCOTT"."EMP" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 154 OBJNO_REUSE 73205
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/INDEX_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/TABLE_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/STATISTICS/MARKER
-- new object type path: TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

This looks like the DDL used to re-create the same table except that we can see two storage attributes that are not documented:

  • SEG_FILE and SEG_BLOCK
  • OBJNO_REUSE
SEG_FILE and SEG_BLOCK

When you create an empty table, you just provide the tablespace name and Oracle will allocate the first extent, with the segment header. You don't choose the data placement within the tablespace. But here we are in a different case: the extents already exist in the datafiles that we transport, and the DDL must simply map to them. This is why, in this case, the segment header file number and block number are specified. The remaining extent allocation information is stored within the datafiles (Locally Managed Tablespace); only the segment header must be known by the dictionary.

As an example, when I look at the database where the export comes from, I can see that the attributes for PK_EMP (SEG_FILE 26 SEG_BLOCK 154) are the relative file number and header block number of the PK_EMP segment:


10:49:10 SQL> select owner,segment_name,header_file,header_block,blocks,extents,tablespace_name,relative_fno from dba_segments where owner='SCOTT';
 
OWNER SEGMENT_NAME HEADER_FILE HEADER_BLOCK BLOCKS EXTENTS TABLESPACE_NAME RELATIVE_FNO
----- ------------ ----------- ------------ ------ ------- --------------- ------------
SCOTT DEPT 31 130 8 1 USERS 26
SCOTT EMP 31 146 8 1 USERS 26
SCOTT SALGRADE 31 162 8 1 USERS 26
SCOTT PK_DEPT 31 138 8 1 USERS 26
SCOTT PK_EMP 31 154 8 1 USERS 26

This file identifier is a relative file number within the tablespace, which means that there is no need to change it when a tablespace is transported.

You will see exactly the same information in the database where you import the tablespace (except for HEADER_FILE which is the absolute file number).

OBJNO_REUSE

Each segment has a DATA_OBJECT_ID, which is referenced in each block and in the ROWIDs. This must not change when we transport a tablespace because the goal is that nothing has to be modified in the datafiles. For this reason, the data object id is exported with the metadata, as we can see for this PK_EMP example (OBJNO_REUSE 73205), and set to the same value in the target dictionary. Here are the data object IDs for the objects exported here:


10:49:20 SQL> select owner,object_name,object_type,object_id,data_object_id from dba_objects where owner='SCOTT';
 
OWNER OBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
----- ----------- ----------- --------- --------------
SCOTT DEPT TABLE 73196 73196
SCOTT PK_DEPT INDEX 73197 73197
SCOTT EMP TABLE 73198 73206
SCOTT PK_EMP INDEX 73199 73205
SCOTT BONUS TABLE 73200 73200
SCOTT SALGRADE TABLE 73201 73201

The OBJECT_ID will be different in the target, assigned in the same way as when we create an object, but this one is not referenced anywhere within the datafiles.
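
To see that these identifiers really live in the rows themselves, here is a quick check with DBMS_ROWID, run on the source database (a sketch; given the listings above it should return 73206 and 26 for SCOTT.EMP):

select distinct dbms_rowid.rowid_object(rowid) as data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) as relative_fno
from scott.emp;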

So what?

Usually, the metadata precedes the data. With transportable tablespaces, it is the opposite: the data is there and the metadata is re-created to map to it. This metadata is what is stored in the dumpfile exported to transport tablespaces.
From what you have seen, you can now understand that the RELATIVE_FNO and the DATA_OBJECT_ID are not unique within a database, but only within a tablespace. You can also understand that transportable tablespace import duration does not depend on the size of the data but is proportional to the number of objects (metadata). This is where Pluggable Databases are more efficient: the metadata is transported physically, and the import duration does not depend on the number of objects, especially when it does not involve an upgrade to a new version and object recompilation.


The article What is in a transportable tablespace dumpfile? appeared first on Blog dbi services.

BIP and Mapviewer Mash Up V

Tim Dexter - Mon, 2017-05-08 11:38

The last part on maps, I promise ... it's been a fun ride for me at least :0) If you need to catch up on previous episodes:

In this post we're looking at map quality. On the left a JPG map, to the right an SVG output.

Even if we ignore the fact that they have different levels of features or layers, imagine getting the maps into a PDF and then printing them. It's pretty clear that the SVG version of the map is going to render better on paper compared to the JPG.

Getting the SVG output from MapViewer is pretty straightforward; getting BIP to render it requires a little bit of effort. I have mentioned the XML request that we construct and then do a variable substitution on in our servlet. All we need to do is add another option for the requested output (see the sketch after the list below). MapViewer supports several flavors of SVG:

  • If you specify SVG_STREAM, the stream of the image in SVG Basic (SVGB) format is returned directly.
  • If you specify SVG_URL, a URL to an SVG Basic image stored on the MapViewer host system is returned.
  • If you specify SVGZ_STREAM, the stream of the image in SVG Compressed (SVGZ) format is returned directly.
  • If you specify SVGZ_URL, a URL to an SVG Compressed image stored on the MapViewer host system is returned. SVG Compressed format can effectively reduce the size of the SVG map by 40 to 70 percent compared with SVG Basic format, thus providing better performance.
  • If you specify SVGTINY_STREAM, the stream of the image in SVG Tiny (SVGT) format is returned directly.
  • If you specify SVGTINY_URL, a URL to an SVG Tiny image stored on the MapViewer host system is returned. (The SVG Tiny format is designed for devices with limited display capabilities, such as cell phones.)
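
Each of these is requested simply by changing the format attribute of the XML map request that the servlet sends to MapViewer. A minimal sketch, with placeholder datasource and dimensions:

<map_request datasource="mvdemo" width="600" height="400" format="SVGTINY_STREAM" />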

Don't panic, I've looked at them all for you and we need to use SVGTINY_STREAM. This sends back a complete XML representation of the map in SVG format. We have a couple of issues:

  1. We need to strip the XML declaration from the top of the file: <?xml version="1.0" encoding="utf-8"?> If we don't, BIP will choke on the SVG. Being lazy, I just used a string function to strip the line out in my servlet (see the sketch after this list).

  2. We need to stream the SVG back as text, so we need to set the CONTENT_TYPE for the servlet to 'text/javascript'.
  3. We need to handle the SVG when it comes back to the template. We do not use the



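For reference, the string manipulation mentioned in step 1 can be as simple as this (a sketch; svg is a hypothetical variable holding the raw MapViewer response):

// svg holds the SVG document returned by MapViewer (hypothetical variable)
String xmlDecl = "<?xml version=\"1.0\" encoding=\"utf-8\"?>";
svg = svg.replace(xmlDecl, "").trim();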

Categories: BI & Warehousing

Oracle 12cR2 : Optimizer Statistics Advisor

Yann Neuhaus - Mon, 2017-05-08 10:47

The Optimizer Statistics Advisor is a new Advisor in Oracle 12.2.
The goal of this Advisor is to check the way you gather statistics on your database and, depending on what is found, it will make some recommendations on how to improve the statistics gathering strategy in order to provide more efficient statistics to the CBO.
This Advisor is also able to generate remediation scripts to apply the statistics gathering “best practices”.
The recommendations are based on 23 predefined rules:

SQL> select rule_id, name, rule_type, description from v$stats_advisor_rules;


RULE_ID NAME RULE_TYPE DESCRIPTION
---------- ----------------------------------- --------- -------------------------------------------------------------------------------------
0 SYSTEM
1 UseAutoJob SYSTEM Use Auto Job for Statistics Collection
2 CompleteAutoJob SYSTEM Auto Statistics Gather Job should complete successfully
3 MaintainStatsHistory SYSTEM Maintain Statistics History
4 UseConcurrent SYSTEM Use Concurrent preference for Statistics Collection
5 UseDefaultPreference SYSTEM Use Default Preference for Stats Collection
6 TurnOnSQLPlanDirective SYSTEM SQL Plan Directives should not be disabled
7 AvoidSetProcedures OPERATION Avoid Set Statistics Procedures
8 UseDefaultParams OPERATION Use Default Parameters in Statistics Collection Procedures
9 UseGatherSchemaStats OPERATION Use gather_schema_stats procedure
10 AvoidInefficientStatsOprSeq OPERATION Avoid inefficient statistics operation sequences
11 AvoidUnnecessaryStatsCollection OBJECT Avoid unnecessary statistics collection
12 AvoidStaleStats OBJECT Avoid objects with stale or no statistics
13 GatherStatsAfterBulkDML OBJECT Do not gather statistics right before bulk DML
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked
15 UnlockNonVolatileTable OBJECT Statistics for objects with non-volatile should not be locked
16 MaintainStatsConsistency OBJECT Statistics of dependent objects should be consistent
17 AvoidDropRecreate OBJECT Avoid drop and recreate object seqauences
18 UseIncremental OBJECT Statistics should be maintained incrementally when it is beneficial
19 NotUseIncremental OBJECT Statistics should not be maintained incrementally when it is not beneficial
20 AvoidOutOfRange OBJECT Avoid Out of Range Histogram endpoints
21 UseAutoDegree OBJECT Use Auto Degree for statistics collection
22 UseDefaultObjectPreference OBJECT Use Default Object Preference for statistics collection
23 AvoidAnalyzeTable OBJECT Avoid using analyze table commands for statistics collection


24 rows selected.


SQL>

You can have a look at this blog if you want a little bit more information about these rules.
If you want to exclude some rules or some database objects from the Advisor's recommendations, you can define multiple filters (I will do that below).
Well, let's see how to use the Advisor. The first step is to create a task which will run it:

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.CREATE_ADVISOR_TASK(tname);
END;
/

The task is created :

SQL> select task_name, advisor_name, created, status from dba_advisor_tasks where advisor_name = 'Statistics Advisor';


TASK_NAME ADVISOR_NAME CREATED STATUS
------------------------------ ------------------------------ ------------------- -----------
STAT_ADVISOR_1 Statistics Advisor 04.05.2017-11:19:25 INITIAL


SQL>

Now, I want to define some filters.
The first one will disable the Advisor for all objects, the 2nd will enable it only on a specific table, and the 3rd and 4th will exclude two rules:

DECLARE
filter1 CLOB;
filter2 CLOB;
filter3 CLOB;
filter4 CLOB;
BEGIN
filter1 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => NULL,
tabname => NULL,
action => 'DISABLE' );


filter2 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => 'JOC',
tabname => 'T2',
action => 'ENABLE' );


filter3 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'AvoidDropRecreate',
action => 'DISABLE' );


filter4 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'UseGatherSchemaStats',
action => 'DISABLE' );
END;
/

All is ready, let’s run the task…

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
END;
/

…and generate the report :

SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2172
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:35:10
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2172 of task STAT_ADVISOR_1, the Statistics Advisor has no
findings.

-------------------------------------------------------------------------------
SQL>

Cool! Nothing to report regarding statistics gathering on the table JOC.T2 (see filter2 above).
But how does the Advisor react when I run it after having deleted the statistics on this table?
SQL> exec dbms_stats.delete_table_stats(ownname=>'JOC',tabname=>'T2');


PL/SQL procedure successfully completed.


SQL> DECLARE
2 tname VARCHAR2(32767);
3 ret VARCHAR2(32767);
4 BEGIN
5 tname := 'stat_advisor_1';
6 ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
7 END;
8 /


PL/SQL procedure successfully completed.


SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2182
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:44:22
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2182 of task STAT_ADVISOR_1, the Statistics Advisor has 1
finding(s). The findings are related to the following rules: AVOIDSTALESTATS.
Please refer to the finding section for detailed information.

-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with no statistics.
Schema:
JOC
Objects:
T2


Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
-------------------------------------------------------------------------------

It seems to work well. The Advisor detected that there are no stats on the table, and a rule was triggered.
And what about the remediation scripts? First, we have to generate them:

VARIABLE script CLOB
DECLARE
tname VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
:script := DBMS_STATS.SCRIPT_ADVISOR_TASK(tname);
END;
/


PL/SQL procedure successfully completed.

And then display them:

set linesize 3000
set long 500000
set pagesize 0
set longchunksize 100000
set serveroutput on


DECLARE
v_len NUMBER(10);
v_offset NUMBER(10) :=1;
v_amount NUMBER(10) :=10000;
BEGIN
v_len := DBMS_LOB.getlength(:script);
WHILE (v_offset < v_len)
LOOP
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(:script,v_amount,v_offset));
v_offset := v_offset + v_amount;
END LOOP;
END;
/
-- Script generated for the recommendations from execution EXEC_2182
-- in the statistics advisor task STAT_ADVISOR_1
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent
-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'JOC';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'T2';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/

PL/SQL procedure successfully completed.
SQL>

It was a very simple demo, but as you can see above, the Advisor provides a small script to fix what is wrong or missing concerning the statistics of the table.

Conclusion:
Once you have upgraded your database to Oracle 12.2, don't hesitate to set up the new Statistics Advisor. It is easy to deploy and can be fully personalized depending on what you want to check (which objects? which rules?). Moreover, it has been developed by the same team that develops and maintains the CBO. Therefore, they know which statistics the Optimizer needs!


The article Oracle 12cR2 : Optimizer Statistics Advisor appeared first on Blog dbi services.

12cR1 RAC Posts -- 10 : Video on Database Startup

Hemant K Chitale - Mon, 2017-05-08 09:38
I've created a short video on manually starting up a RAC database that has a PDB with two custom services. If a database is shut down before the cluster is shut down, the cluster startup does not automatically start the database instance, so I demonstrate a manual startup.

See the video at:  https://youtu.be/saFvo9QhYSI


.
.
.
Categories: DBA Blogs

Lenox Deploys Oracle Commerce Cloud to Modernize Shopping Experience

Oracle Press Releases - Mon, 2017-05-08 08:30
Press Release
Lenox Deploys Oracle Commerce Cloud to Modernize Shopping Experience Leader in Quality Tabletop, Giftware and Collectibles Engages Shoppers with Personalization

Redwood Shores, Calif.—May 8, 2017

Today Oracle announced that Lenox is using Oracle Commerce Cloud to modernize its omnichannel shopping experience and deliver a steady stream of new products and features at its namesake Lenox and Reed & Barton brands. By leveraging Oracle Commerce Cloud’s configurable, customizable storefront, Lenox can now personalize what customers see based on preferences and shopping patterns, and streamline business operations while supporting its mobile and desktop sales.

“Our focus is providing customers enduring luxury products and trend-setting brands with a fresh, engaging shopping experience. Oracle Commerce Cloud keeps us current in a rapidly changing consumer environment across online and mobile channels with over 25 percent improvement in year-over-year mobile sales,” said Ravi Kurumety, CIO, Lenox Corporation. “Consumers are inspired and empowered by a more visual, intuitive shopping experience where they can easily discover new complementary items and designs to patterns they already own.”

Oracle Commerce Cloud enables Lenox to better support and promote exclusive designs with leading brands and designers like Dansk, Marchesa, and Kate Spade New York, as well as its own popular Lenox dinnerware and giftware. Lenox selected Oracle Commerce Cloud for its speed to market and its retail-rich functionality. For Lenox, the results are a reduction in total cost of ownership, faster access to new features and payment options, empowerment of the business teams, and a more connected shopping experience when customers shop using a mix of mobile, call center and desktop.

Oracle Commerce Cloud was implemented by Oracle Commerce Cloud Consulting (OC3), a team that specializes in e-commerce and is at the forefront of delivering innovation in a cloud services framework. 

“With fantastic support from our executive team and Oracle, we were able to deploy Oracle Commerce Cloud in just about 6 months. The cloud service allows Lenox to personalize customer promotions, introduce new products faster, and deliver more effective search results and recommendations, driving increased visibility and productivity,” said Kurumety.

“With compelling offers from our consulting team, we were able to deliver a rapid deployment in partnership with Lenox in time for the peak trading season,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “Our cloud first strategy allows us to deliver new innovations more frequently to serve the retail market.”

Contact Info
Matt Torres
Oracle
+1 415-595-1584
matt.torres@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at cloud.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • +1 415-595-1584

PostgreSQL on a Pure Storage All Flash Array – Populating PostgreSQL

Yann Neuhaus - Mon, 2017-05-08 07:51

In the last post we explained how we set up the connection from a PostgreSQL server to the Pure Storage array. In this post we'll actually give the array something to work with. What we use for the tests is pgbench, which is included with PostgreSQL and can be used to load a database and then perform a TPC-B-like performance test against the loaded data. Btw: pgbench is a great tool when you want to run the same tests against different configurations of PostgreSQL, or the same configuration of PostgreSQL against different physical or virtual hardware configurations.

To begin with, let's create a dedicated database for loading the data into:

postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 552.693 ms

We'll use pgbench in initialization mode (“-i”) with a scale factor (“-s”) of 10000 to populate the database. This will create the pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers tables and will result in 1000000000 rows in the pgbench_accounts table.

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage
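
As a sanity check on that row count: pgbench creates 100000 rows in pgbench_accounts (plus 1 branch and 10 tellers) per unit of scale factor, so a scale of 10000 gives 10000 * 100000 = 1000000000 account rows.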

The total run-time was around 2.5 hours (including the index creation and the vacuum at the end), and this is the size of the database:

pgpurestorage/postgres MASTER (postgres@5432) # \l+
                                                                      List of databases
┌─────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬────────────────────────
│    Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Descript
├─────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼────────────────────────
│ postgres    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative 
│ purestorage │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                        
│ template0   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty data
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
│ template1   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for ne
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
└─────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴────────────────────────
(4 rows)

The biggest table is the pgbench_accounts table:

pgpurestorage/purestorage MASTER (postgres@5432) # \d+
                           List of relations
┌────────┬──────────────────┬───────┬──────────┬─────────┬─────────────┐
│ Schema │       Name       │ Type  │  Owner   │  Size   │ Description │
├────────┼──────────────────┼───────┼──────────┼─────────┼─────────────┤
│ public │ pgbench_accounts │ table │ postgres │ 125 GB  │             │
│ public │ pgbench_branches │ table │ postgres │ 392 kB  │             │
│ public │ pgbench_history  │ table │ postgres │ 0 bytes │             │
│ public │ pgbench_tellers  │ table │ postgres │ 4360 kB │             │
└────────┴──────────────────┴───────┴──────────┴─────────┴─────────────┘
(4 rows)

.. which contains 1000000000 rows:

pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_accounts;
┌────────────┐
│   count    │
├────────────┤
│ 1000000000 │
└────────────┘
(1 row)

Time: 219538.051 ms (03:39.538)

Here are the screenshots from the Pure Storage management console for the duration of the run (which started some minutes before 20:00). Orange are write operations, blue are reads:

purestorage-pg-load-1
purestorage-pg-load-2

As you can see we are far away from saturating the storage. So, let's do the same load again, but this time with two sessions into two databases in parallel:

# session one 
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage" postgres
DROP DATABASE
Time: 983.297 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 514.140 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage

# session two
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage2" postgres
DROP DATABASE
Time: 0.110 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage2" postgres
CREATE DATABASE
Time: 274.576 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage2

These are the screenshots from the Pure Storage console:

purestorage-pg-load-2-1
purestorage-pg-load-2-2

It took more than twice as long to load the 2000000000 rows, but the limit is not the storage. The average bandwidth went up from around 65 MB/sec to around 90 MB/sec. What is interesting to see is that we need only around 45GB of real storage:

purestorage-pg-load-2-reduction

This means we get an almost 9x compression/de-duplication ratio on the storage layer. Remember that each database has a size of 146GB ((146 * 2 = 292) / 9 ≈ 32, plus the WAL files):

                                                                     List of databases
┌──────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬──────────────────────────────────────
│     Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Description           
├──────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼──────────────────────────────────────
│ postgres     │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative connection dat
│ purestorage  │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ purestorage2 │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ template0    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty database          
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
│ template1    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for new databases   
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
└──────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴──────────────────────────────────────

What happens when the file system gets full on the host but there is still storage available in the back-end? Let's copy one of the test databases; that should almost fill the file system:

pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  302G  149G  68% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # create database test template = purestorage;

CREATE DATABASE
Time: 3312062.975 ms (55:12.063)
pgpurestorage/postgres MASTER (postgres@5432) # 
pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  449G  1.7G 100% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # 

Almost full, so let's create another database:

pgpurestorage/postgres MASTER (postgres@5432) # create database test2 template = purestorage;
ERROR:  could not write to file "base/16562/16534": No space left on device
Time: 34449.626 ms (00:34.450)

So, the behavior is exactly what we expected.
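Getting out of that state is straightforward: dropping the copy immediately returns the space to the file system. A minimal sketch, not part of the original run:

pgpurestorage/postgres MASTER (postgres@5432) # drop database test;
pgpurestorage/postgres MASTER (postgres@5432) # \! df -h /u02/pgdata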

In the next post we'll run some standard pgbench benchmarks against this data set, although the numbers will probably not be very good: we are running in an ESX test environment and that seems to be the bottleneck in our case. We'll have to analyze the sar statistics to confirm that, but there has been no time for it yet.
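For reference, a standard pgbench read/write run of the kind planned for the next post would look something like this; the number of clients, the number of threads and the duration are purely illustrative:

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -c 10 -j 4 -T 600 purestorage

This runs the default TPC-B-like workload with 10 client connections and 4 worker threads for 600 seconds.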

What I can say about the storage system so far: I like the simplicity of the management console. There is not much more you can do than create hosts, volumes and the connections between them, and that's it. Reduction happens automatically, and you have the option to replicate to another storage system. Snapshots are available as well, but we have not tested them yet.

 

The article PostgreSQL on a Pure Storage All Flash Array – Populating PostgreSQL appeared first on Blog dbi services.

Microservices, Database, SOA Cloud: Most-Watched 2 Minute Tech Tips - May 1-7, 2017

OTN TechBlog - Mon, 2017-05-08 07:38

Here they are -- the most-watched 2 Minute Tech Tip videos for the week of May 1-7, 2017. The oldest tip on this week's list is Oracle ACE Dan Atwood's BPM tip (#8) originally published in 2014. The newest tip on the list is Chris Saxon's look at new features in Oracle Database 12.2 (#3), published in February of this year. Chris once again earns the distinction of having two tips in this week's list, as does Oracle ACE Atul Kumar. And Atul's streak continues, as his tip on EBS integration with OAM makes its 57th top 10 appearance.

Congrats to all of this week's Top 10 tipsters.

If you have a tip of your own to share, let's make it happen. Contact me: bob.rhubart@oracle.com

 1. (↑) Orchestration vs. Choreography | Kelly Goetsch
    Published on Aug 17, 2016. Last week: #9. 10th Top 10 appearance.
 2. (↑) Heap Tables and Index Organized Tables in Oracle | Chris Saxon
    Published on Mar 4, 2015. Last week: #5. 44th Top 10 appearance.
 3. (↓) New Features in Oracle Database 12.2 | Chris Saxon
    Published on Feb 22, 2017. Last week: #3. 11th Top 10 appearance.
 4. (↑) Are You Ready for Microservices? | Carrasco and Gonzalez
    Published on Nov 2, 2016. Last Top 10 appearance: November 21, 2016. 3rd Top 10 appearance.
 5. (↑) When to Use Oracle SOA Cloud Service | Robert van Molken
    Published on Dec 1, 2016. Last Top 10 appearance: April 17, 2017. 15th Top 10 appearance.
 6. (↑) Oracle EBS Integration with OAM | Atul Kumar
    Published on Feb 25, 2016. Last week: #2. 57th Top 10 appearance.
 7. (↑) Oracle EBS integration with Identity and Access Management for SSO | Atul Kumar
    Published on Feb 14, 2017. Last week: #6. 5th Top 10 appearance.
 8. (↑) BPM Process Patterns using BPMN | Dan Atwood
    Published on Sep 24, 2014. 1st Top 10 appearance.
 9. (↑) On Demand Rules in HFM | Erich Ranz
    Published on Dec 9, 2015. Last Top 10 appearance: June 6, 2016. 2nd Top 10 appearance.
10. (↑) JMX and WebLogic MBean Objects | Fevzi Korkutata
    Published on Feb 7, 2017. Last Top 10 appearance: April 3, 2017. 8th Top 10 appearance.

 

Oracle Optimizes Global Transportation and Trade Compliance Processes

Oracle Press Releases - Mon, 2017-05-08 07:30
Press Release
Oracle Optimizes Global Transportation and Trade Compliance Processes
New cloud trade and transportation management capabilities help organizations modernize logistics processes, increase perfect order performance and lower supply chain costs

Redwood Shores, Calif.—May 8, 2017

Oracle today announced a broad set of market-driven enhancements to Oracle Logistics Cloud that empower organizations to streamline global transportation and trade compliance processes. Under constant pressure to deliver goods faster, organizations can take advantage of the latest release of Oracle Logistics Cloud to streamline transportation processes, increase perfect order performance and reduce order cycle times. The new capabilities will also help organizations speed up delivery across all modes of domestic and international transport so that customers receive goods faster.

The latest release of Oracle Logistics Cloud includes new capabilities in both Oracle Transportation Management (OTM) Cloud and Oracle Global Trade Management (GTM) Cloud. The latest innovations help logistics, global trade and supply chain professionals achieve regulatory compliance by creating a more efficient logistics network that provides complete visibility and control over all orders and shipments. Built as a native trade and transportation platform, Oracle Logistics Cloud provides a unified, cloud-based solution for managing all aspects of a global logistics network.

“It’s challenging to manage the complexity involved with transporting goods to different regions and countries, and we’re committed to innovation in logistics cloud applications to make it as easy as possible for organizations to comply with trade and customs regulations,” said Derek Gittoes, vice president of supply chain management product strategy at Oracle. “The latest release of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud helps organizations streamline global transportation and trade compliance processes by providing a new modern user interface that maximizes user productivity.” 

Built for the cloud, the latest releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud introduce intuitive user experience enhancements that provide users with the vital information needed for decision making on the device of their choice. With the Logistics Adaptor in Oracle Integration Cloud Services (ICS), customers can deploy Logistics Cloud more quickly and easily alongside products such as Oracle Supply Chain Management (SCM) Cloud and Oracle Enterprise Resource Planning (ERP) Cloud to enable an end-to-end order-to-cash solution in the cloud.

Additional features include:

  • Broad support for import, export, and transit declarations to ease the process of preparing customs declarations for users
  • New fleet management capabilities effectively manage private and dedicated fleet and carrier resources, allowing increased utilization and cost savings
  • New rate maintenance features enable customers to insert and update freight rates in a simpler manner than ever before through spreadsheet importing and exporting capabilities
  • Advances in 3D load configuration, which help organizations optimize the packing of trailers and containers, improve equipment utilization and reduce the number of trucks needed to move goods
  • Enhancements in dock scheduling capabilities streamline and simplify the dock appointment process
  • Support for license reservations ensures that a license is guaranteed to be available when the goods are shipped. This is part of the License Management functionality in Oracle Trade Compliance, which reduces delays in delivering goods to customers.

The latest releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud are available now for existing and new customers. Organizations can choose flexible deployment options to best fit their needs.

For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at cloud.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

opt_estimate

Jonathan Lewis - Mon, 2017-05-08 02:04

The opt_estimate hint is one of many that shouldn’t be used in end-user code and isn’t officially documented. Nevertheless – like so many other hints – it’s a hint that is hard to ignore when you see it floating around the code generated by the Oracle software. This note is prompted by a twitter question from fellow Oak Table member Stefan Koehler asking about the working of the hint’s index_filter parameter. Checking my library I knew the answer was yes – so after a quick exchange on twitter I said I’d write up a short note about my example, and this is it.

Although the hint is not one that you should use, it’s worth writing this note as a reminder of the significance, for index range scans, of the access predicates and filter predicates that Oracle reports in the predicate section of an execution plan.

When a query does an index range scan it’s going to walk through a (logically) consecutive set of index leaf blocks, looking at each individual index entry in turn (and those index entries will be correctly “sorted” within each leaf block) to see if it should use the rowid it finds there to visit the table. For “perfect” use of an index Oracle may be able to identify the starting and ending positions it needs in the index and know that it should use every rowid in between to visit the table; there will be no “wasted” examinations of index entries on the way. However, in a query involving a multi-column index and multiple predicates Oracle might have to use predicates on the first column(s) of the index to identify the starting and ending positions, but use further predicates on later columns in the index to decide whether or not to use each index entry to visit the table.

The predicates that Oracle can use to identify the range of leaf blocks it should visit are called access predicates, and the predicates that Oracle can use to further eliminate rowids as it walks along the leaf blocks are called filter predicates.

The simplest way to demonstrate this is with a query of the form: “Index_Column1 = … and Index_Column3 = …”, and that’s what I’ll be using in my model:


rem
rem     Script:         opt_est_ind_filter.sql
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             11.2.0.4
rem             10.2.0.5
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        mod(rownum - 1,100)             n1,
        rownum                          n2,
        mod(rownum - 1, 100)            n3,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to bypass WordPress formatting issue
;

create index t1_i1 on t1(n1,n2,n3) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

select leaf_blocks from user_indexes where index_name = 'T1_I1';

The number of leaf blocks in the index was 3,062.

I’ve defined n1 and n3 to match, and for any value between 0 and 99 there are 10,000 rows in the table where n1 and n3 hold that value. However, in the absence of a column group defined on (n1, n3), the optimizer is going to use its standard “no correlation” arithmetic to decide that there are 10,000 possible combinations of n1 and n3, and 100 rows per combination. Let’s see what this does for a simple query:


set autotrace traceonly explain

select  count(v1)
from    t1
where   n1 = 0 and n3 = 0
;

set autotrace off


--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |   134   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |   134   (1)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |   100 |       |    34   (3)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)

The plan shows an index range scan where n3=0 is used as a filter predicate and n1=0 (with a tiny bit of extra accuracy from n3=0) is used as the access predicate, and the optimizer has calculated that 100 rowids will be retrieved from the index and used to find 100 rows in the table.

The cost of the range scan is 34: the optimizer’s estimate is that the scale of the initial access to the index is dictated by the predicate n1 = 0, which is responsible for 1% of the index – giving us 3,062/100 = 31 leaf blocks (rounded up). Added to that there will be a little extra cost for the trip down the blevel of the index and a little extra for the CPU usage.
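As an aside: if you wanted the optimizer to recognise the correlation between n1 and n3, rather than overriding its arithmetic with hints, the supported mechanism would be a column group. A sketch, assuming the table from above:

select dbms_stats.create_extended_stats(user, 'T1', '(n1,n3)') from dual;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1 for columns (n1,n3) size 1'
        );
end;
/

With statistics on the column group the optimizer can estimate the combined selectivity of “n1 = 0 and n3 = 0” directly instead of multiplying the two individual selectivities together.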

Now let’s tell the optimizer that its cardinality estimate is out by a factor of 25 (rather than the factor of 100 we know it actually is) in one of two different ways:

prompt  ============================
prompt  index_scan - scale_rows = 25
prompt  ============================

select
        /*+
                qb_name(main)
                index(@main t1(n1, n2, n3))
                opt_estimate(@main index_scan   t1, t1_i1, scale_rows=25)
        */
        count(v1)
from    t1 
where   n1 = 0 and n3 = 0
;

prompt  ==============================
prompt  index_filter - scale_rows = 25
prompt  ==============================

select
        /*+
                qb_name(main)
                index(@main t1(n1, n2, n3))
                opt_estimate(@main index_filter t1, t1_i1, scale_rows=25)
        */
        count(v1)
from    t1 
where   n1 = 0 and n3 = 0
;

In both examples I’ve hinted the index to stop the optimizer from switching to a tablescan; but in the first case I’ve told Oracle that the estimate for the entire index range scan has to be scaled up by a factor of 25, while in the second case I’ve told Oracle that its estimate due to the final filter has to be scaled up by a factor of 25. How does this affect the costs and cardinalities of the plans?


============================
index_scan - scale_rows = 25
============================
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |  3285   (1)| 00:00:17 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |  3285   (1)| 00:00:17 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |  2500 |       |   782   (2)| 00:00:04 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)



==============================
index_filter - scale_rows = 25
==============================
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |  2537   (1)| 00:00:13 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |  2537   (1)| 00:00:13 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |  2500 |       |    34   (3)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)

In both cases the cardinality estimate has gone up by a factor of 25 for the index range scan. Notice, though, that the optimizer is now suffering from cognitive dissonance – it “knows” that it’s got 2,500 rowids to use to visit the table, it “knows” there are no extra predicates to eliminate rows from the table when it gets there, but it also “knows” that it’s going to find only 100 rows. Messing around with opt_estimate() and cardinality() hints is difficult to get right.

More significantly for the purposes of this note are the costs. When we use the index_filter parameter the optimizer still thinks it’s going to access the same number of leaf blocks, and the only correction it makes is to the number of rowids it finds in those blocks, so the index range scan cost hasn’t changed (though I suppose in some cases it might change slightly due to increased CPU costs). When we use the index_scan parameter the optimizer scales up its estimate of the number of leaf blocks (and hence the cost), which we can see in the figures: 782 / 25 = 31.28. (Without going into the trace file and checking the exact details, that’s close enough to the previously reported 34 for me to think it’s allowing for 25 times the number of leaf blocks plus a chunk more CPU.)
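If you do want to go into the trace file and check the exact details, the CBO trace (event 10053) is the tool for the job. A minimal sketch of generating one for the hinted query; the tracefile_identifier is just there to make the file easy to find:

alter session set tracefile_identifier = 'opt_est_ind_filter';
alter session set events '10053 trace name context forever, level 1';

select
        /*+
                qb_name(main)
                index(@main t1(n1, n2, n3))
                opt_estimate(@main index_scan   t1, t1_i1, scale_rows=25)
        */
        count(v1)
from    t1
where   n1 = 0 and n3 = 0
;

alter session set events '10053 trace name context off';

The resulting trace file (in the session’s diagnostic trace directory) shows the leaf block and cost calculations for each access path the optimizer considered.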

Conclusion

As I said at the outset, opt_estimate() really isn’t a hint you should be playing with, but I hope that this note has helped shed some light on the significance of access predicates and filter predicates in relation to the costs of index range scans.

Footnote

There were two significant details in the notes I had in my script. First was the frequency of the expression “it looks as if” – which is my shorthand for “I really ought to do some more tests before I publish any conclusions”; second was that my most recent testing had been on 10.2.0.5 (where the results were slightly different thanks to sampling in the statistics). Given that Stefan Koehler had mentioned 11.2.0.3 as his version I ran up an instance of 11.1.0.7 – and found that the index_filter example didn’t scale up the cardinality – so maybe his problem is a version problem.

 


New OA Framework 12.2.5 Update 12 Now Available

Steven Chan - Mon, 2017-05-08 02:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes 39 fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • A favorite link pointing outside Oracle E-Business Suite that is configured to open in a new browser window opens in the same window from the Framework Simplified Home page.
  • A trusted domain URL (such as a UIX/Cabo URL) redirects to an untrusted site when a malicious URL is framed.

Related Articles

Categories: APPS Blogs

May 2017 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Mon, 2017-05-08 02:00

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier bugfixes that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing bugfixes to the default corresponding patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the following software:

Recommended Versions

  • April 2017 Database 12.1.0.2 PSU and Proactive Bundle Patch
  • April 2017 Database 11.2.0.4 PSU and Engineered Systems Patch
  • Microsoft Windows Bundle Patch 12.1.0.2.170228

Minimum Versions

  • January 2017 Database 12.1.0.2 PSU and Proactive Bundle Patch
  • October 2016 Database 11.2.0.4 PSU and Engineered Systems Patch

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.
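Once the patch is unzipped, ETCC is run as two scripts: checkDBpatch.sh on the database tier and checkMTpatch.sh on the application tier. A sketch of a typical invocation; the zip file name shown here is illustrative, so use whatever name My Oracle Support gives you for the download:

$ unzip p17537119_R12_GENERIC.zip
$ ./checkDBpatch.sh       # run on the database tier
$ ./checkMTpatch.sh       # run on the application (middle) tier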

Related Articles

References


Categories: APPS Blogs
