Feed aggregator

Model and Match Recognize

Tom Kyte - Tue, 2017-05-09 10:26
Hi Tom, I tried searching your site for Model clause but lot of unwanted discussion where just Model keyword is used are coming in search list. Is there any way to just search Model Clause related stuff.
Categories: DBA Blogs

Question on materialized view when concatenating several columns as a new column - cannot REFRESH FAST ON COMMIT

Tom Kyte - Tue, 2017-05-09 10:26
There is an error when my view is when concatenating several columns as a new column, it cannot REFRESH FAST ON COMMIT. CREATE MATERIALIZED VIEW LOG ON EXP_DC_HST WITH SEQUENCE, ROWID (ACTIVITY_DESCRIPTION, ACTIVITY_TYPE, CD1, CD2, CD3, CD4, C...
Categories: DBA Blogs

Rollback in trigger

Tom Kyte - Tue, 2017-05-09 10:26
Hi Tom, we are trying to create table/column value(count) constraint by using trigger. what i am trying is, for table A, column name:'value'. this column should not allow more than two 'YES' values. if it encounter any third 'YES' value it has to...
Categories: DBA Blogs

Session Concurrency Issue

Tom Kyte - Tue, 2017-05-09 10:26
Hi, We are maintaining unique session id in a table. once a session id pick as random we will be change the status as using mode until get response from third party service,then again simultaneously reusable.My scenario is parallely giving same s...
Categories: DBA Blogs

How to create tables automatically from multiple csv files in oracle database?

Tom Kyte - Tue, 2017-05-09 10:26
How to create tables automatically from multiple csv files in oracle database? For Example: I have a client requirement that load data from several csv files into an oracle database.I have 50 files with different structures.so ,i want create table ...
Categories: DBA Blogs

Storing refcursor to physical table

Tom Kyte - Tue, 2017-05-09 10:26
Best way to store refcursor to physical table without impacting performance. I do have a code built, but it takes too much time when data is huge. Also this code doesn't seems to be scalable as it continuously creates temporary tables for each differ...
Categories: DBA Blogs

Oracle Cloud Platform Adds New Levels of Performance, Availability, and Access for Oracle Database Applications

Oracle Press Releases - Tue, 2017-05-09 10:00
Press Release
Oracle Cloud Platform Adds New Levels of Performance, Availability, and Access for Oracle Database Applications Oracle Real Application Clusters, Oracle FastConnect, and Microsoft Windows Server available on Oracle’s next gen IaaS, providing customers an expanded range of infrastructure capabilities

Redwood Shores, Calif.—May 9, 2017

Continuing its Oracle Cloud Platform innovation, Oracle today announced major enhancements that make it easier for organizations to move enterprise database applications to the cloud. Oracle now offers Oracle Real Application Clusters (RAC), Oracle FastConnect private networking capabilities, and Microsoft Windows Server support for Oracle’s high performance next generation IaaS. These offerings enable a new range of customers to deploy and access Oracle Database applications in the cloud with performance and availability on par with or exceeding on-premises deployments, and demonstrate performance up to 20 times faster on benchmark tests compared with Oracle running on a competitor’s cloud.*

Oracle is the only cloud vendor to provide customers with complete and highly available enterprise application environments in the cloud. With this announcement, Oracle is delivering unique, integrated platform services to mid-sized and large enterprises that don’t have the expertise or capital to maintain comparable systems on-premises. Customers can now use Oracle’s next generation IaaS to deploy applications based on Microsoft Windows, Oracle Linux, and other Linux distributions, and scale to millions of operations per second, connect to Oracle RAC Databases on bare metal servers that support hundreds of thousands of transactions per second, and use Oracle FastConnect to connect these applications to on-premises networks at costs as low as one-twentieth of a competitor’s comparable offerings for high bandwidth users.** Customers can either use all the services and pay an hourly charge, or purchase a subscription for consistent billing.

“Enterprise customers and ISVs have high expectations for performance and availability for their database applications. At the same time, they want to preserve hard-won experience and best practices, particularly from on-premises deployments,” said Kash Iftikhar, vice president of product management, Oracle. “We introduced our next generation IaaS to address all of these requirements. With these database, compute, and networking enhancements, Oracle Cloud Platform offers a complete, high performance solution for business-critical database applications in the cloud.”

Oracle Removes Roadblocks, Frustration, and Guesswork for Infrastructure Customers

ICAT provides property insurance protection to over 65,000 homeowners and businesses located in hurricane- and earthquake-exposed regions of the United States.

“ICAT is in the catastrophe insurance business, so we’re very sensitive to risk and business continuity. We’ve run our mission-critical online quoting application on-premises with Oracle Database for years, but keeping up with business growth was a challenge,” said Mike Ferber, CIO, ICAT. “The Oracle Database Cloud Service on bare metal exceeded our performance requirements and made a move to the Oracle Cloud feasible. The ability to quickly scale up processing power, as well as leverage Oracle RAC in the cloud, gives us great confidence that we will be able to offer our customers the experience and reliability necessary with our new cloud-based system.”

Darling Ingredients Inc. is a world leader in organic ingredients and specialty products for customers in the pharmaceutical, food, pet food, feed, and other industries.

“Darling Ingredients has had an aggressive plan to move all of our key IT applications into the cloud. We have a number of critical Oracle and non-Oracle applications, many of which rely on Oracle Database,” said Tom Morgan, Oracle Apps DBA Manager at Darling Ingredients. “Oracle Database Cloud Services on bare metal met our stringent performance requirements. Having predictable, high bandwidth connectivity to our end users is critical, and Oracle FastConnect was a great solution.”

Penn State University’s Institute for CyberScience (ICS) provides High Performance Computing (HPC) resources and capabilities to the research community, both internal and external to the university through its Advanced CyberInfrastructure (ICS-ACI).

“HPC is typically the most demanding workload from a CPU, storage, and networking perspective,” said Chuck Gilbert, Technical Director and Chief Architect for Advanced Cyberinfrastructure at Penn State University. “Oracle Cloud Platform, with its Bare Metal Cloud Compute Service, was our choice for HPC, equaling our on-premises resources and meeting our performance requirements. The addition of Oracle FastConnect makes it possible for our customers to move huge amounts of data in and out of the cloud securely and consistently. We’re leveraging Oracle Cloud to support our exponentially growing demand for HPC projects, and help researchers get resources and results faster. By leveraging a Hybrid Cloud Bursting Model, ICS-ACI will be able to service all workload demands from our researchers above and beyond existing on premises capacities.”

Oracle Enhances Oracle Cloud Platform Services

Oracle Cloud Platform offers the broadest range of integrated services to enable customers of any size and at any price point to easily develop, test, and securely and cost-effectively deploy their business-critical applications in the cloud. Oracle Cloud Platform’s next generation IaaS capabilities have been expanded to include:

  • Support for two-node Oracle RAC on bare metal compute, providing proven, continuous database availability through failures or software maintenance. Two-node RAC can scale on-demand from 4-72 OCPUs, with 1TB of RAM and 24TB of storage.

  • Oracle FastConnect on next generation IaaS delivers customers an on-demand private network connection that offers flexible bandwidth options (increments of 1Gb or 10Gb), can scale up and down with their business needs, and can be provisioned in minutes via console and API. Current global partner providers are Equinix and Megaport.

  • Support for Microsoft Windows Server on 1-, 2-, 4-, 8-, and 16-OCPU virtual machine compute shapes and three 36-OCPU bare metal compute shapes provides users a range of compute, memory, and storage resources, with image portability across instance types, all on the same high performance virtual cloud network.

  • Support for CentOS, RHEL, and Ubuntu, which along with Oracle Linux and Microsoft Windows Server distributions covers the vast majority of enterprise operating systems.

*Performance compares internal benchmarks of Oracle RAC on Oracle Database Cloud Service on bare metal versus Oracle Database 12.1 on Amazon Web Services M4.10xlarge EBS optimized instances (www.oracle.com/corporate/pressrelease/database-benchmarking-092016.html and docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).

**Price compares Oracle FastConnect versus Microsoft Azure Express Route in high bandwidth scenario (10 Gbps, 500 TB/month) (https://cloud.oracle.com/bare-metal-network/pricing and azure.microsoft.com/en-us/pricing/details/expressroute/).

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at cloud.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Oracle E-Business Suite APPS_NE Security Risks

The most recent version of the Oracle E-Business Suite, Release 12.2, introduces on-line patching to reduce downtime requirements. This new technical functionality is based on Edition-based redefinition provided by the Oracle 11gR2 database. For the E-Business Suite to make use of Editioning, Oracle has added a new schema to the ‘APPS’ family – the APPS_NE schema.

The APPS_NE schema owns those objects previously owned by APPS that cannot be editioned; in other words, APPS_NE is the APPS schema for the non-editioned database objects.

There are several security implications with regard to APPS_NE:

  • The same password must be shared among APPLSYS, APPS, and APPS_NE. The default password for APPS_NE is 'APPS'.
  • APPS_NE has elevated system privileges similar to APPS (e.g., SELECT ANY TABLE), but the set is not identical. See the listing below for the 56 privileges granted to APPS_NE.
  • APPS_NE must be logged, audited, and monitored just as you do APPS. APPS_NE needs to be added to your audit scripts and procedures as well as to your monitoring solutions (a sketch follows below).
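As an illustration only (these are not Integrigy's audit scripts), here is a minimal sketch of how APPS_NE might be folded into such checks, assuming traditional auditing and the standard DBA views are available:

-- Flag APPS_NE if it still has its default password
SELECT username FROM dba_users_with_defpwd WHERE username = 'APPS_NE';

-- Audit APPS_NE logons the same way APPS is typically audited
AUDIT SESSION BY apps_ne;

-- Review recent APPS_NE activity captured in the audit trail
SELECT username, userhost, action_name, timestamp
FROM dba_audit_trail
WHERE username = 'APPS_NE'
ORDER BY timestamp DESC;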

The following lists summarize the system privilege differences between APPS and APPS_NE (a query to reproduce the comparison is shown after the lists):

-- APPS_NE has 3 privileges that APPS does not
CREATE MATERIALIZED VIEW
CREATE SEQUENCE
DROP ANY TYPE

 

-- APPS has 18 privileges that APPS_NE does not
ALTER ANY PROCEDURE
ALTER DATABASE
ANALYZE ANY DICTIONARY
CHANGE NOTIFICATION
CREATE ANY DIRECTORY
CREATE ANY EDITION
CREATE ANY PROCEDURE
CREATE EXTERNAL JOB
CREATE JOB
CREATE PUBLIC DATABASE LINK
CREATE PUBLIC SYNONYM
DEQUEUE ANY QUEUE
DROP ANY EDITION
DROP ANY PROCEDURE
DROP PUBLIC SYNONYM
ENQUEUE ANY QUEUE
EXECUTE ANY TYPE
MANAGE ANY QUEUE

 

-- APPS_NE has 56 system privileges
ALTER ANY CLUSTER
ALTER ANY INDEX
ALTER ANY MATERIALIZED VIEW
ALTER ANY OUTLINE
ALTER ANY ROLE
ALTER ANY SEQUENCE
ALTER ANY TABLE
ALTER ANY TRIGGER
ALTER ANY TYPE
ALTER SESSION
ALTER SYSTEM
ANALYZE ANY
COMMENT ANY TABLE
CREATE ANY CLUSTER
CREATE ANY CONTEXT
CREATE ANY INDEX
CREATE ANY MATERIALIZED VIEW
CREATE ANY OUTLINE
CREATE ANY SEQUENCE
CREATE ANY SYNONYM
CREATE ANY TABLE
CREATE ANY TRIGGER
CREATE ANY TYPE
CREATE ANY VIEW
CREATE DATABASE LINK
CREATE MATERIALIZED VIEW
CREATE PROCEDURE
CREATE ROLE
CREATE SEQUENCE
CREATE SESSION
CREATE SYNONYM
CREATE TRIGGER
CREATE TYPE
CREATE VIEW
DELETE ANY TABLE
DROP ANY CLUSTER
DROP ANY CONTEXT
DROP ANY INDEX
DROP ANY MATERIALIZED VIEW
DROP ANY OUTLINE
DROP ANY ROLE
DROP ANY SEQUENCE
DROP ANY SYNONYM
DROP ANY TABLE
DROP ANY TRIGGER
DROP ANY TYPE
DROP ANY VIEW
EXECUTE ANY PROCEDURE
GLOBAL QUERY REWRITE
GRANT ANY ROLE
INSERT ANY TABLE
LOCK ANY TABLE
SELECT ANY SEQUENCE
SELECT ANY TABLE
UNLIMITED TABLESPACE
UPDATE ANY TABLE
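To reproduce this comparison on your own instance, here is a minimal sketch using DBA_SYS_PRIVS (it assumes DBA access and is not part of the original listings):

-- System privileges granted to APPS_NE but not to APPS
SELECT privilege FROM dba_sys_privs WHERE grantee = 'APPS_NE'
MINUS
SELECT privilege FROM dba_sys_privs WHERE grantee = 'APPS';

-- System privileges granted to APPS but not to APPS_NE
SELECT privilege FROM dba_sys_privs WHERE grantee = 'APPS'
MINUS
SELECT privilege FROM dba_sys_privs WHERE grantee = 'APPS_NE';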

 

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Partner Webcast – Oracle Application Development Cloud Platform: Proven Way for Supporting SDLC

As you build cloud first solutions, Oracle Cloud provides a platform to develop and deploy nearly any type of application, including enterprise apps, lightweight container apps, web apps, mobile...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Quarterly EBS Upgrade Recommendations: May 2017 Edition

Steven Chan - Tue, 2017-05-09 02:00

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for May 2017

Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to September 30, 2023.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support. No new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support. No new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.6 (Sept. 2016)
  • EBS 12.1: 12.1.3 RPC5 (Aug. 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb. 2016)
  • EBS 12.1: StartCD 13 (Aug. 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 9 (Apr. 2017); FND (Aug. 2016); EBS 12.2.5 OAF Update 12 (May 2017); EBS 12.2.4 OAF Update 15 (Mar. 2017); ETCC (May 2017); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv28 (Nov. 2016)
  • EBS 12.1: 12.1.3 RPC5; OAF Bundle 5 (Jun. 2016); JTT Update 4 (Oct. 2016); Daylight Savings Time DSTv28 (Nov. 2016)

Apply the latest security updates

  • EBS 12.2 and 12.1: Apr. 2017 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager; Migrate from SSL or TLS 1.0 to TLS 1.2; Sign JAR files
  • EBS 12.0: Oct. 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components

  • EBS 12.2 and 12.1: Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; switch to Java Web Start; upgrade to IE 11; upgrade to Firefox ESR 52; upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016); upgrade Windows XP, Vista, and Win 10v1507 to later versions (e.g. Windows 10v1607)

Upgrade to the latest database

  • All releases (EBS 12.2, 12.1, 12.0, 11.5.10): Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9
  • EBS 12.1: Migrate from Oracle SSO to OAM 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9

If you're using Oracle Discoverer

  • EBS 12.2 and 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA). Discoverer 11.1.1.7 reaches End of Life in June 2017.

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9 or upgrade to Portal 11.1.1.6 (End of Life Jun. 2017)
Categories: APPS Blogs

Grid Infrastructure Installation on SLES 12 SP1

Yann Neuhaus - Tue, 2017-05-09 01:43

This week I needed to install Oracle Grid Infrastructure 12c release 1 in a SLES 12 SP1 environment. Everything worked fine until I ran the root.sh at the end of the installation. Here’s a quick description of the problem and the workaround.

The root.sh script ran into errors and the installation was unsuccessful:
oracle:/u00/app/grid/12.1.0.2 # /u00/app/grid/12.1.0.2/root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/12.1.0.2
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 39: [: -eq: unary operator expected
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying dbhome to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying oraenv to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying coraenv to /usr/local/bin ...
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/12.1.0.2/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/03/31 09:56:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
PRCR-1006 : Failed to add resource ora.ons for ons
PRCR-1115 : Failed to find entities of type resource type that match filters (TYPE_NAME ends .type) and contain attributes
CRS-0184 : Cannot communicate with the CRS daemon.
2017/03/31 09:57:04 CLSRSC-180: An error occurred while executing the command 'srvctl add ons' (error code 0)
 
2017/03/31 09:57:55 CLSRSC-115: Start of resource 'ora.evmd' failed
 
2017/03/31 09:57:55 CLSRSC-202: Failed to start EVM daemon
 
The command '/u00/app/grid/12.1.0.2/perl/bin/perl -I/u00/app/grid/12.1.0.2/perl/lib -I/u00/app/grid/12.1.0.2/crs/install /u00/app/grid/12.1.0.2/crs/install/roothas.pl ' execution failed

When we run crsctl stat res -t:

grid@oracle_grid:/u00/app/grid/product/12.1.0.2/grid/bin> ./crsctl stat res -t 
 -------------------------------------------------------------------------------- 
 Name Target State Server State details 
 -------------------------------------------------------------------------------- 
 Cluster Resources 
 -------------------------------------------------------------------------------- 
 ora.cssd 
 1 OFFLINE OFFLINE STABLE 
 ora.diskmon 
 1 OFFLINE OFFLINE STABLE 
 ora.evmd 
 1 OFFLINE OFFLINE STABLE 
 --------------------------------------------------------------------------------

After trying multiple times with other Oracle Grid Infrastructure versions from 11.2.0.4 to 12.2.0.1, I had to open a service request with Oracle, and they provided the following workaround:

Once root.sh has failed, do not close the GUI installer window, because we will use it to complete the installation after root.sh succeeds. First, we have to deconfigure the failed installation:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./roothas.pl -verbose -deconfig -force
oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # . rootcrs.sh -deconfig -force

Then we modify the /etc/ld.so.conf file by adding /lib64/noelision as the first entry. The file should look like this:

oracle@oracle_grid:/u00/app/oracle/product/12.1.0.2/dbhome_1/dbs/ [DORSTREA] cat /etc/ld.so.conf
/lib64/noelision
/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

Finally, we create a symbolic link from $GI_HOME/lib/libpthread.so.0 to /lib64/noelision/libpthread-2.19.so:

lrwxrwxrwx 1 root root            35 Apr 11 15:56 libpthread.so.0 -> /lib64/noelision/libpthread-2.19.so

Now we just have to re-run root.sh, and this time it completes successfully:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid # . root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/product/12.1.0.2/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0.2/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/04/11 15:56:37 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
oracle_grid    /u00/app/grid/product/12.1.0.2/grid/cdata/oracle_grid/backup_20170411_155653.olr     0    
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracle_grid'
CRS-2673: Attempting to stop 'ora.evmd' on 'oracle_grid'
CRS-2677: Stop of 'ora.evmd' on 'oracle_grid' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracle_grid' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/11 15:57:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After the root.sh is successfully completed, we continue with the Oracle Installer, and everything is correctly configured for the Oracle Grid Infrastructure.

I will keep you informed of how the bug evolves, and I will test the Oracle Grid Infrastructure installation under SLES 12 SP2 as soon as possible.

This article Grid Infrastructure Installation on SLES 12 SP1 appeared first on Blog dbi services.

DBFS and XAG for Goldengate P3

Michael Dinh - Mon, 2017-05-08 19:47

Start Pump Extract at source failed as shown below.

INFO OGG-00993 Oracle GoldenGate Capture for Oracle, p_test.prm: EXTRACT P_TEST started.
ERROR OGG-01224 Oracle GoldenGate Capture for Oracle, p_test.prm: TCP/IP error 111 (Connection refused), endpoint: ggvip_hawk.local:7809.
ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, p_test.prm: PROCESS ABENDING.

The hypothesis is that the ports are not open from source to target, so let's verify the configuration.

VIP name and address from DNS.

Name:    ggvip_hawk.local
Address: 10.10.10.201

Recall this is how the GoldenGate instance was created.

# $GRID_HOME/bin/agctl add goldengate gg_xx \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/db_1 \
--nodes hawk1,hawk2 \
--network 1 --ip 10.10.10.201 \
--user ggsuser --group dba \
--filesystems dbfs_mount \
--gg_home /u03/gg/12.2.0 

From GoldenGate instance gg_xx, Application VIP (gg_xx-vip) is created using address 10.10.10.201.

Check for xag resource.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -t|grep -A2 xag
xag.gg_xx-vip.vip
      1        ONLINE  ONLINE       hawk1                STABLE
xag.gg_xx.goldengate
      1        ONLINE  ONLINE       hawk1                STABLE

Verify Application VIP address assigned.

ggsuser@hawk1 /u03/app/gg/12.2.0 
$ $GRID_HOME/bin/crsctl stat res xag.gg_xx-vip.vip -p|grep USR_ORA_VIP
GEN_USR_ORA_VIP=
USR_ORA_VIP=10.10.10.201
ggsuser@hawk1 /u03/app/gg/12.2.0 
$ nslookup 10.10.10.201
Server:		10.80.107.101
Address:	10.80.107.101#53

name = ggvip_hawk.local
ggsuser@hawk1 /u03/app/gg/12.2.0 $ 
$GRID_HOME/bin/crsctl stat res xag.gg_xx-vip.vip -p
NAME=xag.gg_xx-vip.vip
TYPE=app.appvipx.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:dba:r-x,user:ggsuser:r-x
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
APPSVIP_FAILBACK=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Application VIP
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=7d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=10.10.10.201
VERSION=12.1.0.1.0

Passing Business Object Values to Custom UI Components in ABCS

Shay Shmeltzer - Mon, 2017-05-08 18:54

This quick one is based on a customer question about Oracle Application Builder Cloud Service. The scenario is that we have a business object that has a field that contains the URL to an image. We want to be able to show that image on a page in Oracle Application Builder Cloud Service.

Image showing up

To do that I added a custom UI component object to the details (or edit) page of a record - then I switched the HTML of that object to be: <img id="logoimg"/>

custom code

 

I then added a button to the page and added a bit of custom JavaScript code in its action as follows:

var img = document.getElementById('logoimg');
img.src = $Company.getValue('Logo');
resolve();

This code simply locates the custom object on the page using the object id and then sets the src property of the img html tag to match the value of the field in the business object.

Code in Button

 

 

Categories: Development

Having a mid-life crisis on top-of-the-range hardware

The Anti-Kyte - Mon, 2017-05-08 17:31

I’ve recently begun to realise that I’m not going to live forever.
“Surely not”, you’re thinking, “look at that young scamp in the profile picture, he’s not old enough to be having a mid-life crisis”.

Well, five minutes ago, that was a recent picture. Suddenly, it’s more than 10 years old. As Terry Pratchett once observed, “Inside every old person is a young person wondering what happened”.

Fortunately, with age comes wisdom…or a sufficiently good credit rating with which to be properly self-indulgent.
Now, from what I’ve observed, men who get to my stage in life seem to seek some rather fast machinery as a cure for the onset of morbid reflections on the nature of their own mortality.
In this case however, it's not the lure of a fast car that I've succumbed to. First and foremost, I am a geek. And right now, I'm a geek with a budget.

Time then to draw up the wish list for my new notebook. It will need to…

  • be bigger than my 10-inch netbook but small enough to still be reasonably portable
  • have a fast, cutting-edge processor
  • have an SSD with sufficient storage for all my needs
  • have large quantities of RAM
  • come with a Linux Operating System pre-installed

For any non-technical readers who’ve wandered down this far, the rough translation is that I want something with more silicon in it than one of those hour-glasses for measuring the time left before Brexit that have been on the telly recently.
It’s going to have to be so fast that it will, at the very least, offer Scotty the prospect of changing the Laws of Physics.
Oh, and I should still be able to use it on the train.

The requirement for a pre-installed Linux OS may be a factor which limits my choices.
Usually, I’m happy enough to purchase a machine with Windows pre-installed and then replace it with a Linux Distro of my choice.
Yes, this may involve some messing about with drivers and – in some cases – a kernel upgrade, but the process is generally fairly painless.
This time though, I’m going to be demanding. However much of a design classic a Mac may be, OSX just isn’t going to cut it. Linux is my OS of choice.
Furthermore, if I’m going to be paying top dollar for top-of-the range then I want everything to work out of the box.
Why? (pause to flick non-existent hair) Because I’m worth it.

Oh, as a beneficial side-effect it does also mean that I’ll save myself a few quid because I won’t have to fork out for a Windows License.

In the end, a combination of my exacting requirements and the advice and guidance of my son, who knows far more about this sort of thing, led me to my final choice – the Dell XPS13.

What follows is in the style of an Apple fanboy/fangirl handling their latest iThing…

Upon delivery, the package was carried to the kitchen table where it lay with all its promise of untold joy…

Yea, and there followed careful unwrapping…

It’s a….box

…Russian doll-like…

…before finally…

If Geekgasm isn’t a thing, it jolly well should be.

Now to setup the OS…

…before finally re-starting.

The first re-boot of a machine usually takes a little while as it sorts itself out so I’ll go and make a cof… oh, it’s back.
Yep, Ubuntu plus SSD (512GB capacity) plus a quad-core i7-7560 CPU equals “are you sure you actually pressed the button?”

Ubuntu itself wasn’t necessarily my Linux distro of choice. That doesn’t matter too much however.
First of all, I’m quite happy to get familiar with Unity if it means I can still access all of that Linux loveliness.
Secondly, with the insane amount of system resources available (16GB RAM to go with that CPU), I can simply spin up virtual environments with different Linux distros, all sufficiently fast to act as they would if being run natively.
For example…

Right, now I’ve got that out of my system, I can wipe the drool off the keyboard and start to do something constructive…like search for cat videos.


Filed under: Uncategorized Tagged: xps13

Issue Frequent COMMIT Statements

Tom Kyte - Mon, 2017-05-08 16:06
Tom : I' ve recently read an article about Performance on Dbasupport.com an i couldn't believe what i was reading about commits I usually read your answers on this site and I' ve read your book as well and you always suggest to not commit fequently....
Categories: DBA Blogs

Performance tuning of delete from many tables

Tom Kyte - Mon, 2017-05-08 16:06
Hello, These are my tables: <code> -- This is the main container CREATE TABLE CONTAINER( Id VARCHAR2(18 CHAR) NOT NULL, ContainerName VARCHAR2(100 CHAR) NOT NULL, ContainerDescription VARCHAR2(500 CHAR) NULL ) NOCACHE PARAL...
Categories: DBA Blogs

Oracle Security 12cR2 and Oracle Security Training Dates

Pete Finnigan - Mon, 2017-05-08 16:06
I am going to be teaching my two day class "How to perform a security audit of an Oracle database" in Athens, Greece on the 30th and 31st May 2017. This is advertised on Oracle University website and you can....[Read More]

Posted by Pete On 08/05/17 At 03:51 PM

Categories: Security Blogs

What is in a transportable tablespace dumpfile?

Yann Neuhaus - Mon, 2017-05-08 15:20

On 31st of May in Düsseldorf, at DOAG Datenbank, I’ll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically (as with TTS) but faster when transported physically (as with PDBs). I have a lot of demos to show transportable tablespaces with RMAN, and the different cloning features available in 12cR2. If I have time I’ll show what is inside the dumpfile when using Data Pump to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.


expdp system/oracle@//localhost/PDB1 directory=VAR_TMP dumpfile=expdat.dmp transport_tablespaces=USERS exclude=table_statistics,index_statistics;
 
...
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oradata/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u01/oradata/CDB1/PDB1/users01.dbf

The metadata is exported into expdat.dmp and the data stays in the original datafile. The dumpfile is a binary file, but there is a way to extract the metadata as DDL using impdp.

impdp sqlfile

Here I run impdp with sqlfile to generate all the DDL into this file. Nothing is imported and the datafiles are not read, which is why I’ve just put a dummy value for transport_datafiles:


impdp system/oracle@//localhost/PDB1 directory=VAR_TMP transport_datafiles=blahblahblah sqlfile=sqlfile.sql ;

No error. Only the dumpfile has been read, and here is an extract of the DDL in sqlfile.sql concerning the PK_DEPT and PK_EMP indexes:


-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_DEPT" ON "SCOTT"."DEPT" ("DEPTNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 138 OBJNO_REUSE 73197
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_EMP" ON "SCOTT"."EMP" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 154 OBJNO_REUSE 73205
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/INDEX_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/TABLE_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/STATISTICS/MARKER
-- new object type path: TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

This looks like the DDL used to re-create the same table except that we can see two storage attributes that are not documented:

  • SEG_FILE and SEG_BLOCK
  • OBJNO_REUSE
SEG_FILE and SEG_BLOCK

When you create an empty table, you just provide the tablespace name and Oracle will allocate the first extent, with the segment header. You don’t choose the data placement within the tablespace. But here we are in a different case: the extents already exist in the datafiles that we transport, and the DDL must just map to them. This is why in this case the segment header file number and block number are specified. The remaining extent allocation information is stored within the datafiles (Locally Managed Tablespace); only the segment header must be known by the dictionary.

As an example, when I look at the database where the export comes from, I can see that the attributes for PK_EMP (SEG_FILE 26 SEG_BLOCK 154) are the relative file number and header block number of the PK_EMP segment:


10:49:10 SQL> select owner,segment_name,header_file,header_block,blocks,extents,tablespace_name,relative_fno from dba_segments where owner='SCOTT';
 
OWNER SEGMENT_NAME HEADER_FILE HEADER_BLOCK BLOCKS EXTENTS TABLESPACE_NAME RELATIVE_FNO
----- ------------ ----------- ------------ ------ ------- --------------- ------------
SCOTT DEPT 31 130 8 1 USERS 26
SCOTT EMP 31 146 8 1 USERS 26
SCOTT SALGRADE 31 162 8 1 USERS 26
SCOTT PK_DEPT 31 138 8 1 USERS 26
SCOTT PK_EMP 31 154 8 1 USERS 26

This file identifier is a relative file number within the tablespace, which means that there is no need to change it when a tablespace is transported.

You will see exactly the same information in the database where you import the tablespace (except for HEADER_FILE which is the absolute file number).
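As a side check of my own (not part of the original demo), the mapping between the absolute file number and the relative file number can be seen in DBA_DATA_FILES on either side:


select file_id, relative_fno, file_name from dba_data_files where tablespace_name='USERS';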

OBJNO_REUSE

Each segment has a DATA_OBJECT_ID, which is referenced in each block and in the ROWIDs. This must not change when we transport a tablespace, because the goal is that nothing has to be modified in the datafiles. For this reason, the data object id is exported with the metadata, as we can see for this PK_EMP example (OBJNO_REUSE 73205), and set to the same value in the target dictionary. Here are the data object IDs for the objects exported here:


10:49:20 SQL> select owner,object_name,object_type,object_id,data_object_id from dba_objects where owner='SCOTT';
 
OWNER OBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
----- ----------- ----------- --------- --------------
SCOTT DEPT TABLE 73196 73196
SCOTT PK_DEPT INDEX 73197 73197
SCOTT EMP TABLE 73198 73206
SCOTT PK_EMP INDEX 73199 73205
SCOTT BONUS TABLE 73200 73200
SCOTT SALGRADE TABLE 73201 73201

The OBJECT_ID will be different in the target, assigned in the same way as when we create an object, but this one is not referenced anywhere within the datafiles.

So what?

Usually, the metadata precedes the data. With transportable tablespaces, it is the opposite: data is there and metadata is re-created to map the data. This metadata is what is stored into the dumpfile exported to transport tablespaces.
From what you have seen, you can understand now that the RELATIVE_FNO and the DATA_OBJECT_ID are not unique within a database, but only within a tablespace. You can understand also that Transportable Tablespace import duration does not depend on the size of the data, but is proportional to the number of objects (metadata). This is where Pluggable Databases are more efficient: metadata is transported physically and import duration does not depend on the number of objects, especially when it does not involve an upgrade to a new version and object recompilation.

 

This article What is in a transportable tablespace dumpfile? appeared first on Blog dbi services.

BIP and Mapviewer Mash Up V

Tim Dexter - Mon, 2017-05-08 11:38

The last part on maps, I promise ... it's been a fun ride for me at least :0) If you need to catch up on previous episodes:

In this post we're looking at map quality. On the left a JPG map, to the right an SVG output.

If we ignore the fact that they have different levels of features or layers, and imagine getting the maps into a PDF and then printing them, it's pretty clear that the SVG version of the map is going to render better on paper than the JPG.

Getting the SVG output from MapViewer is pretty straightforward; getting BIP to render it requires a little bit of effort. I have mentioned the XML request that we construct and then do a variable substitution in our servlet. All we need to do is add another option for the requested output. MapViewer supports several flavors of SVG:

  • If you specify SVG_STREAM, the stream of the image in SVG Basic (SVGB) format is returned directly;
  • If you specify SVG_URL, a URL to an SVG Basic image stored on the MapViewer host system is returned.
  • If you specify SVGZ_STREAM, the stream of the image in SVG Compressed (SVGZ) format is returned directly;
  • If you specify SVGZ_URL, a URL to an SVG Compressed image stored on the MapViewer host system is returned. SVG Compressed format can effectively reduce the size of the SVG map by 40 to 70 percent compared with SVG Basic format, thus providing better performance.
  • If you specify SVGTINY_STREAM, the stream of the image in SVG Tiny (SVGT) format is returned directly;
  • If you specify SVGTINY_URL, a URL to an SVG Tiny image stored on the MapViewer host system is returned. (The SVG Tiny format is designed for devices with limited display capabilities, such as cell phones.)

Don't panic, I've looked at them all for you and we need to use SVGTINY_STREAM. This sends back a complete XML file representation of the map in SVG format. We have a couple of issues:

  1. We need to strip the XML declaration from the top of the file: <?xml version="1.0" encoding="utf-8"?> If we don't, BIP will choke on the SVG. Being lazy, I just used a string function to strip the line out in my servlet.

  2. We need to stream the SVG back as text. So we need to set the CONTENT_TYPE for the servlet as 'text/javascript'
  3. We need to handle the SVG when it comes back to the template. We do not use the




Categories: BI & Warehousing

Oracle 12cR2 : Optimizer Statistics Advisor

Yann Neuhaus - Mon, 2017-05-08 10:47

The Optimizer Statistics Advisor is a new Advisor in Oracle 12.2.
The goal of this Advisor is to check the way you gather the statistics on your database and, depending on what is found, it will make some recommendations on how you can improve the statistics gathering strategy in order to provide more efficient statistics to the CBO.
This Advisor is also able to generate remediation scripts to apply the statistics gathering “best practices”.
The recommendations are based on 23 predefined rules:

SQL> select rule_id, name, rule_type, description from v$stats_advisor_rules;


RULE_ID NAME RULE_TYPE DESCRIPTION
---------- ----------------------------------- --------- -------------------------------------------------------------------------------------
0 SYSTEM
1 UseAutoJob SYSTEM Use Auto Job for Statistics Collection
2 CompleteAutoJob SYSTEM Auto Statistics Gather Job should complete successfully
3 MaintainStatsHistory SYSTEM Maintain Statistics History
4 UseConcurrent SYSTEM Use Concurrent preference for Statistics Collection
5 UseDefaultPreference SYSTEM Use Default Preference for Stats Collection
6 TurnOnSQLPlanDirective SYSTEM SQL Plan Directives should not be disabled
7 AvoidSetProcedures OPERATION Avoid Set Statistics Procedures
8 UseDefaultParams OPERATION Use Default Parameters in Statistics Collection Procedures
9 UseGatherSchemaStats OPERATION Use gather_schema_stats procedure
10 AvoidInefficientStatsOprSeq OPERATION Avoid inefficient statistics operation sequences
11 AvoidUnnecessaryStatsCollection OBJECT Avoid unnecessary statistics collection
12 AvoidStaleStats OBJECT Avoid objects with stale or no statistics
13 GatherStatsAfterBulkDML OBJECT Do not gather statistics right before bulk DML
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked
15 UnlockNonVolatileTable OBJECT Statistics for objects with non-volatile should not be locked
16 MaintainStatsConsistency OBJECT Statistics of dependent objects should be consistent
17 AvoidDropRecreate OBJECT Avoid drop and recreate object seqauences
18 UseIncremental OBJECT Statistics should be maintained incrementally when it is beneficial
19 NotUseIncremental OBJECT Statistics should not be maintained incrementally when it is not beneficial
20 AvoidOutOfRange OBJECT Avoid Out of Range Histogram endpoints
21 UseAutoDegree OBJECT Use Auto Degree for statistics collection
22 UseDefaultObjectPreference OBJECT Use Default Object Preference for statistics collection
23 AvoidAnalyzeTable OBJECT Avoid using analyze table commands for statistics collection


24 rows selected.


SQL>

You can have a look at this blog if you want a little bit more information about these rules.
If you want to exclude some rules or some database objects from the Advisor's recommendations, you can define multiple filters. (I will do that below.)

Well, let’s see how to use the Advisor. The first step is to create a task which will run it:

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.CREATE_ADVISOR_TASK(tname);
END;
/

The task is created :

SQL> select task_name, advisor_name, created, status from dba_advisor_tasks where advisor_name = 'Statistics Advisor';


TASK_NAME ADVISOR_NAME CREATED STATUS
------------------------------ ------------------------------ ------------------- -----------
STAT_ADVISOR_1 Statistics Advisor 04.05.2017-11:19:25 INITIAL


SQL>

Now, I want to define some filters.
The first one will disable the Advisor for all objects, the second will enable it only on a specific table, and the third and fourth will exclude two rules:

DECLARE
filter1 CLOB;
filter2 CLOB;
filter3 CLOB;
filter4 CLOB;
BEGIN
filter1 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => NULL,
tabname => NULL,
action => 'DISABLE' );


filter2 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => 'JOC',
tabname => 'T2',
action => 'ENABLE' );


filter3 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'AvoidDropRecreate',
action => 'DISABLE' );


filter4 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'UseGatherSchemaStats',
action => 'DISABLE' );
END;
/

All is ready, let’s run the task…

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
END;
/

…and generate the report :

SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2172
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:35:10
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2172 of task STAT_ADVISOR_1, the Statistics Advisor has no
findings.

-------------------------------------------------------------------------------
SQL>

Cool! Nothing to report regarding statistics gathering on the table JOC.T2 (see filter2 above).
But how does the Advisor react when I run it after having deleted the statistics on this table?
SQL> exec dbms_stats.delete_table_stats(ownname=>'JOC',tabname=>'T2');


PL/SQL procedure successfully completed.


SQL> DECLARE
2 tname VARCHAR2(32767);
3 ret VARCHAR2(32767);
4 BEGIN
5 tname := 'stat_advisor_1';
6 ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
7 END;
8 /


PL/SQL procedure successfully completed.


SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2182
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:44:22
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2182 of task STAT_ADVISOR_1, the Statistics Advisor has 1
finding(s). The findings are related to the following rules: AVOIDSTALESTATS.
Please refer to the finding section for detailed information.

-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with no statistics.
Schema:
JOC
Objects:
T2


Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
-------------------------------------------------------------------------------

It seems to work well. The Advisor detected that there are no statistics on the table, and a rule was triggered.
And what about the remediation scripts? First, we have to generate them:

VARIABLE script CLOB
DECLARE
tname VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
:script := DBMS_STATS.SCRIPT_ADVISOR_TASK(tname);
END;
/


PL/SQL procedure successfully completed.

And then display them :

set linesize 3000
set long 500000
set pagesize 0
set longchunksize 100000
set serveroutput on


DECLARE
v_len NUMBER(10);
v_offset NUMBER(10) :=1;
v_amount NUMBER(10) :=10000;
BEGIN
v_len := DBMS_LOB.getlength(:script);
WHILE (v_offset < v_len)
LOOP
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(:script,v_amount,v_offset));
v_offset := v_offset + v_amount;
END LOOP;
END;
13 /
-- Script generated for the recommendations from execution EXEC_2182
-- in the statistics advisor task STAT_ADVISOR_1
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent
-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'JOC';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'T2';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/

PL/SQL procedure successfully completed.
SQL>

It was a very simple demo, but as you can see above, the Advisor provides a small script to adjust what is wrong or what is missing concerning the statistics of the table.
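As a side note (not shown in the demo above), if you execute the Advisor several times while testing, you can list the past executions and drop the task once you are done. A minimal sketch with the task name used above:


select execution_name, execution_start, execution_end, status
from dba_advisor_executions
where task_name = 'STAT_ADVISOR_1'
order by execution_start;


exec DBMS_STATS.DROP_ADVISOR_TASK('STAT_ADVISOR_1');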

Conclusion:
Once you have upgraded your database to Oracle 12.2, don't hesitate to set up the new Statistics Advisor. It is easy to deploy and can be fully personalized depending on what you want to check (which objects? which rules?). Moreover, it has been developed by the same team that develops and maintains the CBO. Therefore, they know which statistics the Optimizer needs!

 

This article Oracle 12cR2 : Optimizer Statistics Advisor appeared first on Blog dbi services.
