Feed aggregator

Oracle Database Audit Trail

Tom Kyte - Mon, 2017-04-03 13:26
Hi All, My Company is trying to configure Oracle Audit Trail. I was testing it on a test environment with the XML, EXTENDED setting. My questions are 1. Too many files are being generated at OS level and how can I control it? Or make it write on the sa...
Categories: DBA Blogs

An Invalid Insert - SQL Vs PL/SQL & 12c Vs 11g

Tom Kyte - Mon, 2017-04-03 13:26
Hi Team, Recently we had a bug in the production system (which is a coding error on our part!) but while investigating the same I noticed something. Executed the following: 1) Table Creation: <code> create table dept( dept_no varchar2(...
Categories: DBA Blogs

bottleneck on SELECT FOR UPDATE

Tom Kyte - Mon, 2017-04-03 13:26
Hi AskTOM, We recently stumbled on a performance issue on our application for reservation of equipment. Below is our simplified schema: <code> table T_RESOURCE_HEADER : a master table for all resource type, contains below columns - RESOURCE_...
Categories: DBA Blogs

Best way to capture the audit

Tom Kyte - Mon, 2017-04-03 13:26
I have a following requirement , we have multiple web screens with different data points but one thing common between them is that it can go through review and approval process by two different persons. I want to create a one table which accommoda...
Categories: DBA Blogs

Oracle Text index 12.2

Tom Kyte - Mon, 2017-04-03 13:26
Team, Just started learning about Oracle Text from documentation. http://docs.oracle.com/database/122/CCAPP/getting-started-with-oracle-text.htm#CCAPP9645 <code> drop table text_demo purge; create table text_demo (x int,docs varchar2(100))...
Categories: DBA Blogs

Mindbreeze Partnership Brings GSA Migration Path for Customers

This morning Fishbowl announced a new partnership with Mindbreeze bringing additional enterprise search options to our customers. As a leading provider of enterprise search software, Mindbreeze serves thousands of customers around the globe spanning governments, banks, healthcare, insurance, and educational institutions. Last Friday, Gartner released the 2017 Insight Engines Magic Quadrant; Mindbreeze has been positioned highest for Ability to Execute.

With the sunsetting of the Google Search Appliance announced last year, Fishbowl has been undergoing an evaluation of alternatives to serve both new and existing customers looking to improve information discovery. While Fishbowl will continue to partner with Google on cloud search initiatives, we feel Mindbreeze InSpire provides a superior solution to the problems faced by organizations with large volumes of on-premise content. In addition to on-premise appliances, Mindbreeze also provides cloud search services with federation options for creating a single, hybrid search experience. We’re excited about the opportunity this partnership brings to once again help customers get more value from the millions of unstructured documents buried in siloed systems across the enterprise—particularly those stored in Oracle WebCenter and PTC Windchill.

In the coming months, we’ll be expanding our connector offerings to integrate Mindbreeze InSpire with Oracle WebCenter Content and PTC Windchill. Mindbreeze InSpire is offered as an on-premise search appliance uniting information from varied internal data sources into one semantic search index. As a full-service Mindbreeze partner, Fishbowl will provide connectors, appliance resale, implementation services, and support for our customers. To learn more about Mindbreeze, GSA migration options, or beta access to our Mindbreeze connectors, please contact us.

The post Mindbreeze Partnership Brings GSA Migration Path for Customers appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

12cR1 RAC Posts -- 8e : Redo Shipping and Apply (RAC to nonRAC)

Hemant K Chitale - Mon, 2017-04-03 10:22
Continuing the series of posts on the RAC/ASM to SingleInstance/FileSystem  DataGuard configuration ...

With both instances RAC1 and RAC2 running, I have redo shipping from both instances to the single instance STBY.
(In my test server with 3 VMs, it takes a number of minutes to have all 3 instances running and communicating with each other ... so there are spurious ORA and TNS errors until the 3 instances have stabilized. {in fact, it would be best to start the Standby before starting the RAC Primary instances}. I am posting messages after the stabilization).

Thus, on RAC 1 :
Mon Apr 03 22:43:51 2017
Archived Log entry 97 added for thread 1 sequence 59 ID 0x96312536 dest 1:
Mon Apr 03 22:43:52 2017
TT00: Standby redo logfile selected for thread 1 sequence 60 for destination LOG_ARCHIVE_DEST_2

And on RAC 2:
Mon Apr 03 22:44:12 2017
Thread 2 cannot allocate new log, sequence 32
Checkpoint not complete
Current log# 3 seq# 31 mem# 0: +DATA/RAC/ONLINELOG/group_3.290.931826413
Current log# 3 seq# 31 mem# 1: +FRA/RAC/ONLINELOG/group_3.259.931826417
Mon Apr 03 22:44:19 2017
Thread 2 advanced to log sequence 32 (LGWR switch)
Current log# 4 seq# 32 mem# 0: +DATA/RAC/ONLINELOG/group_4.291.931826417
Current log# 4 seq# 32 mem# 1: +FRA/RAC/ONLINELOG/group_4.260.931826421
Mon Apr 03 22:44:20 2017
Archived Log entry 99 added for thread 2 sequence 31 ID 0x96312536 dest 1:
Mon Apr 03 22:44:21 2017
TT00: Standby redo logfile selected for thread 2 sequence 32 for destination LOG_ARCHIVE_DEST_2

And on STBY :
Mon Apr 03 22:43:13 2017
Media Recovery Waiting for thread 2 sequence 31 (in transit)
RFS[2]: Selected log 5 for thread 1 sequence 60 dbid 2519807290 branch 931825279
Mon Apr 03 22:43:52 2017
Archived Log entry 25 added for thread 1 sequence 59 ID 0x96312536 dest 1:
Mon Apr 03 22:44:20 2017
Archived Log entry 26 added for thread 2 sequence 31 rlc 931825279 ID 0x96312536 dest 2:
Mon Apr 03 22:44:20 2017
Media Recovery Log /u01/app/oracle/fast_recovery_area/STBY/archivelog/2017_04_03/o1_mf_2_31_dg4qr2qw_.arc
RFS[1]: Selected log 7 for thread 2 sequence 32 dbid 2519807290 branch 931825279
Mon Apr 03 22:44:21 2017
Media Recovery Log /u01/app/oracle/fast_recovery_area/STBY/archivelog/2017_04_03/o1_mf_1_59_dg4qx70d_.arc
Resize operation completed for file# 3, old size 768000K, new size 778240K
Media Recovery Waiting for thread 1 sequence 60 (in transit)
Mon Apr 03 22:44:24 2017
Recovery of Online Redo Log: Thread 1 Group 5 Seq 60 Reading mem 0
Mem# 0: /u01/app/oracle/oradata/STBY/onlinelog/group_5.292.937936339
Mem# 1: /u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_5.303.937936343

Thus, Thread1 (instance RAC1) is currently at Sequence#60 and Thread2 (instance RAC2) is currently at Sequence#32.  The STBY alert log shows that it is receiving Redo for both Threads.

Let me login to the PDB in the RAC database and create some data.

[oracle@collabn1 ~]$ tnsping PDB

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 03-APR-2017 22:49:34

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = collabn-cluster-scan.racattack)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PDB.racattack)))
OK (0 msec)
[oracle@collabn1 ~]$ sqlplus hemant/hemant@PDB

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 3 22:49:39 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Apr 03 2017 22:48:56 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> create table my_data (id_col number, data_col varchar2(15));

Table created.

SQL> insert into my_data select rownum, 'Row:' || to_char(rownum)
2 from dual connect by level < 101;

100 rows created.

SQL> commit;

Commit complete.

SQL>


So, user HEMANT in the Pluggable Database PDB has a table with 100 rows.

I open the Standby Database Read Only.

[oracle@oem132 ~]$ . oraenv
ORACLE_SID = [oracle] ? STBY
The Oracle base has been set to /u01/app/oracle
[oracle@oem132 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 3 22:53:05 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> recover managed standby database cancel;
Media recovery complete.
SQL> alter database open;

Database altered.

SQL> alter pluggable database pdb open;
alter pluggable database pdb open
*
ERROR at line 1:
ORA-44309: unknown failure
ORA-44777: Pluggable database service cannot be started.


SQL> alter pluggable database pdb open;
alter pluggable database pdb open
*
ERROR at line 1:
ORA-65019: pluggable database PDB already open

SQL> alter pluggable database pdb close;

Pluggable database altered.


So, there is an error opening the PDB in the Standby database.  I can't find a note about it on MoS.  This will need more research. Let me try with a Common User in CDB$ROOT.

Restart the Standby.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@oem132 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 3 23:14:54 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2929936 bytes
Variable Size 230689520 bytes
Database Buffers 599785472 bytes
Redo Buffers 5455872 bytes
SQL> alter database mount standby database;

Database altered.

SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.
SQL>


Create a Common User with data in the RAC Primary.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@collabn1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 3 23:17:00 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> create user C##HKCCOMMON identified by hemant;

User created.

SQL> grant connect, resource, dba to c##HKCCOMMON;

Grant succeeded.

SQL> connect C##HKCCOMMON/hemant
Connected.
SQL> create table root_my_table (id_col number, data_col varchar2(15));

Table created.

SQL> insert into root_my_table select rownum, 'AA' || to_char(rownum)
2 from dual connect by level < 101;

100 rows created.

SQL> commit;

Commit complete.

SQL>


Now, open the Standby Read Only and verify the ROOT_MY_TABLE.

SQL> recover managed standby database cancel;
Media recovery complete.

SQL> alter database open read only;

Database altered.

SQL> connect c##HKCCOMMON/hemant
Connected.
SQL> select count(*) from root_my_table;

COUNT(*)
----------
100

SQL>


Yes, the CDB ROOT user and data have gone over to the Standby !

.
.


Categories: DBA Blogs

Metadata Modeling in the Database with Analytic Views

Rittman Mead Consulting - Mon, 2017-04-03 09:00
Metadata Modeling in the Database with Analytic Views

12.2, the latest Oracle Database release, provides a whole set of new features enhancing various aspects of the product, including JSON support, Auto-List Partitioning and APEX news among others. One of the biggest pieces of news in the Data Warehousing / Analytics area was the introduction of Analytic Views, which as per Oracle's definition are

Metadata objects that enable the user to quickly and easily create complex hierarchical and dimensional queries on data in database tables and views

tl;dr

If you are in a rush, here is an abstract of what you'll find in this looooong blog post:

Metadata modeling can now be done directly in the database using Analytic Views, providing end users with a way of querying database objects without needing knowledge of join conditions, aggregation functions or order by clauses.
This post will guide you through the creation of an analytic view that replicates part of OBIEE's Sampleapp business model. The last part of the post is dedicated to understanding the usage of analytic views and the benefits for end users, especially in cases where self-service BI tools are used.

Metadata Modeling in the Database with Analytic Views

If you are still around and interested in the topic, please take a drink and sit comfortably: it will be a good journey.

Metadata Modeling

What, then, are Analytic Views in detail? How are they going to improve end users' ability to query data?

To answer the above question I'll take a step back. Many readers of this blog are familiar with OBIEE and its core: the Repository. The repository contains the metadata model from the physical sources up to the presentation areas and includes the definition of:

  • Joins between tables
  • Hierarchies for dimensions
  • Aggregation rules
  • Security settings
  • Data Filters
  • Data Sources

This allows end users to just pick columns from a Subject Area and display them in the appropriate way, without needing to worry about writing SQL or knowing how the data is stored. Moreover, definitions are held centrally, providing the famous unique source of truth across the entire enterprise.

Metadata Modeling in the Database with Analytic Views

The wave of self-service BI tools like Tableau or Oracle's Data Visualization Desktop put products capable of querying almost any kind of data source in a visual and intuitive way directly into end users' hands. Easy and direct access to data is a good thing for the end user but, as stated above, it requires knowledge of the data model, joins and aggregation methods.
Self-service tools can slightly simplify the process by providing some hints based on column names, types or values, but the cruel reality is that the end user has to build the necessary knowledge of the data source before producing correct results. This is why we've seen self-service BI tools being "attached" to OBIEE several times: get official corporate data from the unique source of truth and mash it up with information coming from external sources like personal Excel files or the output of Big Data processes.

Analytic Views

Analytic Views (AV) take OBIEE's metadata modeling concept and move it to the database level, providing a way of organizing data in a dimensional model so it can be queried with simpler SQL statements.
Analytic views are standard views with the following extra options:

  • Enable the definition of facts, dimensions and hierarchies that are included in system-generated columns
  • Automatically aggregate the data based on pre-defined calculations
  • Include presentation metadata

Analytic views are created with a CREATE ANALYTIC VIEW statement; some privileges need to be granted to the creating user, and you can find the full list in Oracle's documentation.
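
As a hedged illustration (the schema name AV_DEMO is my own, hypothetical, not from the original post), granting the privileges needed to create the component objects in your own schema could look like the following; check the documentation for the full list and the ANY variants:

-- System privileges for creating the three analytic view component objects
-- in the user's own schema (ANY variants exist for other schemas).
GRANT CREATE ATTRIBUTE DIMENSION, CREATE HIERARCHY, CREATE ANALYTIC VIEW TO av_demo;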

Every analytic view is composed of the following metadata objects:

  • Attribute dimensions: organising table/view columns into attributes and levels.
  • Hierarchies: defining hierarchical relationships on top of an attribute dimension object.
  • Analytic view objects: defining fact data referencing both fact tables and hierarchies.

With all the above high-level concepts in mind, it's now time to see how Analytic Views can be used in a reporting environment.

Database Provisioning

For the purpose of this blog post I used Oracle's 12.2.0.1 database Docker image, provided by Gerald Venzl, the quickest way of spinning up a local instance. You just need to:

  • Install Docker
  • Download database installer from Oracle's website
  • Place the installer in the proper location mentioned in the documentation
  • Build the Oracle Database 12.2.0.1 Enterprise Edition Docker image by executing
./buildDockerImage.sh -v 12.2.0.1 -e
  • Run the image by executing
docker run --name db12c -p 1521:1521 -p 5500:5500 -e ORACLE_SID=orcl -e ORACLE_PDB=pdborcl -e ORACLE_CHARACTERSET=AL32UTF8 oracle/database:12.2.0.1-ee  

The detailed parameter definitions can be found in the GitHub repository. You can then connect via sqlplus to your local instance by executing the standard

sqlplus sys/pwd@//localhost:1521/pdborcl as sysdba  

The password is generated automatically during the first run of the image and can be found in the logs; look for the following string:

ORACLE AUTO GENERATED PASSWORD FOR SYS, SYSTEM AND PDBADMIN: XXXXxxxxXXX  

Once the database is created it's time to set the goal: I'll try to recreate a piece of Oracle's Sampleapp RPD model in the database using Analytic Views.

Model description

In this blog post I'll look at the 01 - Sample App business model and, specifically, I'll try to replicate the logic behind Time, Product and the F0 Sales Base Measures using Analytic Views.

Metadata Modeling in the Database with Analytic Views

Dim Product

Sampleapp's D1 - Products (Level Based Hierarchy) is based on two logical table sources: SAMP_PRODUCTS_D, providing product name, description, LOB and Brand, and SAMP_PROD_IMG_D, containing product images. For the purpose of this test we'll keep our focus on SAMP_PRODUCTS_D only.
The physical mapping of Logical columns is shown in the image below.

Metadata Modeling in the Database with Analytic Views

Attribute Dimension

The first piece we're going to build is the attribute dimension, where we'll be defining attributes and levels. The mappings in the above image can "easily" be translated into attributes with the following SQL.

CREATE OR REPLACE ATTRIBUTE DIMENSION D1_DIM_PRODUCT  
USING SAMP_PRODUCTS_D  
ATTRIBUTES  
 (PROD_KEY as P0_Product_Number
    CLASSIFICATION caption VALUE 'P0 Product Number',
  PROD_DSC as P1_Product
    CLASSIFICATION caption VALUE 'P1 Product',
  TYPE as P2_Product_Type
    CLASSIFICATION caption VALUE 'P2 Product Type',
  TYPE_KEY as P2k_Product_Type
    CLASSIFICATION caption VALUE 'P2k Product Type',
  LOB as P3_LOB
    CLASSIFICATION caption VALUE 'P3 LOB',
  LOB_KEY as P3k_LOB
    CLASSIFICATION caption VALUE 'P3k LOB',
  BRAND as P4_Brand
    CLASSIFICATION caption VALUE 'P4 Brand',
  BRAND_KEY as P4k_Brand
    CLASSIFICATION caption VALUE 'P4k Brand',
  ATTRIBUTE_1 as P5_Attribute_1
    CLASSIFICATION caption VALUE 'P5 Attribute 1',
  ATTRIBUTE_2 as P6_Attribute_2
    CLASSIFICATION caption VALUE 'P6 Attribute 2',
  SEQUENCE as P7_Product_Sequence
    CLASSIFICATION caption VALUE 'P7 Product Sequence',
  TOTAL_VALUE as P99_Total_Value
    CLASSIFICATION caption VALUE 'P99 Total Value')

A few pieces to note:

  • CREATE OR REPLACE ATTRIBUTE DIMENSION: we are currently defining a dimension, the attributes and levels.
  • USING SAMP_PRODUCTS_D: defines the datasource, in our case the table SAMP_PRODUCTS_D. Only one datasource is allowed per dimension.
  • PROD_KEY as P0_Product_Number: using the standard AS notation we can easily rename (alias) the columns
  • CLASSIFICATION CAPTION ... several options can be added for each attribute like caption or description

The dimension definition is not complete with only the attribute declarations; we also need to define the levels. Those can be taken from OBIEE's hierarchy

Metadata Modeling in the Database with Analytic Views

For each level we can define:

  • The level name, caption and description
  • The Key
  • the Member Name and Caption
  • the Order by Clause

Translating the above OBIEE hierarchy levels into Oracle SQL:

LEVEL BRAND  
  CLASSIFICATION caption VALUE 'BRAND'
  CLASSIFICATION description VALUE 'Brand'
  KEY P4k_Brand
  MEMBER NAME P4_Brand
  MEMBER CAPTION P4_Brand
  ORDER BY P4_Brand
LEVEL Product_LOB  
  CLASSIFICATION caption VALUE 'LOB'
  CLASSIFICATION description VALUE 'Lob'
  KEY P3k_LOB
  MEMBER NAME P3_LOB
  MEMBER CAPTION P3_LOB
  ORDER BY P3_LOB
  DETERMINES(P4k_Brand)
LEVEL Product_Type  
  CLASSIFICATION caption VALUE 'Type'
  CLASSIFICATION description VALUE 'Type'
  KEY P2k_Product_Type
  MEMBER NAME P2_Product_Type
  MEMBER CAPTION P2_Product_Type
  ORDER BY P2_Product_Type
  DETERMINES(P3k_LOB,P4k_Brand)
LEVEL Product_Details  
  CLASSIFICATION caption VALUE 'Detail'
  CLASSIFICATION description VALUE 'Detail'
  KEY P0_Product_Number
  MEMBER NAME P1_Product
  MEMBER CAPTION P1_Product
  ORDER BY P1_Product
  DETERMINES(P2k_Product_Type,P3k_LOB,P4k_Brand)
ALL MEMBER NAME 'ALL PRODUCTS';  

There is an additional DETERMINES line in the above SQL for each level apart from Brand; this is how we can specify the relationship between level keys. If we take the Product_LOB example, DETERMINES(P4k_Brand) defines that any LOB in our table automatically determines a Brand (in OBIEE terms, LOB is a child of Brand).

Hierarchy

The next step is defining a hierarchy on top of the attribute dimension D1_DIM_PRODUCT defined above. We can create it just by specifying:

  • the attribute dimension to use
  • the list of levels and the relation between them

which in our case becomes

CREATE OR REPLACE HIERARCHY PRODUCT_HIER  
  CLASSIFICATION caption VALUE 'Products Hierarchy'
USING D1_DIM_PRODUCT  
  (Product_Details CHILD OF
   Product_Type CHILD OF
   Product_LOB CHILD OF
   BRAND);

When looking into the hierarchy Product_hier we can see that it's creating an OLAP-style dimension, with a row for each member at each level of the hierarchy and extra fields like DEPTH, IS_LEAF and HIER_ORDER (a sample query follows the column list below).

Metadata Modeling in the Database with Analytic Views

The columns contained in Product_hier are:

  • One for each attribute defined in the attribute dimension D1_DIM_PRODUCT, like P0_PRODUCT_NUMBER or P2K_PRODUCT_TYPE
  • The member name, caption, description and unique name
  • The level name in the hierarchy and related depth
  • The relative order of the member in the hierarchy
  • A field IS_LEAF flagging hierarchy endpoints
  • References to the parent level
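
As a quick illustration of the columns just listed, here is a hedged sample query against the hierarchy (my own sketch, not from the original post; the column names are the standard system-generated ones):

-- Browse the hierarchy like a view: one row per member at each level,
-- with the navigation columns alongside the dimension attributes.
SELECT member_unique_name,
       member_name,
       level_name,
       depth,
       is_leaf,
       hier_order
FROM   product_hier
ORDER BY hier_order;
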
Member Unique Names

A particularity to notice is that the MEMBER_UNIQUE_NAME of Cell Phones is [PRODUCT_TYPE].&[101], which is the concatenation of the LEVEL and the P2K_PRODUCT_TYPE value.
One could expect the member unique name to be represented as the concatenation of all the preceding hierarchy members, Brand and LOB, plus the member key itself, in a string like [PRODUCT_TYPE].&[10001]&[1001]&[101].

This is the default behaviour; however, in our case it is not happening since we set DETERMINES(P3k_LOB,P4k_Brand) in the attribute dimension definition. We specified that Brand ([10001]) and LOB ([1001]) can automatically be inferred from the Product Type, so there is no need to store those values in the member key. We can find the same setting in OBIEE's Product Type logical level

Metadata Modeling in the Database with Analytic Views

Dim Date

The basic D0 Dim Date can be built starting from the table SAMP_TIME_DAY_D following the same process as above. Like in OBIEE, some additional settings are required when creating a time dimension:

  • DIMENSION TYPE TIME: the time dimension type needs to be specified
  • LEVEL TYPE <LEVEL_NAME>: each level in the time hierarchy needs to belong to a precise level type chosen from:
    • YEARS
    • HALF_YEARS
    • QUARTERS
    • MONTHS
    • WEEKS
    • DAYS
    • HOURS
    • MINUTES
    • SECONDS

Attribute Dimension

Metadata Modeling in the Database with Analytic Views

Taking into consideration the additional settings, the Dim Date column mappings in the above image can be translated into the following attribute dimension SQL definition.

CREATE OR REPLACE ATTRIBUTE DIMENSION D0_DIM_DATE  
DIMENSION TYPE TIME  
USING SAMP_TIME_DAY_D  
ATTRIBUTES  
 (CALENDAR_DATE AS TOO_CALENDAR_DATE,
  PER_NAME_MONTH AS T02_PER_NAME_MONTH,
  PER_NAME_QTR AS T03_PER_NAME_QTR,
  PER_NAME_YEAR AS T04_PER_NAME_YEAR,
  DAY_KEY AS T06_ROW_WID,
  BEG_OF_MTH_WID AS T22_BEG_OF_MTH_WID,
  BEG_OF_QTR_WID AS T23_BEG_OF_QTR_WID
  )
    LEVEL CAL_DAY
      LEVEL TYPE DAYS
      KEY TOO_CALENDAR_DATE
      ORDER BY TOO_CALENDAR_DATE
      DETERMINES(T22_BEG_OF_MTH_WID, T23_BEG_OF_QTR_WID,T04_PER_NAME_YEAR)
    LEVEL CAL_MONTH
      LEVEL TYPE MONTHS
      KEY T22_BEG_OF_MTH_WID
      MEMBER NAME T02_PER_NAME_MONTH
      ORDER BY T22_BEG_OF_MTH_WID
      DETERMINES(T23_BEG_OF_QTR_WID,T04_PER_NAME_YEAR)
    LEVEL CAL_QUARTER
      LEVEL TYPE QUARTERS
      KEY T23_BEG_OF_QTR_WID
      MEMBER NAME T03_PER_NAME_QTR
      ORDER BY T23_BEG_OF_QTR_WID
      DETERMINES(T04_PER_NAME_YEAR)
    LEVEL CAL_YEAR
      LEVEL TYPE YEARS
      KEY T04_PER_NAME_YEAR
      MEMBER NAME T04_PER_NAME_YEAR
      ORDER BY T04_PER_NAME_YEAR
    ALL MEMBER NAME 'ALL TIMES';

You may have noticed a different mapping of keys, member names and order by attributes. Let's take CAL_MONTH as an example. It's defined by two columns:

  • BEG_OF_MTH_WID: used for joins and ordering
  • PER_NAME_MONTH: used as "display label"

PER_NAME_MONTH in the YYYY / MM format could also be used for ordering, but most of the time end users request months in the MM / YYYY format. Being able to set an ordering column different from the member name allows us to properly manage the hierarchy.
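
A hedged way to verify the effect of the ORDER BY setting (my own check, not from the original post) is to query the hierarchy at the month level and confirm that members come back in key (BEG_OF_MTH_WID) order while displaying the PER_NAME_MONTH label:

-- Months should be returned in chronological (key) order even though
-- the member name shown is the PER_NAME_MONTH label.
SELECT member_name, hier_order
FROM   time_hier
WHERE  level_name = 'CAL_MONTH'
ORDER BY hier_order;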

Metadata Modeling in the Database with Analytic Views

Hierarchy

The time hierarchy follows the same rules as the product one; no additional settings are required.

CREATE OR REPLACE HIERARCHY TIME_HIER  
USING D0_DIM_DATE  
  (CAL_DAY CHILD OF
   CAL_MONTH CHILD OF
   CAL_QUARTER CHILD OF
   CAL_YEAR);

Fact Sales

The last step in the journey is the definition of the analytic view on the fact table, which as per Oracle's documentation is:

An analytic view specifies the source of its fact data and defines measures that describe calculations or other analytic operations to perform on the data. An analytic view also specifies the attribute dimensions and hierarchies that define the rows of the analytic view.

The analytic view definition contains the following specifications:

  • The data source: the table or view that will be used for the calculation
  • The columns: which columns from the source objects to use in the calculations
  • The attribute dimensions and hierarchies: defining both the list of attributes and the levels of the analysis
  • The measures: a set of aggregations based on the predefined columns from the data source.

Within the analytic view definition a materialized view can be defined in order to store aggregated values. This is similar to OBIEE's Logical Table Source setting for aggregates.

Analytic View Definition

For the purpose of this post I'll use SAMP_REVENUE_F, which is one of the sources of F0 Sales Base Measures in Sampleapp. The following image shows the logical column mapping.

Metadata Modeling in the Database with Analytic Views

The above mappings can be translated into the following SQL:

CREATE OR REPLACE ANALYTIC VIEW F0_SALES_BASE_MEASURES  
USING SAMP_REVENUE_F  
DIMENSION BY  
  (D0_DIM_DATE
    KEY BILL_DAY_DT REFERENCES TOO_CALENDAR_DATE
    HIERARCHIES (
      TIME_HIER DEFAULT),
   D1_DIM_PRODUCT
    KEY PROD_KEY REFERENCES P0_Product_Number
    HIERARCHIES (
      PRODUCT_HIER DEFAULT)
   )
MEASURES  
 (F1_REVENUE FACT REVENUE AGGREGATE BY SUM,
  F10_VARIABLE_COST FACT COST_VARIABLE AGGREGATE BY SUM,
  F11_FIXED_COST FACT COST_FIXED AGGREGATE BY SUM,
  F2_BILLED_QTY FACT UNITS,
  F3_DISCOUNT_AMOUNT FACT DISCNT_VALUE AGGREGATE BY SUM,
  F4_AVG_REVENUE FACT REVENUE AGGREGATE BY AVG,
  F21_REVENUE_AGO AS (LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1))
  )
DEFAULT MEASURE F1_REVENUE;  

Some important parts need to be highlighted:

  • USING SAMP_REVENUE_F: defines the analytic view source, in our case the table SAMP_REVENUE_F
  • DIMENSION BY: this section provides the list of dimensions and related hierarchies to take into account
  • KEY BILL_DAY_DT REFERENCES TOO_CALENDAR_DATE: defines the join between the fact table and attribute dimension
  • HIERARCHIES (TIME_HIER DEFAULT): multiple hierarchies can be defined on top of an attribute dimension and used in an analytic view; however, as in OBIEE, only one will be used by default
  • F1_REVENUE FACT REVENUE AGGREGATE BY SUM: defines the measure with alias, source column and aggregation method
  • F2_BILLED_QTY FACT UNITS: if the aggregation method is not defined it relies on the default, SUM
  • F21_REVENUE_AGO: new metrics can be calculated based on previously defined columns, replicating OBIEE functions like time series. The formula (LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1)) calculates the equivalent of OBIEE's AGO function for each level of the hierarchy (a sample query follows this list).
  • DEFAULT MEASURE F1_REVENUE: defines the default measure of the analytic view
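
As a hedged example of the time-series measure described above (my own sketch, not taken from the original post), the yearly revenue and its prior-year value per brand could be retrieved with:

-- F21_REVENUE_AGO returns the previous member's revenue at the selected
-- time level, i.e. the prior year when filtering on CAL_YEAR.
SELECT time_hier.member_name    AS year_slice,
       product_hier.member_name AS brand_slice,
       f1_revenue,
       f21_revenue_ago
FROM   f0_sales_base_measures
WHERE  time_hier.level_name    = 'CAL_YEAR'
AND    product_hier.level_name = 'BRAND'
ORDER BY product_hier.hier_order,
         time_hier.hier_order;
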
Using Analytic Views

After the analytic view definition, it's time to analyse what benefits end users have when using them. We are going to take a simple example: a query to return the Revenue and Billed Qty per Month and Brand.

Using only the original tables we would have the following SQL

SELECT D.CAL_MONTH,  
  D.BEG_OF_MTH_WID,
  P.BRAND,
  SUM(F.REVENUE) AS F01_REVENUE,
  SUM(F.UNITS)   AS F02_BILLED_QTY
FROM SAMP_REVENUE_F F  
JOIN SAMP_PRODUCTS_D P  
ON (F.PROD_KEY = P.PROD_KEY)  
JOIN SAMP_TIME_DAY_D D  
ON (F.BILL_DAY_DT = D.CALENDAR_DATE)  
GROUP BY D.CAL_MONTH,  
  D.BEG_OF_MTH_WID,
  P.BRAND
ORDER BY D.BEG_OF_MTH_WID,  
  P.BRAND;

The above SQL requires the knowledge of:

  • Aggregation methods
  • Joins
  • Group by
  • Ordering

Even if this is an oversimplification of the analytic view usage you can already spot that some knowledge of the base data structure and SQL language is needed.

Using the analytic views defined above, the query can be written as

SELECT TIME_HIER.MEMBER_NAME AS TIME_SLICE,  
  PRODUCT_HIER.MEMBER_NAME   AS PRODUCT_SLICE,
  F1_REVENUE,
  F2_BILLED_QTY
FROM F0_SALES_BASE_MEASURES  
WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_MONTH')  
AND PRODUCT_HIER.LEVEL_NAME IN ('BRAND')  
ORDER BY TIME_HIER.HIER_ORDER,  
  PRODUCT_HIER.HIER_ORDER;

As you can see, the SQL statement is simplified: no aggregation, join conditions or group by predicates are needed. All the end user has to know is the analytic view name and the related hierarchies that can be used.

The additional benefit is that if we want to change the level of granularity of the above query we just need to change the WHERE condition. E.g. to have the rollup per Year and LOB we just have to substitute

WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_MONTH')  
AND PRODUCT_HIER.LEVEL_NAME IN ('BRAND')  

with

WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_YEAR')  
AND PRODUCT_HIER.LEVEL_NAME IN ('LOB')  

without touching the select list, aggregation or order by clauses.

Using Analytic Views in DVD

At the beginning of my blog post I wrote that Analytic Views could be useful when used in conjunction with self-service BI tools. Let's have a look at how the end user journey is simplified in the case of Oracle's Data Visualization Desktop.

Without AVs, the end user has two options to source the data:

  • Write the complex SQL statement, with join conditions, group by and order by clauses, in the SQL editor to retrieve data at the correct level with the related dimensions
  • Import the fact table and dimensions as separate datasources and join them together in DVD's project.

Both options require knowledge of SQL and join conditions in order to be able to present correct data. Using Analytic Views the process is simplified: we just need to create a new source pointing to the database where the analytic views are sitting.
The next step is to retrieve the necessary columns from the analytic view. Unfortunately analytic views are not visible from DVD's object explorer (only standard tables and views are shown).

Metadata Modeling in the Database with Analytic Views

We can, however, specify with a simple SQL statement all the information we need, like the time and product slices, the related levels and the order in the hierarchy.

SELECT TIME_HIER.MEMBER_NAME AS TIME_SLICE,  
  PRODUCT_HIER.MEMBER_NAME   AS PRODUCT_SLICE,
  TIME_HIER.LEVEL_NAME AS TIME_LEVEL,
  PRODUCT_HIER.LEVEL_NAME AS PRODUCT_LEVEL,
  TIME_HIER.HIER_ORDER AS TIME_HIER_ORDER,
  PRODUCT_HIER.HIER_ORDER AS PRODUCT_HIER_ORDER,
  F1_REVENUE,
  F2_BILLED_QTY
FROM F0_SALES_BASE_MEASURES  
ORDER BY TIME_HIER.HIER_ORDER,  
  PRODUCT_HIER.HIER_ORDER;

You may have noted that I'm not specifying any WHERE clause for level filtering: as an end user I want to be able to retrieve all the necessary levels by just changing a filter in my DVD project. After including the above SQL in the datasource definition and amending the measure/attribute definitions I can start playing with the analytic view data.

Metadata Modeling in the Database with Analytic Views

I can simply include the dimension's MEMBER_NAME in the graphs together with the measures and add the LEVEL_NAME in the filters. In this way I can change the graph granularity by simply selecting the appropriate LEVEL in the filter selector for all the dimensions available.

Metadata Modeling in the Database with Analytic Views

One particular thing to notice, however, is that all the data coming from various columns like date, month and year is "condensed" into a single VARCHAR column. In the case of different datatypes (like dates in the time dimension) this will prevent correct usage of some of DVD's capabilities like time series or trending functions. However, if a particular type of graph is needed for a specific level, either an ad-hoc query or a casting operation can be used.
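
One hedged workaround sketch (my own, assuming the DATE-typed attribute is exposed at the day level, which is worth verifying): select the time dimension's key attribute instead of the generic MEMBER_NAME, so the DVD datasource keeps a proper date column for time-series visualisations.

-- TOO_CALENDAR_DATE is the DATE attribute defined in D0_DIM_DATE above;
-- selecting it directly avoids the VARCHAR-only MEMBER_NAME for day-level data.
SELECT time_hier.too_calendar_date AS calendar_date,
       product_hier.member_name    AS product_slice,
       f1_revenue
FROM   f0_sales_base_measures
WHERE  time_hier.level_name    = 'CAL_DAY'
AND    product_hier.level_name = 'BRAND';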

Conclusion

In this blog post we analysed Analytic Views, a new component in Oracle Database 12.2, and how they can be used to "move" metadata modeling to the database level to provide an easier query syntax to end users.

Usually metadata modeling is done in reporting tools like OBIEE, which offer an additional set of features on top of the ones included in analytic views. However, centralized reporting tools like OBIEE are not present everywhere and, with the wave of self-service BI tools, analytic views represent a perfect method of enabling users not familiar with SQL to simply query their enterprise data.

If you are interested in understanding more about analytic views or metadata modeling, don't hesitate to contact us!
If you want to improve the SQL skills of your company workforce, check out our recently launched SQL for beginners training!

Categories: BI & Warehousing

Log Buffer #508: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2017-04-03 08:41

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

Compiling views: When the FORCE Fails You

Goldengate 12c Troubleshooting XAGENABLE

A performance Deep Dive into Tablespace Encryption

EBS Release 12 Certified with Safari 10 and MacOS Sierra 10.12

Oracle Database 12c (12.2.0.1.0) on VirtualBox

SQL Server:

A Single-Parameter Date Range in SQL Server Reporting Services

Generating Plots Automatically From PowerShell and SQL Server Using Gnuplot

Justify the Remote Access Requirement for Biztalk Log Shipping

Building Better Entity Framework Applications

Performing a Right and Comprehensive Age Calculation

MySQL:

Basics of MySQL Administration and Best Practices

Bonanza Cuts Load in Half with VividCortex

Experiments with MySQL 5.7’s Online Buffer Pool Resize

MySQL 8.0 Collations: The Devil is in the Details.

How to Encrypt MySQL Backups on S3

Categories: DBA Blogs

Block Names

Jonathan Lewis - Mon, 2017-04-03 07:04

There are a number of tiny details that I can never remember when I’m sketching out models to test ideas, and one of those is the PL/SQL block name. Virtually every piece of PL/SQL I write ends up with variables which have one of two prefixes in their names, “M_” or “G_” (for memory or global, respectively), but I probably ought to be more formal than that, so here’s an example of labelling blocks – specifically, labelling anonymous blocks from SQL*Plus using a trivial and silly bit of code:


rem
rem     Script:         plsql_block_names.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2017
rem

create table t1
nologging
as
select * from all_objects
;


begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(object_id) nologging;

--
--        Anonymous PL/SQL block with label, and a couple of 
--        uses of the label as the qualifier for variables
--

<<my_block>>
declare
        data_object_id  t1.data_object_id%type;
        object_id       t1.object_id%type;
begin
        select  data_object_id
        into    my_block.data_object_id
        from    t1
        where
                data_object_id is not null
        and     rownum = 1
        ;
        select  object_id
        into    my_block.object_id
        from    t1
        where
                data_object_id = my_block.data_object_id
        and     rownum = 1
        ;

        dbms_output.put_line('Object: ' || object_id || ' - ' || my_block.object_id);
        dbms_output.put_line('Data Object: ' || data_object_id || ' - ' || my_block.data_object_id);

end;
/

Object: 16 - 16
Data Object: 6 - 6

The important point, of course, is that with a qualified variable name you eliminate the risk of a problem that appears occasionally on the public fora where someone has lost track of duplicated variable names, or used (as I have above) a variable name that matches a column name, and doesn’t notice that a little further down the code Oracle has captured the “wrong” interpretation of the name.
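
As a small hedged illustration of that "wrong" capture (my addition, not part of the original script), reusing table t1 from above: inside a SQL statement the column name wins on both sides of the predicate unless you qualify the variable with the block label.

set serveroutput on

<<demo>>
declare
        object_id       t1.object_id%type := 16;
        n               number;
begin
        -- column = column: true for every row with a non-null object_id
        select count(*) into n from t1 where object_id = object_id;
        dbms_output.put_line('Unqualified predicate: ' || n || ' rows');

        -- column = PL/SQL variable, thanks to the block-label qualifier
        select count(*) into n from t1 where object_id = demo.object_id;
        dbms_output.put_line('Qualified predicate  : ' || n || ' rows');
end;
/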

You’ll note that the block name has to be surrounded by doubled angle brackets – and now that I’ve written about it I might actually remember that for a couple of weeks ;)

 

Footnote:

This isn’t the only use for labels, by the way, it’s just one that probably ought to be used more frequently in production code.

The other thing I can never remember is how to escape quote marks – so I invariably end up using the old “double the quotes” method when I want to quote quotes.
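
For the record, a hedged reminder (my addition, not the author's): alongside the old "double the quotes" method there is also the alternative quoting mechanism, q with a chosen delimiter.

begin
        -- the old "double the quotes" method
        dbms_output.put_line('It''s working');
        -- the alternative quoting mechanism with a chosen delimiter
        dbms_output.put_line(q'[It's working too]');
end;
/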


Enabling the Mobile Workforce with Cloud Content and Experience - Part 6

WebCenter Team - Mon, 2017-04-03 06:00

By: Mark Paterson, Director, Product Management, Oracle Content and Experience Cloud

Following our discussion on social and mobile collaboration (if you haven't followed the series, here are my previous posts 1, 2, 3, and 5), we now discuss how we can complete the work cycle digitally, using our devices.

Ever need to get a contract signed quickly? Oracle Content and Experience Cloud makes it easy to request a signature from anywhere.

The Oracle Content and Experience Cloud's Application Integration Framework gives partners an easy way to introduce custom document actions into the service. You'll find extensions for 2 popular e-signature services, DocuSign (from Redstone) and HelloSign, in the Oracle Cloud Marketplace.

With the extension in place, all you need to do is browse to the contract you need signed, and within the 'Actions' menu you will find custom actions introduced by the e-signature extension. You just tap on the custom action to request signatures.

You will be redirected to the e-signature service, which will fetch the contract you need signed and then let you decide:

  • Who needs to sign?
  • Where do they need to sign?
  • Is there a date required? Perhaps there are paragraphs that need to be initialed?

Once you have finished preparing the contract you send it out for signing.

You can easily keep track of the signing status of the contract: a custom tab in the properties page lets you track the signing status, and once the contract is signed a new version with all the signatures gets returned.

So there you go: even if you are on the go you can get things signed.

Check out our latest HOW TO video to see it in action:

You can always find the Oracle Content and Experience Cloud mobile app in both the App Store and Google Play.

Hope you've enjoyed these tips concerning our mobile apps. Next time we will introduce some great new Digital Asset Management features.

Don't have Oracle Content and Experience Cloud yet? You can start a free trial immediately. Visit https://cloud.oracle.com/content to get started on your free trial today.



Oracle Database Listener Security Guide – Rewritten For Oracle 12.2

In October 2002 Integrigy first posted a guide to securing the Oracle Listener. Since then this whitepaper has been our most popular download. This month we rewrote the whitepaper for Oracle 12c, inclusive of 12.2.

Integrigy Consulting has found the Database Listener to be one of the most frequently overlooked security risks at customer sites. This whitepaper provides an overview of the Database Listener, its unique security risks, and step-by-step recommendations for securing it.

If you have questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

References
 
 
Security Strategy and Standards, Oracle Database
Categories: APPS Blogs, Security Blogs


Compiling views: when the FORCE fails you

Jeff Kemp - Sun, 2017-04-02 20:01

The order in which your deployment scripts create views is important. This is a fact that I was reminded of when I had to fix a minor issue in the deployment of version #2 of my application recently.

Normally, you can just generate a create or replace force view script for all your views, run it in each environment, then recompile your schema afterwards – and everything’s fine. However, if views depend on other views, you can run into a logical problem if you don’t create them in order of dependency.

Software Release 1.0

create table t (id number, name varchar2(100));
create or replace force view tv_base as
select t.*, 'hello' as stat from t;
create or replace force view tv_alpha as
select t.* from tv_base t;

desc tv_alpha;
Name Null Type
---- ---- -------------
ID        NUMBER
NAME      VARCHAR2(100)
STAT      CHAR(5)

Here we have our first version of the schema, with a table and two views based on it. Let’s say that the tv_base includes some derived expressions, and tv_alpha is intended to do some joins on other tables for more detailed reporting.

Software Release 1.1

alter table t add (phone varchar2(10));
create or replace force view tv_alpha as
select t.* from tv_base t;
create or replace force view tv_base as
select t.*, 'hello' as stat from t;

Now, in the second release of the software, we added a new column to the table, and duly recompiled the views. In the development environment the view recompilation may happen multiple times (because other changes are being made to the views as well) – and nothing’s wrong. Everything works as expected.

However, when we run the deployment scripts in the Test environment, the “run all views” script has been run just once; and due to the way it was generated, the views are created in alphabetical order – so tv_alpha was recreated first, followed by tv_base. Now, when we describe the view, we see that it’s missing the new column:

desc tv_alpha;
Name Null Type
---- ---- -------------
ID        NUMBER
NAME      VARCHAR2(100)
STAT      CHAR(5)

Whoops. What’s happened, of course, is that when tv_alpha was recompiled, tv_base still hadn’t been recompiled and so it didn’t have the new column in it yet. Oracle internally defines views with SELECT * expanded to list all the columns. The view won’t gain the new column until we REPLACE the view with a new one using SELECT *. By that time, it’s too late for tv_alpha – it had already been compiled, successfully, so it doesn’t see the new column.
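
One hedged way to avoid this with generated scripts (my sketch, not from the post) is to derive the creation order from the data dictionary instead of sorting alphabetically, using the standard USER_DEPENDENCIES view:

-- List view-on-view dependencies so that referenced views (e.g. TV_BASE)
-- can be created or replaced before the views that select from them.
select name            as dependent_view,
       referenced_name as referenced_view
from   user_dependencies
where  type             = 'VIEW'
and    referenced_type  = 'VIEW'
and    referenced_owner = user
order by referenced_name, name;

Alternatively, simply running the generated "create all views" script a second time should also fix the stale expansion, because the second pass re-expands each SELECT * against the already-replaced base views.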

Lesson Learnt

What should we learn from this? Be wary of SELECT * in your views. Don’t get me wrong: they are very handy, especially during initial development of your application; but they can surprise you if not handled carefully and I would suggest it’s good practice to expand those SELECT *‘s into a discrete list of columns.

Some people would go so far as to completely outlaw SELECT *, and even views-on-views, for reasons such as the above. I’m not so dogmatic, because in my view there are some good reasons to use them in some situations.


Filed under: SQL Tagged: development, SQL

When automatic reoptimization plan is less efficient

Yann Neuhaus - Sun, 2017-04-02 05:05

11gR2 started to have the optimizer react at execution time when a misestimate is encountered. The next executions are then re-optimized with more accurate estimations, derived from the execution statistics. This was called cardinality feedback. Unfortunately, in rare cases we had a fast execution plan with bad estimations, and the better estimations led to a worse execution plan. This is rare, but even when 9999 queries are faster, the one that takes too long gives a bad perception of this optimizer feature.
This feature has been improved in 12cR1 with new names: auto-reoptimization and statistics feedback. I'm showing an example here in 12.1.0.2 without adaptive statistics (the 12.2 backport) and I've also disabled adaptive plans because they show the wrong numbers (similar to what I described in this post). I'll show that, at one point, the re-optimization can go back to the initial plan if it was the best in execution time.
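
For reference, a hedged sketch of the session settings implied by that setup (assuming the 12.2 adaptive-statistics backport is installed, so the split parameters exist in 12.1.0.2):

-- With the backport, OPTIMIZER_ADAPTIVE_FEATURES is replaced by two parameters.
alter session set optimizer_adaptive_plans      = false;
alter session set optimizer_adaptive_statistics = false;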

V$SQL

Basically, here is what happened: the first execution was fast, but with the actual number of rows far from the estimated ones. Auto-reoptimization kicks in for the next execution and gets a new plan, but with a longer execution time. The third execution is another re-optimization, leading to the same bad plan. Finally, starting at the 4th execution, the time is back to reasonable and we see that the first plan is used again:

SQL> select sql_id,child_number,plan_hash_value,is_reoptimizable,is_resolved_adaptive_plan,parse_calls,executions,elapsed_time/1e6
from v$sql where sql_id='b4rhzfw7d6vdp';
 
SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I PARSE_CALLS EXECUTIONS ELAPSED_TIME/1E6
------------- ------------ --------------- - - ----------- ---------- ----------------
b4rhzfw7d6vdp 0 1894156093 Y 1 1 .346571
b4rhzfw7d6vdp 1 955499861 Y 1 1 5.173733
b4rhzfw7d6vdp 2 955499861 Y 1 1 4.772258
b4rhzfw7d6vdp 3 1894156093 N 7 7 .5008

The goal of statistics feedback is not to get optimal execution from the first execution. That requires accurate statistics, static or dynamic, and SQL Plan Directives are an attempt to get that. The statistics feedback goal is to get a better plan rather than re-use one that is based on misestimates. But sometimes the better is the enemy of the good, and we have an example here in child cursors 1 and 2. The good thing, though, is that finally we are back to an acceptable execution time, with a final plan that can be re-used without re-optimization.

What surprised me here is that the final plan has the same hash value as the initial one. Is it a coincidence that different estimations give the same plan? Or did the optimizer finally give up trying to find better?

V$SQL_REOPTIMIZATION_HINTS

In 12c the statistics feedback hints are exposed in V$SQL_REOPTIMIZATION_HINTS.

SQL> select sql_id,child_number,hint_text,client_id,reparse from v$sql_reoptimization_hints where sql_id='b4rhzfw7d6vdp';
 
SQL_ID CHILD_NUMBER HINT_TEXT CLIENT_ID REPARSE
------------- ------------ ---------------------------------------------------------------------------------------------------- ---------- ----------
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" ROWS=1517.000000 ) 1 1
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" MIN=1517.000000 ) 1 1
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" TABLE "DM_FOLDER_R1"@"SEL$1" ROWS=1517.000000 ) 1 1
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 0 OPT_ESTIMATE (@"SEL$1" TABLE "DM_SYSOBJECT_R2"@"SEL$1" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" ROWS=1517.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" MIN=1517.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" TABLE "DM_FOLDER_R1"@"SEL$1" ROWS=1517.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$1" TABLE "DM_SYSOBJECT_R2"@"SEL$1" MIN=3.000000 ) 1 0
b4rhzfw7d6vdp 1 OPT_ESTIMATE (@"SEL$582FA660" QUERY_BLOCK ROWS=1491.000000 ) 1 1

The child cursor 0 was re-optimized into cursor 1 with different numbers of rows for "DM_FOLDER_R1" and "DM_SYSOBJECT_R2".
Child cursor 1 has the same values, but an additional row count correction for a query block.

But we don’t see anything about cursor 2. It was re-optimizable, and was actually re-optimized into cursor 3 but no statistics corrections are displayed here.

Trace

As it is a reproducible case, I’ve run the same while tracing 10046, 10053 and 10507 (level 512) to get all information about SQL execution, Optimiser compilation, and statistics feedback. For each child cursor, I’ll show the execution plan with estimated and actual number of rows (E-Rows and A-Rows) and then some interesting lines from the trace, mainly those returned by:
grep -E "KKSMEC|^atom_hint|^@"
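
For reference, one hedged way to set those three traces at session level (the exact 10046 and 10053 levels are my assumption; the 10507 level 512 is the one mentioned above):

alter session set events '10046 trace name context forever, level 12';
alter session set events '10053 trace name context forever, level 1';
alter session set events '10507 trace name context forever, level 512';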

Child cursor 0 – plan 1894156093 – 0.34 seconds

Plan hash value: 1894156093
----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 171 (100)| 1 |00:00:00.04 | 17679 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.04 | 17679 |
| 2 | NESTED LOOPS | | 1 | 1 | 116 (0)| 1491 |00:00:00.04 | 17679 |
| 3 | NESTED LOOPS | | 1 | 1 | 115 (0)| 1491 |00:00:00.04 | 17456 |
| 4 | NESTED LOOPS | | 1 | 49 | 17 (0)| 5648 |00:00:00.01 | 537 |
|* 5 | INDEX RANGE SCAN | D_1F0049A880000016 | 1 | 3 | 3 (0)| 1517 |00:00:00.01 | 13 |
| 6 | TABLE ACCESS BY INDEX ROWID BATCHED | DM_SYSOBJECT_R | 1517 | 16 | 10 (0)| 5648 |00:00:00.01 | 524 |
|* 7 | INDEX RANGE SCAN | D_1F0049A880000010 | 1517 | 71 | 2 (0)| 5648 |00:00:00.01 | 249 |
|* 8 | TABLE ACCESS BY INDEX ROWID | DM_SYSOBJECT_S | 5648 | 1 | 2 (0)| 1491 |00:00:00.03 | 16919 |
|* 9 | INDEX UNIQUE SCAN | D_1F0049A880000108 | 5648 | 1 | 1 (0)| 1491 |00:00:00.03 | 15428 |
| 10 | NESTED LOOPS SEMI | | 5648 | 2 | 25 (0)| 1491 |00:00:00.02 | 14828 |
| 11 | NESTED LOOPS | | 5648 | 7 | 18 (0)| 2981 |00:00:00.02 | 12869 |
| 12 | TABLE ACCESS BY INDEX ROWID BATCHED| DM_SYSOBJECT_R | 5648 | 71 | 4 (0)| 2981 |00:00:00.01 | 7747 |
|* 13 | INDEX RANGE SCAN | D_1F0049A880000010 | 5648 | 16 | 3 (0)| 2981 |00:00:00.01 | 6145 |
|* 14 | TABLE ACCESS BY INDEX ROWID | DM_SYSOBJECT_S | 2981 | 1 | 2 (0)| 2981 |00:00:00.01 | 5122 |
|* 15 | INDEX UNIQUE SCAN | D_1F0049A880000108 | 2981 | 1 | 1 (0)| 2981 |00:00:00.01 | 2140 |
|* 16 | INDEX UNIQUE SCAN | D_1F0049A880000145 | 2981 | 52759 | 1 (0)| 1491 |00:00:00.01 | 1959 |
|* 17 | INDEX UNIQUE SCAN | D_1F0049A880000142 | 1491 | 1 | 1 (0)| 1491 |00:00:00.01 | 223 |
----------------------------------------------------------------------------------------------------------------------------------------

Because of low cardinality estimation of DM_SYSOBJECT_R predicate (E-Rows=3) the optimizer goes to NESTED LOOP. This plan has good execution time here because all blocks are in buffer cache. Reading 17679 blocks from buffer cache takes less than one second. It would have been much longer if those were physical I/O.

This is a case where the optimizer detects the misestimates at execution time. Here is what is recorded in the trace:

Reparsing due to card est...
@=0x63a56820 type=3 nodeid=5 monitor=Y halias="DM_FOLDER_R1" loc="SEL$1" oname="SEL$F5BB74E1" act=1517 min=0 est=3 next=(nil)
Reparsing due to card est...
@=0x638fe2b0 type=5 nodeid=4 monitor=Y halias="" loc="SEL$F5BB74E1" onames="SEL$07BDC5B4"@"SEL$5" "SEL$2"@"SEL$5" act=5648 min=0 est=49 next=0x638fe250
Reparsing due to card est...
@=0x638fe4c0 type=5 nodeid=3 monitor=Y halias="" loc="SEL$F5BB74E1" onames="SEL$07BDC5B4"@"SEL$5" "SEL$2"@"SEL$5" "SEL$3"@"SEL$1" act=1491 min=0 est=1 next=0x638fe460
Reparsing due to card est...
@=0x638fe688 type=5 nodeid=2 monitor=Y halias="" loc="SEL$F5BB74E1" onames="SEL$07BDC5B4"@"SEL$5" "SEL$2"@"SEL$5" "SEL$3"@"SEL$1" "R_OBJECT_ID"@"SEL$1" act=1491 min=0 est=1 next=0x638fe5f8
kkocfbCheckCardEst [sql_id=b4rhzfw7d6vdp] reparse=y ecs=n efb=n ost=n fbs=n

Those are the misestimates which trigger re-optimization.

And here is all the statistics feedback:

*********** Begin Dump Context (kkocfbCheckCardEst) [sql_id=b4rhzfw7d6vdp cpcnt=0] ***********
@=0x638fe688 type=5 nodeid=2 monitor=Y halias="" loc="SEL$F5BB74E1" onames="DM_FOLDER_R1"@"SEL$1" "DM_SYSOBJECT_R2"@"SEL$1" "TE_"@"SEL$2" "LJ_"@"SEL$2" act=1491 min=0 est=1 next=0x638fe5f8
@=0x638fe5f8 type=3 nodeid=17 monitor=Y halias="LJ_" loc="SEL$2" oname="D_1F0049A880000142" act=0 min=1 est=1 next=0x638fe4c0
@=0x638fe4c0 type=5 nodeid=3 monitor=Y halias="" loc="SEL$F5BB74E1" onames="DM_FOLDER_R1"@"SEL$1" "DM_SYSOBJECT_R2"@"SEL$1" "TE_"@"SEL$2" act=1491 min=0 est=1 next=0x638fe460
@=0x638fe460 type=1 nodeid=8 monitor=Y halias="TE_" loc="SEL$2" act=0 min=1 est=1 next=0x638fe3d0
@=0x638fe3d0 type=3 nodeid=9 monitor=Y halias="TE_" loc="SEL$2" oname="D_1F0049A880000108" act=0 min=1 est=1 next=0x638fe2b0
@=0x638fe2b0 type=5 nodeid=4 monitor=Y halias="" loc="SEL$F5BB74E1" onames="DM_FOLDER_R1"@"SEL$1" "DM_SYSOBJECT_R2"@"SEL$1" act=5648 min=0 est=49 next=0x638fe250
@=0x638fe250 type=1 nodeid=6 monitor=Y halias="DM_SYSOBJECT_R2" loc="SEL$1" act=3 min=1 est=16 next=0x638fe1c0
@=0x638fe1c0 type=3 nodeid=7 monitor=Y halias="DM_SYSOBJECT_R2" loc="SEL$1" oname="D_1F0049A880000010" act=3 min=1 est=71 next=0x63a56820
@=0x63a56820 type=3 nodeid=5 monitor=Y halias="DM_FOLDER_R1" loc="SEL$1" oname="D_1F0049A880000016" act=1517 min=0 est=3 next=(nil)
*********** End Dump Context ***********

We also see some information about execution performance:

kkoarCopyCtx: [sql_id=b4rhzfw7d6vdp] origin=CFB old=0x63a565d0 new=0x7fe74e2153f0 copyCnt=1 copyClient=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 17679
CU gets: 0
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 51 (ms)
CPU Time: 51 (ms)
User I/O Time: 15 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********

They are labeled as ‘Best Stats’ because we had only one execution at that time.

Finally, the hints are dumped:

******** Begin CFB Hints [sql_id=b4rhzfw7d6vdp] xsc=0x7fe74e215748 ********
Dumping Hints
=============
atom_hint=(@=0x7fe74e21ebf0 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DM_SYSOBJECT_R2"@"SEL$1" MIN=3.000000 ) )
atom_hint=(@=0x7fe74e21e758 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) )
atom_hint=(@=0x7fe74e21e3f0 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) )
atom_hint=(@=0x7fe74e21dfd0 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" ROWS=1517.000000 ) )
atom_hint=(@=0x7fe74e21dc68 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" MIN=1517.000000 ) )
atom_hint=(@=0x7fe74e21d8c8 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DM_FOLDER_R1"@"SEL$1" ROWS=1517.000000 ) )
********** End CFB Hints **********

Those are exactly the hints we see in V$SQL_REOPTIMIZATION_HINTS.
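
If you want to query them yourself, a minimal sketch (column names as of 12c) could be:

select child_number, hint_text
from   v$sql_reoptimization_hints
where  sql_id = 'b4rhzfw7d6vdp'
order  by child_number;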

This is all we see for this first execution. The next execution starts with:

KKSMEC: Invalidating old cursor 0 with hash val = 1894156093
KKSMEC: Produced New cursor 1 with hash val = 955499861

As a consequence of child cursor 0 being marked as re-optimizable, the next execution invalidates it and creates a new child cursor 1.

Child cursor 1 – new plan 955499861 – 5.17 seconds

Here is the new plan we see after that second execution:

Plan hash value: 955499861
------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 30996 (100)| 1 |00:00:04.58 | 102K| 101K| | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:04.58 | 102K| 101K| | | |
| 2 | VIEW | VM_NWVW_2 | 1 | 12039 | 30996 (1)| 1491 |00:00:04.58 | 102K| 101K| | | |
| 3 | HASH UNIQUE | | 1 | 12039 | 30996 (1)| 1491 |00:00:04.58 | 102K| 101K| 941K| 941K| 2597K (0)|
|* 4 | HASH JOIN RIGHT SEMI | | 1 | 12039 | 30490 (1)| 4132 |00:00:04.57 | 102K| 101K| 12M| 3867K| 14M (0)|
| 5 | TABLE ACCESS FULL | DM_DOCUMENT_S | 1 | 213K| 210 (1)| 213K|00:00:00.01 | 741 | 0 | | | |
|* 6 | HASH JOIN | | 1 | 36463 | 29665 (1)| 5622 |00:00:04.51 | 101K| 101K| 1405K| 1183K| 2026K (0)|
|* 7 | HASH JOIN | | 1 | 36463 | 18397 (1)| 5622 |00:00:02.23 | 65103 | 65050 | 940K| 940K| 1339K (0)|
|* 8 | HASH JOIN | | 1 | 2222 | 14489 (1)| 1499 |00:00:01.58 | 51413 | 51369 | 992K| 992K| 1377K (0)|
|* 9 | HASH JOIN | | 1 | 2222 | 14120 (1)| 1499 |00:00:01.46 | 50088 | 50057 | 3494K| 1598K| 4145K (0)|
|* 10 | TABLE ACCESS FULL | DM_SYSOBJECT_S | 1 | 39235 | 10003 (1)| 39235 |00:00:00.83 | 36385 | 36376 | | | |
|* 11 | HASH JOIN | | 1 | 24899 | 3920 (1)| 5648 |00:00:00.62 | 13703 | 13681 | 1199K| 1199K| 1344K (0)|
|* 12 | INDEX RANGE SCAN | D_1F0049A880000016 | 1 | 1517 | 12 (0)| 1517 |00:00:00.01 | 13 | 0 | | | |
|* 13 | TABLE ACCESS FULL| DM_SYSOBJECT_R | 1 | 646K| 3906 (1)| 646K|00:00:00.50 | 13690 | 13681 | | | |
| 14 | TABLE ACCESS FULL | DM_FOLDER_S | 1 | 431K| 367 (1)| 431K|00:00:00.04 | 1325 | 1312 | | | |
|* 15 | TABLE ACCESS FULL | DM_SYSOBJECT_R | 1 | 646K| 3906 (1)| 646K|00:00:00.51 | 13690 | 13681 | | | |
|* 16 | TABLE ACCESS FULL | DM_SYSOBJECT_S | 1 | 646K| 10000 (1)| 646K|00:00:02.14 | 36385 | 36376 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------------------
Note
-----
- statistics feedback used for this statement

The note makes it clear that the estimations come from the previous run (statistics feedback), and we see that for most operations E-Rows = A-Rows. With those, a new plan has been chosen, with complex view merging: VM_NWVW_2. You can find clues about those internal view names on Jonathan Lewis' blog. Here, probably because the estimated number of rows is high, the subquery has been unnested: it is an 'EXISTS' subquery, which is transformed into a semi join and merged, with a DISTINCT applied at the end.

So we have a different plan, which is supposed to be better because it has been costed with more accurate cardinalities. The goal of this post is not to detail why the execution time is longer with this 'better' plan. If you look at the 'Reads' column you can see that the first one read all blocks from the buffer cache, while the second one had to do physical I/O for all of them. With nothing in the buffer cache, reading 101K blocks in multiblock reads may be faster than reading 17679 blocks, so the optimizer decision was not a bad one. I'll have to evaluate whether it is realistic to expect most of the blocks to be in the buffer cache in real production life, as the behavior in UAT is different. Some people will stop here, say that cardinality feedback is bad, disable it or even set optimizer_index_cost_adj to get the nested loops back, but things are more complex than that.

The important thing is that the optimizer doesn’t stop there and compares the new execution statistics with the previous one.

**************************************************************
kkocfbCompareExecStats : Current
Exec count: 1
CR gets: 102226
CU gets: 3
Disk Reads: 101426
Disk Writes: 0
IO Read Requests: 1633
IO Write Requests: 0
Bytes Read: 830881792
Bytes Written: 0
Bytes Exchanged with Storage: 830881792
Bytes Exchanged with Disk: 830881792
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 4586 (ms)
CPU Time: 1305 (ms)
User I/O Time: 3040 (ms)
**************************************************************
kkocfbCompareExecStats : Best
Exec count: 1
CR gets: 17679
CU gets: 0
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 51 (ms)
CPU Time: 51 (ms)
User I/O Time: 15 (us)
kkocfbCompareExecStats: improvement BG: 0.172935 CPU: 0.039555

The first execution, with the 'bad' statistics, is still the best one: this new execution shows an 'improvement' of only 0.17, which means it is about 5 times worse.

Then in the trace we see again that re-optimisation (reparsing) is considered:

Reparsing due to card est...
@=0x6a368338 type=5 nodeid=11 monitor=Y halias="" loc="SEL$582FA660" onames="SEL$608EC1F7"@"SEL$582FA660" "SEL$04458B50"@"SEL$582FA660" act=5648 min=0 est=24899 next=0x6a3682d8
Reparsing due to card est...
@=0x6a3687b0 type=5 nodeid=7 monitor=Y halias="" loc="SEL$582FA660" onames="SEL$608EC1F7"@"SEL$582FA660" "SEL$04458B50"@"SEL$582FA660" "SEL$FB0FE72C"@"SEL$33802F1B" "SEL$5"@"SEL$33802F1B" "SEL$07BDC5B4"@"SEL$636B5685" act=5622 min=0 est=36463 next=0x6a368750
Reparsing due to card est...
@=0x6a368990 type=5 nodeid=6 monitor=Y halias="" loc="SEL$582FA660" onames="SEL$608EC1F7"@"SEL$582FA660" "SEL$04458B50"@"SEL$582FA660" "SEL$FB0FE72C"@"SEL$33802F1B" "SEL$5"@"SEL$33802F1B" "SEL$07BDC5B4"@"SEL$636B5685" "SEL$FB0FE72C"@"SEL$4" act=5622 min=0 est=36463 next=0x6a368930
Reparsing due to card est...
@=0x6a368b90 type=5 nodeid=4 monitor=Y halias="" loc="SEL$582FA660" onames="SEL$608EC1F7"@"SEL$582FA660" "SEL$04458B50"@"SEL$582FA660" "SEL$FB0FE72C"@"SEL$33802F1B" "SEL$5"@"SEL$33802F1B" "SEL$07BDC5B4"@"SEL$636B5685" "SEL$FB0FE72C"@"SEL$4" "SEL$F5BB74E1"
@"SEL$4" act=4132 min=0 est=12039 next=0x6a368b30
Reparsing due to card est...
@=0x6a368d60 type=4 nodeid=3 monitor=Y halias="" loc="SEL$582FA660" act=1491 min=0 est=12039 next=0x6a368b90

An additional OPT_ESTIMATE hint is generated for the complex view merging query block:

atom_hint=(@=0x7fe74e21eb90 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" ROWS=1517.000000 ) )
atom_hint=(@=0x7fe74e21e7b0 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_FOLDER_R1"@"SEL$1" "D_1F0049A880000016" MIN=1517.000000 ) )
atom_hint=(@=0x7fe74e21e470 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DM_FOLDER_R1"@"SEL$1" ROWS=1517.000000 ) )
atom_hint=(@=0x7fe74e21e050 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_FILTER "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) )
atom_hint=(@=0x7fe74e21dce8 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" INDEX_SCAN "DM_SYSOBJECT_R2"@"SEL$1" "D_1F0049A880000010" MIN=3.000000 ) )
atom_hint=(@=0x7fe74e21da38 err=0 resol=0 used=0 token=1018 org=6 lvl=2 txt=OPT_ESTIMATE (@"SEL$582FA660" QUERY_BLOCK ROWS=1491.000000 ) )
atom_hint=(@=0x7fe74e21d600 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DM_SYSOBJECT_R2"@"SEL$1" MIN=3.000000 ) )

With this new cardinality estimation, the next execution will try to get a better plan, but it doesn't change the optimizer choice and the new child cursor gets the same execution plan:
KKSMEC: Invalidating old cursor 1 with hash val = 955499861
KKSMEC: Produced New cursor 2 with hash val = 955499861

Child cursor 2 – plan 955499861 again – 4.77 seconds

This is the third execution:

Plan hash value: 955499861
------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 30996 (100)| 1 |00:00:04.19 | 102K| 101K| | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:04.19 | 102K| 101K| | | |
| 2 | VIEW | VM_NWVW_2 | 1 | 1491 | 30996 (1)| 1491 |00:00:04.19 | 102K| 101K| | | |
| 3 | HASH UNIQUE | | 1 | 1491 | 30996 (1)| 1491 |00:00:04.19 | 102K| 101K| 941K| 941K| 1355K (0)|
|* 4 | HASH JOIN RIGHT SEMI | | 1 | 12039 | 30490 (1)| 4132 |00:00:04.19 | 102K| 101K| 12M| 3867K| 14M (0)|
| 5 | TABLE ACCESS FULL | DM_DOCUMENT_S | 1 | 213K| 210 (1)| 213K|00:00:00.01 | 740 | 0 | | | |
|* 6 | HASH JOIN | | 1 | 36463 | 29665 (1)| 5622 |00:00:04.12 | 101K| 101K| 1405K| 1183K| 2021K (0)|
|* 7 | HASH JOIN | | 1 | 36463 | 18397 (1)| 5622 |00:00:03.39 | 65102 | 65050 | 940K| 940K| 1359K (0)|
|* 8 | HASH JOIN | | 1 | 2222 | 14489 (1)| 1499 |00:00:02.94 | 51412 | 51369 | 992K| 992K| 1331K (0)|
|* 9 | HASH JOIN | | 1 | 2222 | 14120 (1)| 1499 |00:00:01.04 | 50088 | 50057 | 3494K| 1598K| 4145K (0)|
|* 10 | TABLE ACCESS FULL | DM_SYSOBJECT_S | 1 | 39235 | 10003 (1)| 39235 |00:00:00.47 | 36385 | 36376 | | | |
|* 11 | HASH JOIN | | 1 | 24899 | 3920 (1)| 5648 |00:00:00.55 | 13703 | 13681 | 1199K| 1199K| 1344K (0)|
|* 12 | INDEX RANGE SCAN | D_1F0049A880000016 | 1 | 1517 | 12 (0)| 1517 |00:00:00.01 | 13 | 0 | | | |
|* 13 | TABLE ACCESS FULL| DM_SYSOBJECT_R | 1 | 646K| 3906 (1)| 646K|00:00:00.43 | 13690 | 13681 | | | |
| 14 | TABLE ACCESS FULL | DM_FOLDER_S | 1 | 431K| 367 (1)| 431K|00:00:01.82 | 1324 | 1312 | | | |
|* 15 | TABLE ACCESS FULL | DM_SYSOBJECT_R | 1 | 646K| 3906 (1)| 646K|00:00:00.33 | 13690 | 13681 | | | |
|* 16 | TABLE ACCESS FULL | DM_SYSOBJECT_S | 1 | 646K| 10000 (1)| 646K|00:00:00.60 | 36385 | 36376 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------------------
Note
-----
- statistics feedback used for this statement

Same plan and about the same execution time here: the tables are large and the SGA is small, so the blocks still have to be read from disk.

*********** Begin Dump Context: best estimates ***********
 
**************************************************************
kkocfbCompareExecStats : Current
Exec count: 1
CR gets: 102224
CU gets: 3
Disk Reads: 101426
Disk Writes: 0
IO Read Requests: 1633
IO Write Requests: 0
Bytes Read: 830881792
Bytes Written: 0
Bytes Exchanged with Storage: 830881792
Bytes Exchanged with Disk: 830881792
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 4206 (ms)
CPU Time: 1279 (ms)
User I/O Time: 3084 (ms)
**************************************************************
kkocfbCompareExecStats : Best
Exec count: 1
CR gets: 17679
CU gets: 0
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 51 (ms)
CPU Time: 51 (ms)
User I/O Time: 15 (us)
kkocfbCompareExecStats: improvement BG: 0.172939 CPU: 0.040363

So where are we now? We had an execution that was based on bad estimations. Then two tries with good estimations, but because of different buffer cache behavior they end up about 5 times slower. There is nothing else to try.

The good thing is that the optimizer admits it cannot do better and falls back to the best execution time, now considered as the best estimate:

kkocfbCheckCardEst: reparse using best estimates
...
kkocfbCopyCardCtx: No best stats found

We see no OPT_ESTIMATE hints here, which is why there was nothing in V$SQL_REOPTIMIZATION_HINTS for cursor 2, but this cursor is still marked as re-optimizable and the next execution invalidates it:

KKSMEC: Invalidating old cursor 2 with hash val = 955499861
KKSMEC: Produced New cursor 3 with hash val = 1894156093

We see that we are back to the original plan, which is expected because the static statistics have not changed, and there is no statistics feedback this time.

Child cursor 3 – back to plan 1894156093 – 0.5 seconds

This is the plan that is used for all subsequent executions now.

Plan hash value: 1894156093
----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 171 (100)| 1 |00:00:00.04 | 17677 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.04 | 17677 |
| 2 | NESTED LOOPS | | 1 | 1 | 116 (0)| 1491 |00:00:00.04 | 17677 |
| 3 | NESTED LOOPS | | 1 | 1 | 115 (0)| 1491 |00:00:00.04 | 17454 |
| 4 | NESTED LOOPS | | 1 | 49 | 17 (0)| 5648 |00:00:00.01 | 536 |
|* 5 | INDEX RANGE SCAN | D_1F0049A880000016 | 1 | 3 | 3 (0)| 1517 |00:00:00.01 | 13 |
| 6 | TABLE ACCESS BY INDEX ROWID BATCHED | DM_SYSOBJECT_R | 1517 | 16 | 10 (0)| 5648 |00:00:00.01 | 523 |
|* 7 | INDEX RANGE SCAN | D_1F0049A880000010 | 1517 | 71 | 2 (0)| 5648 |00:00:00.01 | 249 |
|* 8 | TABLE ACCESS BY INDEX ROWID | DM_SYSOBJECT_S | 5648 | 1 | 2 (0)| 1491 |00:00:00.03 | 16918 |
|* 9 | INDEX UNIQUE SCAN | D_1F0049A880000108 | 5648 | 1 | 1 (0)| 1491 |00:00:00.03 | 15427 |
| 10 | NESTED LOOPS SEMI | | 5648 | 2 | 25 (0)| 1491 |00:00:00.02 | 14827 |
| 11 | NESTED LOOPS | | 5648 | 7 | 18 (0)| 2981 |00:00:00.02 | 12868 |
| 12 | TABLE ACCESS BY INDEX ROWID BATCHED| DM_SYSOBJECT_R | 5648 | 71 | 4 (0)| 2981 |00:00:00.01 | 7747 |
|* 13 | INDEX RANGE SCAN | D_1F0049A880000010 | 5648 | 16 | 3 (0)| 2981 |00:00:00.01 | 6145 |
|* 14 | TABLE ACCESS BY INDEX ROWID | DM_SYSOBJECT_S | 2981 | 1 | 2 (0)| 2981 |00:00:00.01 | 5121 |
|* 15 | INDEX UNIQUE SCAN | D_1F0049A880000108 | 2981 | 1 | 1 (0)| 2981 |00:00:00.01 | 2140 |
|* 16 | INDEX UNIQUE SCAN | D_1F0049A880000145 | 2981 | 52759 | 1 (0)| 1491 |00:00:00.01 | 1959 |
|* 17 | INDEX UNIQUE SCAN | D_1F0049A880000142 | 1491 | 1 | 1 (0)| 1491 |00:00:00.01 | 223 |
----------------------------------------------------------------------------------------------------------------------------------------

After a few tries to get a better plan, the optimizer finally switched back to the first one because it was the best in terms of response time (I don't know exactly which execution statistics are used for this decision, elapsed time is just my guess here).

The interesting point here is to understand that you can see a re-optimized cursor without statistics feedback (a quick check against V$SQL is sketched after this list):

  • No rows for the previous cursor in V$SQL_REOPTIMIZATION_HINTS
  • No 'statistics feedback' note in the new cursor plan
  • A difference between E-Rows and A-Rows in the new plan
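Here is a minimal sketch to see which child cursors were marked as re-optimizable (IS_REOPTIMIZABLE is a 12c column of V$SQL):

select child_number, plan_hash_value, executions, is_reoptimizable,
       buffer_gets, round(elapsed_time/1e6,2) elapsed_sec
from   v$sql
where  sql_id = 'b4rhzfw7d6vdp'
order  by child_number;
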
So what?

SQL optimization is a complex task and there is no such thing as an execution being 'fast' or 'slow', an execution plan being 'good' or 'bad', or an optimizer decision being 'right' or 'wrong'. What is fast after several similar executions can be slow on a busy system because fewer blocks remain in the cache. What is slow at a time when the storage is busy may be fast at another time of the day. What is fast with one single user may raise more concurrency contention on a busy system. Cardinality feedback is a reactive attempt to improve an execution plan. On average, things go better with it, but it is not abnormal that a few cases go wrong for a few executions. You can't blame the optimizer for that, and hasty conclusions or optimizer parameter tweaking are not sustainable solutions. And don't forget that if your data model is well designed, then the critical queries should have one clear optimal access path which does not depend on a small difference in the estimated number of rows.

The only thing I can always conclude when I see cardinality feedback going wrong is that there is something to fix in the design of the data model, the statistics gathering, and/or the query design. When statistics feedback gives a worse execution plan, it is the consequence of a combination of:

  • mis-estimation of cardinalities: bad, insufficient, or stale statistics
  • mis-estimation of response time: bad system statistics, untypical memory sizing, unrepresentative execution context
  • no clear optimal access path: sub-optimal indexing, lack of partitioning,…

It is a good thing to have the auto-reoptimization coming back to the initial plan when nothing better has been observed. I would love to see more control over it: for example, a hint that sets a threshold of execution time under which the optimizer should not try to find anything better. I filed this idea at https://community.oracle.com/ideas/17514 and you can vote for it.

Update 2-APR-2017

I was not clear in this post, but this is the first time I have observed this behavior (multiple re-optimizations and then back to the original plan), so I'm not sure about the reasons and the exact conditions required. This was on 12.1.0.2 with the JAN17 PSU and the two Adaptive Statistics backports from 12cR2, with adaptive plans set to false and no bind variables.

 

The post When automatic reoptimization plan is less efficient appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 13 – create database

Yann Neuhaus - Sun, 2017-04-02 04:02

Following the last post about which tools you can use to create your PostgreSQL instance, in this post we'll look at how you can do the "create database" part that you would do in Oracle when not using the Database Configuration Assistant. Of course you can create a database in PostgreSQL, but it is not the same as in Oracle. To actually create your Oracle database you would do something like this once you have the instance started in nomount mode:

startup nomount pfile="/u01/app/oracle/admin/DB4/create/init.ora";
CREATE DATABASE "DB4"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE '/u02/oradata/DB4/system01DB4.dbf' SIZE 700M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE 2048M
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/u02/oradata/DB4/sysaux01DB4.dbf' SIZE 600M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE 2048M
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/u02/oradata/DB4/temp01DB4.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT  640K MAXSIZE 2048M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE  '/u02/oradata/DB4/undotbs01DB4.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT  5120K MAXSIZE 2048M
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('/u03/oradata/DB4/redog1m1DB4.dbf','/u04/oradata/DB4/redog1m2DB4.dbf') SIZE 50M,
GROUP 2 ('/u03/oradata/DB4/redog2m1DB4.dbf','/u04/oradata/DB4/redog2m2DB4.dbf') SIZE 50M,
GROUP 3 ('/u03/oradata/DB4/redog3m1DB4.dbf','/u04/oradata/DB4/redog3m2DB4.dbf') SIZE 50M,
GROUP 4 ('/u03/oradata/DB4/redog4m1DB4.dbf','/u04/oradata/DB4/redog4m2DB4.dbf') SIZE 50M,
GROUP 5 ('/u03/oradata/DB4/redog5m1DB4.dbf','/u04/oradata/DB4/redog5m2DB4.dbf') SIZE 50M,
GROUP 6 ('/u03/oradata/DB4/redog6m1DB4.dbf','/u04/oradata/DB4/redog6m2DB4.dbf') SIZE 50M
USER SYS IDENTIFIED BY "&&sysPassword" USER SYSTEM IDENTIFIED BY "&&systemPassword"
enable pluggable database
seed file_name_convert=('/u02/oradata/DB4/system01DB4.dbf','/u02/oradata/DB4/pdbseed/system01DB4.dbf','/u02/oradata/DB4/sysaux01DB4.dbf','/u02/oradata/DB4/pdbseed/sysaux01DB4.dbf','/u02/oradata/DB4/temp01DB4.dbf','/u02/oradata/DB4/pdbseed/temp01DB4.dbf','/u02/oradata/DB4/undotbs01DB4.dbf','/u02/oradata/DB4/pdbseed/undotbs01DB4.dbf') LOCAL UNDO ON;

Once this has completed you start creating the catalog and installing the additional options you need for your application (e.g. Oracle Text or Oracle Spatial). How does that work in PostgreSQL?

In the last post we had a very quick look at initdb. To create the PostgreSQL database cluster you have to use initdb, there is no other possibility. When you take a look at the options you can provide to initdb, there are not too many. The only mandatory parameter is "-D" or "--pgdata", which tells initdb where you want the files created on disk:

postgres@pgbox:/home/postgres/ [pg962final] initdb -D /home/postgres/test/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /home/postgres/test ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /home/postgres/test/ -l logfile start

There are some important messages which got printed to the screen. The first two lines tell us that all the files will be owned by the operating system user which invoked initdb, postgres, and that the same user must be used to start the instance. Then it gets more interesting as it is about the default encoding/character set for the template database. When you do not specify what you want, you get the defaults of your operating system session:

 
postgres@pgbox:/home/postgres/ [pg962final] locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=de_CH.UTF-8
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=de_CH.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=de_CH.UTF-8
LC_NAME=de_CH.UTF-8
LC_ADDRESS=de_CH.UTF-8
LC_TELEPHONE=de_CH.UTF-8
LC_MEASUREMENT=de_CH.UTF-8
LC_IDENTIFICATION=de_CH.UTF-8
LC_ALL=

Of course you can override that by passing any of the supported character sets to initdb, e.g.:

 
postgres@pgbox:/home/postgres/ [pg962final] rm -rf test
postgres@pgbox:/home/postgres/ [pg962final] mkdir test
postgres@pgbox:/home/postgres/ [pg962final] initdb -D test --encoding=LATIN1 --locale=de_DE

You can control how sorting and the display of numbers, money and so on happen by specifying the various "--lc-*" parameters, e.g.:

 
postgres@pgbox:/home/postgres/ [pg962final] rm -rf test
postgres@pgbox:/home/postgres/ [pg962final] mkdir test
postgres@pgbox:/home/postgres/ [pg962final] initdb -D test --encoding=LATIN1 --locale=de_DE --lc-messages=en_US --lc-monetary=de_DE

PostgreSQL comes with built-in full text search, and the line below the encoding output tells you that the default configuration will be English. You can override this as well with the "--text-search-config" parameter of initdb.

Maybe the most important message is this one: "Data page checksums are disabled.". This means that PostgreSQL will not use checksums to detect silent data corruption. Of course checksums introduce some overhead when enabled, but your data usually is important, isn't it? You can enable them by using the "--data-checksums" switch of initdb, and this cannot be changed afterwards.
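
For example, a cluster with checksums enabled (and, while we are at it, a non-default text search configuration) could be initialized like this, reusing the test directory from above as a sketch:

postgres@pgbox:/home/postgres/ [pg962final] rm -rf test
postgres@pgbox:/home/postgres/ [pg962final] mkdir test
postgres@pgbox:/home/postgres/ [pg962final] initdb -D test --data-checksums --text-search-config=german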

The last message we will look at for now is this one: “fixing permissions on existing directory /home/postgres/test … ok”. What does that mean? When you look at the permissions of the files and directories that got created by initdb you’ll notice that only the owner of the files and directories has permission (no permission for group and world):

drwx------. 19 postgres postgres      4096 Mar 31 11:07 test
postgres@pgbox:/home/postgres/ [pg962final] ls -al test/
total 56
drwx------. 19 postgres postgres  4096 Mar 31 11:07 .
drwx------. 10 postgres postgres  4096 Mar 31 10:51 ..
drwx------.  5 postgres postgres    38 Mar 31 11:07 base
drwx------.  2 postgres postgres  4096 Mar 31 11:07 global
drwx------.  2 postgres postgres    17 Mar 31 11:07 pg_clog
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_commit_ts
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_dynshmem
-rw-------.  1 postgres postgres  4468 Mar 31 11:07 pg_hba.conf
-rw-------.  1 postgres postgres  1636 Mar 31 11:07 pg_ident.conf
drwx------.  4 postgres postgres    37 Mar 31 11:07 pg_logical
drwx------.  4 postgres postgres    34 Mar 31 11:07 pg_multixact
drwx------.  2 postgres postgres    17 Mar 31 11:07 pg_notify
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_replslot
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_serial
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_snapshots
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_stat
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_stat_tmp
drwx------.  2 postgres postgres    17 Mar 31 11:07 pg_subtrans
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_tblspc
drwx------.  2 postgres postgres     6 Mar 31 11:07 pg_twophase
-rw-------.  1 postgres postgres     4 Mar 31 11:07 PG_VERSION
drwx------.  3 postgres postgres    58 Mar 31 11:07 pg_xlog
-rw-------.  1 postgres postgres    88 Mar 31 11:07 postgresql.auto.conf
-rw-------.  1 postgres postgres 22258 Mar 31 11:07 postgresql.conf

When you change that PostgreSQL will refuse to start:

postgres@pgbox:/home/postgres/ [pg962final] chmod 770 test/
postgres@pgbox:/home/postgres/ [pg962final] pg_ctl -D test/ start
server starting
postgres@pgbox:/home/postgres/ [pg962final] FATAL:  data directory "/home/postgres/test" has group or world access
DETAIL:  Permissions should be u=rwx (0700).

Now that we have everything initialized on disk we are ready to start the instance:

postgres@pgbox:/home/postgres/ [pg962final] pg_ctl -D /home/postgres/test/ start
postgres@pgbox:/home/postgres/ [pg962final] LOG:  database system was shut down at 2017-03-31 11:07:05 CEST
LOG:  MultiXact member wraparound protections are now enabled
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
postgres@pgbox:/home/postgres/ [pg962final] psql postgres
psql (9.6.2 dbi services build)
Type "help" for help.

postgres=

From now on you can create a database:

postgres=# create database mydb;
CREATE DATABASE
postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 mydb      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres

Check here if you want to know what the other databases are for. When you compare that to the Oracle "create database" statement, what PostgreSQL creates here is something like a pluggable database, and you can create as many as you want. Inside a database you have the usual objects (schemas, tables, views, …). You can even change the encoding for new databases:

postgres=# create database mydb2 encoding='LATIN1' LC_COLLATE='de_CH.iso88591' LC_CTYPE='de_CH.iso88591' template=template0;
CREATE DATABASE
postgres=# \l
                                     List of databases
   Name    |  Owner   | Encoding |    Collate     |     Ctype      |   Access privileges   
-----------+----------+----------+----------------+----------------+-----------------------
 mydb      | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | 
 mydb2     | postgres | LATIN1   | de_CH.iso88591 | de_CH.iso88591 | 
 postgres  | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | 
 template0 | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | =c/postgres          +
           |          |          |                |                | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | =c/postgres          +
           |          |          |                |                | postgres=CTc/postgres
(5 rows)

Important to know: users (roles) are global to the cluster, so you cannot create users inside a database, but of course you can grant different users access to databases, and users can be owners of databases:

postgres=# alter database mydb owner to myusr;
ALTER DATABASE
postgres=# \l
                                     List of databases
   Name    |  Owner   | Encoding |    Collate     |     Ctype      |   Access privileges   
-----------+----------+----------+----------------+----------------+-----------------------
 mydb      | myusr    | UTF8     | en_US.UTF-8    | en_US.UTF-8    | 
 mydb2     | postgres | LATIN1   | de_CH.iso88591 | de_CH.iso88591 | 
 postgres  | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | 
 template0 | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | =c/postgres          +
           |          |          |                |                | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8    | en_US.UTF-8    | =c/postgres          +
           |          |          |                |                | postgres=CTc/postgres
(5 rows)
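
For completeness, a role like myusr and its access to a database can be created with something like this (just a sketch, the password is obviously an example):

postgres=# create role myusr login password 'secret';
CREATE ROLE
postgres=# grant connect on database mydb to myusr;
GRANT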

How can you then install options into the databases? There are none, everything is included. What probably comes closest to what Oracle calls options are extensions or modules. Some of them are provided by default, and you can usually find them in the "share/extension" directory where you installed the PostgreSQL binaries:

postgres@pgbox:/u01/app/postgres/product/96/db_2/ [pg962final] ls
bin  include  lib  share
postgres@pgbox:/u01/app/postgres/product/96/db_2/ [pg962final] ls share/extension/
adminpack--1.0.sql                  hstore--1.3--1.4.sql                  pageinspect.control                      plperlu--unpackaged--1.0.sql
adminpack.control                   hstore--1.4.sql                       pageinspect--unpackaged--1.0.sql         plpgsql--1.0.sql
...

All of those can be installed per database, e.g.:

postgres@pgbox:/home/postgres/ [pg962final] psql postgres
psql (9.6.2 dbi services build)
Type "help" for help.

postgres=# \connect mydb
You are now connected to database "mydb" as user "postgres".
mydb=# create extension hstore;
CREATE EXTENSION
mydb=# \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

mydb=# 

Others are not available by default and you usually need to download them from GitHub, e.g. cstore.

For an overview you can check the PostgreSQL Extension Network.

Conclusion: You need to use initdb to initialize all the files for your PostgreSQL instance on disk. Use pg_ctl to start your instance and then you are ready to create databases. Hope this helps you get started with PostgreSQL.

 

The post Can I do it with PostgreSQL? – 13 – create database appeared first on Blog dbi services.

Slides from Oracle PaaS Forum 2017 - Oracle JET and ADF BC REST Production Experience with Oracle Java Cloud

Andrejus Baranovski - Sun, 2017-04-02 01:56
My colleague Florin Marcus (twitter: @FlorinMarcus) was attending and presenting at the Oracle PaaS 2017 forum in Split, Croatia last week. He was explaining our production experience with Oracle Java Cloud Service and running an Oracle JET/ADF BC REST system on a Cloud instance.

The slides are uploaded to SlideShare; you can go through them and read about a real production app built with JET and ADF BC REST running on Oracle Java Cloud. If you are interested in more details, let me know and I could show a demo:



Exciting news from Oracle PaaS Forum 2017: Red Samurai won the award for Outstanding Java Cloud Service Contribution 2017:


This award was received for implementing an Oracle JET/ADF BC REST app and running it on Oracle Java Cloud in production. Read more about it in Red Samurai and Oracle PaaS JCS Success - JET/ADF BC REST Cloud Production Application.

RTI and Actionable BI

Dylan's BI Notes - Sat, 2017-04-01 15:59
RTI stands for Response to Intervention. It is one of the Data Driven Instruction methods: a way to monitor students’ progress, determine what interventions students may need, assign the appropriate intervention to them, and monitor the progress closely. This is a good demonstration of the need for actionable BI. The system collects the […]
Categories: BI & Warehousing

Goldengate 12c Troubleshooting XAGENABLE

Michael Dinh - Sat, 2017-04-01 09:52

Environment:
Oracle RAC with DBFS running as oracle
Goldengate12c running as ggsuser using DBFS and XAGENABLE
Monitoring is running as monitor

ERROR: Transparent Integration with XAG is Enabled but CRS/XAG is not Available (Doc ID 2240440.1)

Modify GLOBALS per Doc ID 2240440.1

$ cat GLOBALS

setenv (GRID_HOME='/u01/app/12.1.0/grid')
XAGENABLE

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160823 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160805.1058_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  5 2016 23:35:08
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.



2017-03-31 11:24:06  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.

*** Could not open error log ggserr.log (error 13,Permission denied) ***

2017-03-31 11:24:06  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.
ERROR: Transparent Integration with XAG is enabled but CRS/XAG is not available.
chmod 775 ggserr.log and set umask 002 in .bash_profile

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160823 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160805.1058_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  5 2016 23:35:08
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.



2017-03-31 11:27:58  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.

2017-03-31 11:27:58  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.
ERROR: Transparent Integration with XAG is enabled but CRS/XAG is not available.

$ export GRID_HOME=/u01/app/12.1.0/grid
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160823 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160805.1058_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  5 2016 23:35:08
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.



2017-03-31 11:29:40  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.

2017-03-31 11:29:40  INFO    OGG-02095  Successfully set environment variable GRID_HOME=/u01/app/12.1.0/grid.

GGSCI 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt   XAG                           


MANAGER     RUNNING                                                  MANAGED/xag.ggs.goldengate  


GGSCI 2> exit

See anything wrong so far? The Extracts are running but are not shown.

As it turns out from an strace of ggsci, there is no read/write permission:
open(“./dirchk/*.cpe”, O_RDWR) = -1 EACCES (Permission denied)
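
For reference, this is roughly how such a failure can be captured (a sketch; the output file path is just an example):

$ echo "info all" | strace -f -e trace=file -o /tmp/ggsci.strc ./ggsci
$ grep EACCES /tmp/ggsci.strc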

Remember my post about XAG from many moons ago, about role separation and what a PITA it is when implemented incorrectly?


Documentum – Increase the number of concurrent_sessions

Yann Neuhaus - Sat, 2017-04-01 07:01

In Documentum, there is a parameter named concurrent_sessions which basically defines how many sessions a Content Server can open simultaneously. This parameter is defined in the server.ini file (or server_<dm_server_config.object_name>.ini on a Remote Content Server) of each docbase and it has a default value of 100.

An empty Content Server with an IndexAgent and D2 installed (without any users) will usually take around 10 sessions for the jobs, the searches, and so on. As soon as there are users in the environment, the number of concurrent sessions will quickly grow and therefore, depending on how many users you have, you may (will) need to increase this limit. To be more precise, concurrent_sessions controls the number of connections the server can handle concurrently. This number must take into account not only the number of users who are using the repository concurrently, but also the operations those users are executing. Some operations require a separate connection to complete. For example:

  • Issuing an IDfQuery.execute method with the readquery flag set to FALSE causes an internal connection request
  • Executing an apply method or an EXECUTE DQL statement starts another connection
  • When the agent exec executes a job, it generally requires two additional connections
  • Issuing a full-text query requires an additional connection
  • and so on…

 

Do you already know how to increase the number of allowed concurrent sessions? I’m sure you do, it’s pretty easy:

  1. Calculate the appropriate number of concurrent sessions needed based on the information provided above
  2. Open the file $DOCUMENTUM/dba/config/DOCBASE/server.ini and replace "concurrent_sessions = 100" with the desired value ("concurrent_sessions = 500" for example)
  3. Restart the docbase DOCBASE using your custom scripts or the default Documentum scripts under $DOCUMENTUM/dba (see the sketch after this list)
  4. Ensure that the Database used can handle the new number of sessions properly and see if you need to increase the sessions/processes for that
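
A minimal command-line sketch of steps 2 and 3 (DOCBASE is a placeholder; dm_shutdown_DOCBASE/dm_start_DOCBASE are the default scripts, adapt if you use custom ones):

sed -i 's/^concurrent_sessions = .*/concurrent_sessions = 500/' $DOCUMENTUM/dba/config/DOCBASE/server.ini
$DOCUMENTUM/dba/dm_shutdown_DOCBASE
$DOCUMENTUM/dba/dm_start_DOCBASE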

 

To know how many sessions are currently in use, it's pretty simple: you can just execute the DQL "execute show_sessions". Be aware, however, that all sessions will be listed, and that's not exactly what we want: you need to keep only the ones with a dormancy_status of Active in the final count, otherwise the value will be wrong. The number of active sessions is not only linked to a docbase but also to a Content Server (to be more precise, it is linked to a dm_server_config object). This means that if you have a High Availability environment, each Content Server (each dm_server_config object) will have its own number of active sessions, and you need to take this into account when calculating how many concurrent sessions you need.
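
A possible way to run it from the command line (a sketch; DOCBASE and the credentials are placeholders, and the exact output layout may vary between versions), after which you keep only the sessions whose dormancy_status is Active:

idql DOCBASE -Udmadmin -Ppassword <<EOF
execute show_sessions
go
EOF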

For example if you calculated that you will need a total of 500 concurrent sessions (again it’s not the number of concurrent users!) for a docbase, then:

  • If you have only one Content Server, you will need to set "concurrent_sessions = 500" in the server.ini file of this docbase.
  • If you have two Content Servers (two dm_server_config objects) for the docbase, then you can just set "concurrent_sessions = 275" in each server.ini file. Yes, I know 2*275 isn't exactly 500, but that's because each Content Server will need its own internal sessions for the jobs, searches, and so on. In addition to that, the Content Servers might need to talk to each other, so these 25 additional sessions per server won't hurt.

 

Now, does the above procedure work for any value of concurrent_sessions? Well, the answer to this question is actually the purpose of this blog: yes and no. From a logical point of view there is no restriction on this value, but from a technical point of view there is… A few months ago, at one of our customers, I was configuring a new application which had a requirement of 2150 concurrent sessions across a High Availability environment composed of two Content Servers. Based on the information provided above, I started the configuration with 1100 concurrent sessions on each Content Server to match the requirement. But when I tried to start the docbase again, I got the following error in the docbase log file ($DOCUMENTUM/dba/log/DOCBASE.log):

***********************************************************

Program error: Illegal parameter value for concurrent_sessions: 1100

***********************************************************

Usage: ./documentum -docbase_name <docbase name> -init_file <filename> [-o<option>]

    -docbase_name : name of docbase
    -init_file    : name of server.ini file (including path) which
                    contains the server startup parameters

Usage: ./documentum -h

 

As you can see, the docbase refuses to start with a number of concurrent sessions set to 1100. What’s the reason behind that? There is an artificial limit set to 1020. This is actually mentioned in the documentation:
The maximum number of concurrent sessions is dependent on the operating system of the server host machine. The limit is:

  • 3275 for Windows systems
  • 1020 for Linux, Solaris, and AIX systems

 

So why is there a limit? Why 1020? This limit is linked to the FD_SETSIZE value. The documentation on FD_SETSIZE says the following:

An fd_set is a fixed size buffer. Executing FD_CLR() or FD_SET() with a value of fd that is negative or is equal to or larger than FD_SETSIZE will result in undefined behavior. Moreover, POSIX requires fd to be a valid file descriptor.

 

Thus FD_SETSIZE doesn’t explicitly limit the number of file descriptors that can be worked on with the system select() call. Inside every UNIX process, for its PID, Documentum maintain a corresponding list (of pointers) of file descriptors. In UNIX based systems, every Documentum session is created as a separate process. Since the number of sessions created directly depends on the number of file descriptors in an OS, each of these processes will be having a list of the file descriptors within their process which will be taking a good chunk of physical memory. With this technical reasoning the value 1020 has been set to be the max concurrent sessions available by default in Documentum.

So basically this limit of 1020 was set arbitrarily by EMC to stay within the default OS (kernel) value of 1024 (which can be checked with "ulimit -Sa" for example). An EMC internal task (CS-40186) was opened to discuss this point and the possibility of increasing this maximum number. Since the current default limit only exists in regard to the default OS value of 1024, if that value is increased to 4096 for example (which was our case from the beginning), then there is no real reason to be stuck at 1020 on the Documentum side. The Engineering Team implemented a change in the binaries that allows the limit to be changed: this is done by setting the environment variable DM_FD_SETSIZE.

Therefore, to increase the concurrent sessions above 1020 (1100 in this example), in addition to the steps already mentioned before, you also need to do the following (depending on your OS, you might need to update .bashrc or .profile instead):

echo "export DM_FD_SETSIZE=1200" >> ~/.bash_profile
source ~/.bash_profile
$DOCUMENTUM/dba/dm_start_DOCBASE

 

With the environment variable DM_FD_SETSIZE now set to 1200, we can use 1100 concurrent sessions without issue. The value that will actually be used for concurrent_sessions is still the one from the server.ini file; we just need to define a DM_FD_SETSIZE variable with a value equal to or bigger than what we want. Also, I didn't mention ulimit, but of course you also need to set the limits of your OS accordingly (on Linux this is done in /etc/security/limits.conf or in any file under /etc/security/limits.d/).
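
As an illustration only (the installation owner name and the values are examples; align them with DM_FD_SETSIZE and your expected number of sessions), such limits could look like this in /etc/security/limits.d/documentum.conf:

dmadmin    soft    nofile    4096
dmadmin    hard    nofile    4096
dmadmin    soft    nproc     4096
dmadmin    hard    nproc     4096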

 

 

The post Documentum – Increase the number of concurrent_sessions appeared first on Blog dbi services.
