Feed aggregator

Effective Way to Get Changed Rows in ADF BC API

Andrejus Baranovski - Tue, 2018-05-29 14:32
Have you ever wondered how to get all changed rows in a transaction, without iterating through the entire row set? It turns out to be pretty simple with the ADF BC API method getAllEntityInstancesIterator, which is invoked on the Entity Definition attached to the current VO.

The method works well: it returns changed rows from different row set pages, not only from the current one. In my experiment, I changed a couple of rows in the first page:


And a couple of rows in the 5th page. I also removed one row and created another:


The method returns information about all changed rows, as well as about deleted and new ones:


Here is an example of getAllEntityInstancesIterator usage in a VO Impl class. This method helps to get all changed rows in the current transaction, which is very handy:
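The reason this is cheap is that the entity cache already knows which rows were touched, so nothing has to walk the whole row set. A minimal, language-agnostic sketch of that bookkeeping in Python (illustrative only; the class and method names here are invented for the sketch and are not the ADF BC API):

```python
class Entity:
    """A row in the entity cache; registers itself with the
    transaction the first time one of its attributes changes."""
    def __init__(self, txn, values):
        self._txn = txn
        self._values = dict(values)
        self.state = "UNMODIFIED"

    def set_attribute(self, name, value):
        self._values[name] = value
        if self.state == "UNMODIFIED":
            self.state = "MODIFIED"
            self._txn.dirty.add(self)   # O(1) registration on first change

class Transaction:
    """Keeps a registry of changed entities, so listing them
    costs O(changed rows), not O(all rows)."""
    def __init__(self):
        self.dirty = set()

    def changed_entities(self):
        # No scan of the full row set: only registered rows are visited.
        return list(self.dirty)
```

With this kind of registry, asking for the changed rows of a 10,000-row set that had three modifications touches exactly three objects.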


Sample application source code is available on GitHub.

EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers

Yann Neuhaus - Tue, 2018-05-29 13:42

In the last post we had a look at the basic configuration of EDB EFM and confirmed that we can do manual switchovers for maintenance operations. Being able to do a switchover and switchback is fine, but what really matters are automatic failovers. Without automatic failover this setup would be somewhat useless, as a) EDB EFM is supposed to do exactly that and b) without it we would need to implement something on our own. So let's have a look at whether that works.

For this little test I scaled the PostgreSQL instances to three, meaning one master and two replica instances. Three is the minimum required, as EFM needs at least three agents so that at least two can decide what to do. In a traditional setup this is usually achieved by making a non-database host the EDB EFM witness host. In this OpenShift/MiniShift setup we do not have a witness node, so the minimum number of database hosts (where EFM runs) is three. This is how it currently looks:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-dzcj6   0/1       Running   2          3h        172.17.0.9    localhost   standbydb
edb-as10-0-1-jmfjk   0/1       Running   2          4h        172.17.0.3    localhost   masterdb
edb-as10-0-1-rlzdb   0/1       Running   2          3h        172.17.0.6    localhost   standbydb
edb-pgpool-1-jtwg2   0/1       Running   2          4h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   0/1       Running   2          4h        172.17.0.5    localhost   queryrouter
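The "at least three agents" requirement is plain majority arithmetic: with only two agents, losing one leaves a single survivor that can never form a majority of the original membership. A tiny illustrative sketch of that arithmetic (not EFM code):

```python
def majority(n_agents):
    """Smallest number of agreeing agents that is a strict majority."""
    return n_agents // 2 + 1

def survivors_can_decide(n_agents, failures=1):
    """After `failures` agents are lost, can the remaining agents
    still reach a majority of the original membership?"""
    return n_agents - failures >= majority(n_agents)
```

With two agents a single failure is fatal (1 survivor < majority of 2), while with three agents the two survivors can still agree on a failover.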

What I will do is kill the master pod (edb-as10-0-1-jmfjk) to see what happens to my current setup. Before I do that, let's open a second session into one of the standby pods and check the current EFM status:

dwe@dwe:~$ oc rsh edb-as10-0-1-dzcj6
sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      172.17.0.3           UP     UP        
	Standby     172.17.0.6           UP     UP        
	Standby     172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.3

Standby priority host list:
	172.17.0.9 172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.3           0/F000060        
	Standby     172.17.0.9           0/F000060        
	Standby     172.17.0.6           0/F000060        

	Standby database(s) in sync with master. It is safe to promote.

Looks fine. So, from the first session, let's use the oc command line utility to kill the master pod. What I expect at a minimum is that one of the standby instances gets promoted and the remaining standby is reconfigured to connect to the new master.

dwe@dwe:~$ oc delete pod edb-as10-0-1-jmfjk
pod "edb-as10-0-1-jmfjk" deleted
dwe@dwe:~$ oc get pods
NAME                 READY     STATUS        RESTARTS   AGE
edb-as10-0-1-dzcj6   1/1       Running       2          3h
edb-as10-0-1-jmfjk   1/1       Terminating   2          4h
edb-as10-0-1-rlzdb   1/1       Running       2          4h
edb-as10-0-1-snnxc   0/1       Running       0          2s
edb-bart-1-s2fgj     1/1       Running       2          4h
edb-pgpool-1-jtwg2   1/1       Running       2          4h
edb-pgpool-1-pjxzs   1/1       Running       2          4h

The master pod is terminating. What does the second session in the standby pod tell us when we ask EFM for the cluster status?

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.6           UP     UP        
	Promoting   172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000250        
	Standby     172.17.0.6           0/F000140        

	One or more standby databases are not in sync with the master database.

The master is gone and one of the standby instances gets promoted. Let's wait a few seconds and check again:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Idle        172.17.0.11          UP     UNKNOWN   
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

Idle Node Status (idle nodes ignored in XLog location comparisons):

	Address              XLog Loc         Info
	--------------------------------------------------------------
	172.17.0.11          0/F000288        DB is in recovery.

The new master is ready and the remaining standby has been reconfigured to connect to it. Even better, a few seconds later the old master is back as a new standby:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.11          UP     UP        
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6 172.17.0.11

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        
	Standby     172.17.0.11          0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

This works fine and is what you must have in a good setup. When we kill one of the standby pods, the expected result is that the pod restarts and the instance comes back as a standby. Let's check:

20:32:47 dwe@dwe:~$ oc delete pod edb-as10-0-1-hf27d
pod "edb-as10-0-1-hf27d" deleted
dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS        RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   0/1       Running       0          3s        172.17.0.6    localhost   
edb-as10-0-1-dzcj6   1/1       Running       2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-hf27d   0/1       Terminating   1          12m       172.17.0.3    localhost   
edb-as10-0-1-snnxc   1/1       Running       0          20m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running       2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running       2          5h        172.17.0.5    localhost   queryrouter

A few moments later we have the previous status:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   1/1       Running   0          4m        172.17.0.6    localhost   standbydb
edb-as10-0-1-dzcj6   1/1       Running   2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-snnxc   1/1       Running   0          24m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running   2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running   2          5h        172.17.0.5    localhost   queryrouter

Fine as well. To complete this little series about EDB containers in MiniShift/OpenShift, we will have a look at how we can add an EDB BART container to the setup, because backup and recovery are still missing :)

 

The post EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers appeared first on the dbi services blog.

Oracle and PwC Team to Support Finance Transformation for Insurers and Provide IFRS 17 Compliance Expertise

Oracle Press Releases - Tue, 2018-05-29 07:00
Press Release
Oracle and PwC Team to Support Finance Transformation for Insurers and Provide IFRS 17 Compliance Expertise
Insurers to More Easily Navigate Adoption of New Accounting Standard, Modernize Decision Making and Deploy Timely and Insightful Reporting

Redwood Shores, Calif.—May 29, 2018

Oracle Financial Services Analytical Applications (OFSAA) and PwC, an Oracle PartnerNetwork (OPN) Platinum and Cloud Elite partner, announced today a global collaboration to provide insurance clients with clear visibility into near-term and ongoing compliance with the International Accounting Standards Board’s (IASB) new IFRS 17 standard, which insurers must meet by 2021, and the pending GAAP Long Duration Contract Standard. Together, Oracle and PwC are facilitating risk and finance transformation and insurers’ adoption of IFRS 17.

Oracle provides the data management platform to enable finance transformation and analytical applications for IFRS 17 with pre-built business rules, valuations, contractual service margin calculations and disclosures. The data management platform and its associated tools provide prebuilt integration with the accounting and financial close process, facilitate integration with actuarial systems, and create integrated, auditable results for internal and external reporting, with auditability and lineage.

PwC acts as Oracle’s strategic advisor and provides consulting and implementation services and accelerator tools that deliver tailored enhancements. Insurers will benefit from improved decision making, modernized actuarial analytics, and more timely and insightful reporting related to compliance needs.

“Now more than ever, insurers have to meet an array of regulatory and compliance requirements including new financial standards, such as IFRS 17,” said Sonny Singh, senior vice president and general manager, Financial Services Global Business Unit, Oracle. “To do so, they need a comprehensive technological solution to properly comply with these increasingly complex requirements and reduce any adverse impact to their business. Oracle’s award-winning risk and compliance platform, which is purpose-built for financial services institutions, combines with PwC’s experience in advisory services to help provide insurers the necessary capabilities to address these stringent requirements.”

“PwC is a leading advisor to the insurance industry and plays a major role in helping insurers transform and modernize their businesses while meeting new IFRS 17 requirements,” said Alex Bertolotti, PwC IFRS 17 lead partner. “Together with Oracle, which has a strong technology position in the financial services industry, our extensive market experience enables us to provide advisory and software solutions for the insurance industry, and bring strong market experience in helping our insurance clients meet their specific business challenges.”

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
Judi.palmer@oracle.com
Dario Cziráky
PwC - UK
+44(0)7702699200
Dario.Cziraky@pwc.com
Yeon Lee
PwC - US
+1 646-471-9082
Yeon.Lee@pwc.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

About PwC

At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 158 countries with more than 236,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell us what matters to you by visiting us at www.pwc.com. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details.


Index Bouncy Scan 2

Jonathan Lewis - Tue, 2018-05-29 06:27

I wrote a note some time last year about taking advantage of the “index range scan (min/max)” operation in a PL/SQL loop to find the small number of distinct values in a large single-column index efficiently (for example an index that was not very efficient but existed to avoid the “foreign key locking” problem). The resulting comments included pointers to other articles that showed pure SQL solutions to the same problem using recursive CTEs (“with” subqueries) from Markus Winand and Sayan Malakshinov; both writers also show examples of extending the technique to cover more cases than the simple list of distinct values.

The topic came up again on the ODC (OTN) database forum a couple of days ago; one of the replies linked back to my original posting, another gave the recursive solution for a single-column index, so I ended up seeing the following question twice, once as a comment on my blog and once in the forum: “Can you extend this method to a two-column index? What about an N-column index?”

Here’s a walk-through of working out one possible solution for the two-column requirement – how to find all the distinct combinations for the first two columns of a very large index without having to scan and aggregate the whole index. We start with a suitable table and index.


rem
rem     Script:         bouncy_index.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum-1,3)                 val1,
        mod(rownum-1,10)                val2,
        lpad('x',100,'x')               padding
from
        generator       v1
order by
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(val1, val2, id);

I’ve created a table with 3 values for val1, 10 values for val2, with a total of 30 combinations. The addition of the primary key starting with (val1, val2) is just a lazy way to ensure that I have a suitable index AND val1 and val2 are both declared not null.

With this data my first step will be to demonstrate the recursive CTE (“with” subquery) used by Andrew Sayer in the ODC posting to get the distinct values for val1 using three “index range scan (min/max)” probes. I’ve included the in-memory execution plan with rowsource execution stats to show that this does a minimal amount of work.

The results in this note come from 12.2.0.1:


set serveroutput off
alter session set statistics_level = all;

with bouncy (val1)
as (
        select  min(val1) val1
        from    t1
        union all
        select  (select min(t1.val1) val1 from t1 where t1.val1 > bouncy.val1) val1
        from    bouncy
        where   bouncy.val1 is not null
    )
select  *
from    bouncy
where   bouncy.val1 is not null
order by
        val1
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |      1 |        |    19 (100)|      3 |00:00:00.01 |       7 |      4 |       |       |          |
|   1 |  SORT ORDER BY                             |       |      1 |      2 |    19   (6)|      3 |00:00:00.01 |       7 |      4 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |       |      1 |      2 |    18   (0)|      3 |00:00:00.01 |       7 |      4 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |      4 |00:00:00.01 |       7 |      4 |  1024 |  1024 |          |
|   4 |     SORT AGGREGATE                         |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |      1 |       |       |          |
|   5 |      INDEX FULL SCAN (MIN/MAX)             | T1_PK |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       2 |      1 |       |       |          |
|   6 |     SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       5 |      3 |       |       |          |
|   7 |      FIRST ROW                             |       |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |      3 |       |       |          |
|*  8 |       INDEX RANGE SCAN (MIN/MAX)           | T1_PK |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |      3 |       |       |          |
|   9 |     RECURSIVE WITH PUMP                    |       |      4 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("BOUNCY"."VAL1" IS NOT NULL)
   8 - access("T1"."VAL1">:B1)

As you can see, I’ve done an “index full scan (min/max)” as the first step of the recursive query, visiting just two buffered blocks (the index leaf-block count is 27, roughly 9 per value of val1, so Oracle is clearly doing an efficient access for that value; it’s not really a “full” scan). We then see 3 “index range scan (min/max)” probes at roughly 2 buffer visits each to collect the remaining values. (There’s probably a small saving in buffer gets due to the pinning that takes place.)
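The probing strategy the CTE implements can be simulated outside the database. In this illustrative Python sketch a sorted list stands in for the index and bisect stands in for the “index range scan (min/max)” probe, so the cost is one probe per distinct value rather than a scan of every entry:

```python
from bisect import bisect_right

def bouncy_distinct(index_keys):
    """index_keys: a sorted list standing in for the index.
    Returns the distinct values using one probe per value found,
    instead of scanning and aggregating every entry."""
    result = []
    n = len(index_keys)
    if n == 0:
        return result
    current = index_keys[0]                 # "index full scan (min/max)": smallest key
    while True:
        result.append(current)
        nxt = bisect_right(index_keys, current)   # first entry > current
        if nxt == n:
            break
        current = index_keys[nxt]           # "index range scan (min/max)" probe
    return result
```

For 10,000 entries with 3 distinct values this does 3 probes (each O(log n)), which mirrors the 3 min/max range scans in the plan above.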

So we can get the val1 values very easily and efficiently with this recursive CTE technique. Let's write some code that uses the same technique to find the val2 values for each possible val1 value in turn:

with bounce2 (val1, val2)
as (
        select val1, val2 from (
                select  0 val1, 0 val2 from dual
                union all
                select 1,0 from dual
                union all
                select 2,0 from dual
        )
        union all
        select  bounce2.val1, (select min(t1.val2) val2 from t1 where t1.val1 = bounce2.val1 and t1.val2 > bounce2.val2) val2
        from    bounce2
        where   bounce2.val2 is not null
--      and     bounce2.val1 is not null
)
select * from bounce2
where
        bounce2.val2 is not null
and     bounce2.val1 is not null        -- > redundant predicate
order by
        val1, val2
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |      1 |        |    27 (100)|     30 |00:00:00.01 |      32 |     24 |       |       |          |
|   1 |  SORT ORDER BY                             |       |      1 |      6 |    27   (4)|     30 |00:00:00.01 |      32 |     24 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |       |      1 |      6 |    26   (0)|     30 |00:00:00.01 |      32 |     24 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |     33 |00:00:00.01 |      32 |     24 |  1024 |  1024 |          |
|   4 |     VIEW                                   |       |      1 |      3 |     6   (0)|      3 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      UNION-ALL                             |       |      1 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   7 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   8 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   9 |     SORT AGGREGATE                         |       |     30 |      1 |            |     30 |00:00:00.01 |      32 |     24 |       |       |          |
|  10 |      FIRST ROW                             |       |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|* 11 |       INDEX RANGE SCAN (MIN/MAX)           | T1_PK |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|  12 |     RECURSIVE WITH PUMP                    |       |     11 |        |            |     30 |00:00:00.01 |       0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."VAL2" IS NOT NULL AND "BOUNCE2"."VAL1" IS NOT NULL))
  11 - access("T1"."VAL1"=:B1 AND "T1"."VAL2">:B2)


In this example of the code the second half of the CTE looks remarkably similar to the previous statement – except I now have a two-column CTE and I’ve included an equality predicate against val1 based on the first of the two columns. In the first half of the code I’ve cheated (as a temporary measure) and supplied three rows of data which list the three distinct values of val1 with their associated minimum values for val2.

The execution plan shows that I’ve done 30 “index range scan (min/max)” probes of the index with 32 buffer visits. And that’s exactly the right number of probes to return my result set. So if I can manage to generate the starting values efficiently I can execute the whole query efficiently. Let’s find a way of changing that “union all on dual” fudge into a generic statement. Let’s replace it with a recursive CTE:


with bounce1(val1, val2) as (
        select val1, val2 
        from    (
                select
                        /*+ index(t1) */
                        val1, val2,
                        row_number() over(order by val1, val2) rn
                from    t1
        )
        where
                rn = 1
        union all
        select
                (select min(t1.val1) val1 from t1 where t1.val1 > bounce1.val1) val1,
                (select min(t1.val2) val2 from t1 where t1.val1 > bounce1.val1 and rownum = 1) val2
        from    bounce1
        where   bounce1.val1 is not null
),
bounce2 (val1, val2)
as (
        select  val1, val2 
        from    bounce1
--      where   bounce1.val1 is not null
        union all
        select  bounce2.val1, (select min(t1.val2) val2 from t1 where t1.val1 = bounce2.val1 and t1.val2 > bounce2.val2) val2
        from    bounce2
        where   bounce2.val2 is not null
--      and     bounce2.val1 is not null
)
select * from bounce2
where
        bounce2.val2 is not null
and     bounce2.val1 is not null        -- > redundant predicate
order by
        val1, val2
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |       |      1 |        |   189 (100)|     30 |00:00:00.01 |      45 |       |       |          |
|   1 |  SORT ORDER BY                               |       |      1 |      4 |   189   (2)|     30 |00:00:00.01 |      45 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |       |      1 |      4 |   188   (2)|     30 |00:00:00.01 |      45 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |       |      1 |        |            |     34 |00:00:00.01 |      45 |  1024 |  1024 |          |
|   4 |     VIEW                                     |       |      1 |      2 |    87   (2)|      4 |00:00:00.01 |      13 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |      4 |00:00:00.01 |      13 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |       |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |       |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |
|   9 |       SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       5 |       |       |          |
|  10 |        FIRST ROW                             |       |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |       |       |          |
|* 11 |         INDEX RANGE SCAN (MIN/MAX)           | T1_PK |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |       |       |          |
|  12 |       SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       6 |       |       |          |
|* 13 |        COUNT STOPKEY                         |       |      3 |        |            |      2 |00:00:00.01 |       6 |       |       |          |
|* 14 |         INDEX RANGE SCAN                     | T1_PK |      3 |    500 |     2   (0)|      2 |00:00:00.01 |       6 |       |       |          |
|  15 |       RECURSIVE WITH PUMP                    |       |      4 |        |            |      3 |00:00:00.01 |       0 |       |       |          |
|  16 |     SORT AGGREGATE                           |       |     30 |      1 |            |     30 |00:00:00.01 |      32 |       |       |          |
|  17 |      FIRST ROW                               |       |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |       |       |          |
|* 18 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |       |       |          |
|  19 |     RECURSIVE WITH PUMP                      |       |     11 |        |            |     30 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."VAL2" IS NOT NULL AND "BOUNCE2"."VAL1" IS NOT NULL))
   6 - filter("RN"=1)
   7 - filter(ROW_NUMBER() OVER ( ORDER BY "VAL1","VAL2")<=1)
  11 - access("T1"."VAL1">:B1)
  13 - filter(ROWNUM=1)
  14 - access("T1"."VAL1">:B1)
  18 - access("T1"."VAL1"=:B1 AND "T1"."VAL2">:B2)


Again we see 30 probes using “index range scan (min/max)” with 32 buffer gets to get 30 rows; plus a further 13 buffer gets to generate the three driving rows. The 13 buffer gets break down to: 2 to get the minimum (val1, val2) combination using an “index full scan (min/max)”, then 5 for the probes to get the three minimum values for val1, and 6 for the probes to get the three corresponding minimum values of val2.
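The same simulation idea extends to the two-column case: bounce along val1, and for each val1 bounce along val2, exactly as the pair of recursive CTEs does. An illustrative Python sketch (sorted two-element tuples stand in for the leading columns of the index, bisect for the min/max probes):

```python
from bisect import bisect_right

def bouncy_distinct_2col(index_keys):
    """index_keys: sorted list of (val1, val2) tuples, standing in
    for the two leading columns of the index.  Returns every distinct
    (val1, val2) combination using one probe per combination."""
    result = []
    n = len(index_keys)
    i = 0
    while i < n:
        v1, v2 = index_keys[i]        # minimum (val1, val2) for this val1
        while True:
            result.append((v1, v2))
            j = bisect_right(index_keys, (v1, v2))  # first key > (v1, v2)
            if j == n or index_keys[j][0] != v1:
                i = j                 # bounce to the next val1 (or stop)
                break
            v2 = index_keys[j][1]     # next val2 under the same val1
    return result
```

For the 3 x 10 data set above this performs exactly 30 inner probes plus the bounces between val1 values, matching the probe counts reported in the plan.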

You’ll notice that I’ve got various “is not null” predicates scattered throughout the code. In some cases this is to stop Oracle from running into an infinite loop and reporting “ORA-32044: cycle detected while executing recursive WITH query”. That error would occur because an inline scalar subquery of the form “(select min()…)” returns a null if no data is found, and the next cycle of the recursive descent would take that null as its input, hence the infinite recursion. In other cases the “is not null” predicates are simply my default pattern for recursive CTEs, and some of them could probably be removed with no change in meaning (or workload).

The /*+ index() */ hint in the starting point for bounce1 was necessary to avoid an “index fast full scan” in 12.2; but that was purely a case of the statistics – number of distinct values, leaf_block count, etc – making the optimizer pick an option that was appropriate for this tiny data set, but not appropriate for the demonstration.  In fact this looks like the side effect of two defects in the 12.1 optimizer code, of which only one has been fixed in 12.2.

Optimizer Limitations

Here’s an extract from the execution plan for the final query with an /*+ index(t1) */ hint in place. The extract is identical for 12.1.0.2 and 12.2.0.1:

--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |       |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |

You’ll notice the Cost at operation 8 is appropriate for a real (i.e. all leaf blocks) full scan of the index. (The leaf_block value was 27 as I mentioned earlier on). You’ll also see that the OMem (PGA requirement for optimum workarea operation) figure is consistent with Oracle processing 10,000 rows in the index. Since the optimizer managed to work out that it could do a full scan with nosort and stopkey it looks a little surprising that the algorithms didn’t manage to make some allowance for the limited access that would occur. (I’d view this as a current limitation, rather than a bug, though).

Now compare the equivalent extracts when we hint an index fast full scan 12.1.0.2 first, then 12.2.0.1:

12.1.0.2
--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |    39   (8)|      1 |00:00:00.03 |      32 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |       |      1 |  10000 |    39   (8)|      1 |00:00:00.03 |      32 |  2048 |  2048 | 2048  (0)|
|   8 |         INDEX FAST FULL SCAN                 | T1_PK |      1 |  10000 |     5   (0)|  10000 |00:00:00.01 |      32 |       |       |          |

12.2.0.1
--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |     7  (29)|      1 |00:00:00.01 |      34 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |       |      1 |  10000 |     7  (29)|      1 |00:00:00.01 |      34 |  2048 |  2048 | 2048  (0)|
|   8 |         INDEX FAST FULL SCAN                 | T1_PK |      1 |  10000 |     5   (0)|  10000 |00:00:00.01 |      34 |       |       |          |

In both cases the cost of the index fast full scan is the same – and much cheaper; but in 12.1.0.2 the cost of the query looks as if it is allowing for sorting (and spilling) the entire 10,000 rows returned from the index fast full scan (even though the OMem indicates otherwise), while the cost in 12.2.0.1 looks as if it recognises that it just has to do a running comparison through the data set as it returns, keeping only the current minimum in memory at any one moment. This clearly matches our expectations of how Oracle ought to behave, which is why I'd call this a bug in 12.1, fixed by 12.2.

The dramatic change in cost of operation 7 on the upgrade explains the change in plan and the necessity for the /*+ index(t1) */ hint – but if the “first row” predicate were also reflected in the costing then the cost of the “stopkey” index full scan would drop to 2 (probably) and the original 12.1 path would be re-appear.

Footnote

I don’t think there’s a lot of scope for improving the efficiency of this query for getting the (relatively) small number of distinct combinations from the first two columns of a very large index – but there are some very clever SQL bunnies on the ODC forum, so I won’t be surprised if someone comes up with a better solution.

4 Reasons Why You Need Interactive Prototypes

Nilesh Jethwa - Mon, 2018-05-28 23:08

Prototyping plays a pivotal role in the development of a good application. This is the final stage for you to optimize how each element flows and interacts within your application. Prototypes don’t necessarily have to be visually appealing, but they … Continue reading →

Credit: MockupTiger Wireframes

11g Apex Email not working from Sqlplus

Tom Kyte - Mon, 2018-05-28 21:46
Hi Tom, I can successfully send email from Apex application using apex_mail.send. When I try to use the same from sqlplus it gives the following Error BEGIN apex_mail.send (p_to => 'test@test.com', p_fr...
Categories: DBA Blogs

Data Management with Oracle Utilities products

Anthony Shorten - Mon, 2018-05-28 19:46

One of the most common questions I receive is about how to manage data volumes in the Oracle Utilities products. The Oracle Utilities products are designed to scale no matter how much data is present in the database but obviously storage costs and management of large amounts of data is not optimal.

A few years ago we adopted the Information Lifecycle Management (ILM) capabilities of the Oracle Database as well as developed a unique spin on the management of data. Like biological life, data has a lifecycle. It is born when it is created, it has an active life while the business uses or manipulates it, it goes into retirement but is still accessible and eventually it dies when it is physically removed from the database. The length of that lifecycle will vary from data type to data type, implementation to implementation. The length of the life is dictated by its relevance to the business, company policies and even legal or government legislation.

The data management (ILM) capabilities of Oracle Utilities take this into account:

  • Data Retention Configuration. The business configures how long the active life of the individual data types are for their business. This defines what is called the Active Period. This is when the data needs to be in the database and accessible to the business for update and active use in their business.
  • ILM Eligibility Rules. Once the data retention period is reached, before the data can enter retirement, the system needs to know that anything outstanding, from a business perspective, has been completed. This is the major difference from most data management approaches. I hear DBA's saying that they would rather the data were simply deleted after a specific period. Whilst that would cover most situations, it would not cover the situation where the business is not finished with the data. Let's explain with an example. In CCB customers are billed, and you can also record complaints against a bill if there is a dispute. Depending on the business rules and legal processes, an old bill may be in dispute. You should not remove anything related to that bill until the complaint is resolved, regardless of its age. Legal issues can be drawn out for lots of reasons. If you used a retention rule only, then the data used in the complaint would potentially be lost. In the same situation, the base ILM Eligibility rules would detect something outstanding and bypass the applicable records. Remember, these rules are protecting the business and ensuring that the ILM solution adheres to the complex rules of the business.
  • ILM Features in the Database. Oracle, like a lot of vendors, introduced ILM features into the database to help, what I like to call storage manage the data. This provides a set of flexible options and features to allow database administrators a full range of possibilities for their data management needs. Here are the capabilities (refer to the Database Administration Guide for details of each capability):
    • Partitioning. One of the most common capabilities is using the Partitioning option. This allows a large table to be split up, storage-wise, into parts or partitions using a partitioned tablespace. This breaks the table into manageable pieces and allows the database administrator to optimize the storage using hardware and/or software options. Some hardware vendors have inbuilt ILM facilities, and this option allows you to target specific data partitions at different hardware capabilities or just split the data into tranches (for example, to separate the retirement stages of data). Partitioning is also a valid option if you want to use tiered hardware storage to save money. In this scenario you would put the less used data on cheaper storage (if you have it) to save costs. For partitioning advice, refer to the product DBA Guides, which outline the most common partitioning schemes used by customers.
    • Advanced Compression. One of the popular options is the Advanced Compression option. This allows administrators to set compression rules against the database based upon data usage. The compression is transparent to the product, and compressed data can be co-located with uncompressed data with no special processing needed by the code. The compression covers a wide range of techniques, including CLOB compression as well as data compression. Customers using Oracle Exadata can also use Hybrid Columnar Compression (HCC) for hardware-assisted compression for greater flexibility.
    • Heat Map. One of the features added to Oracle Database 12c and above to help DBA's is the Heat Map. This is a facility where the database will track the usage patterns of the data in your database and give you feedback on the activity of the individual rows in the database. This is an important tool as it helps the DBA identify which data is actually being used by the business and is a useful tool for determining what is important to optimize. It is even useful in the active period to determine which data can be safely compressed as it has reduced update activity against it. It is a useful tool and is part of the autonomous capabilities of the database.
    • Automatic Data Optimization. Automatic Data Optimization (ADO) is a feature of the database that allows database administrators to implement rules to manage storage based upon various metrics, including Heat Map. For example, the DBA can put in a rule that says if data in a specific table is not touched for X months then it should be compressed. The rules cover compression, partition movement, storage features, etc., and can be triggered by Heat Map or any other valid metric (even SQL procedure code can be used).
    • Transportable Tablespaces. One of the most expensive things you can do in the database is issue a DELETE statement. To avoid this in bulk in any ILM-based solution, Oracle offers the ability to use the Partitioning option and create a virtual trash bin via a transportable tablespace. Using ADO or other capabilities you can move data into this tablespace and then, using basic commands, switch off the tablespace to do bulk removal quickly. An added advantage is that you can archive that tablespace and reconnect it later if needed.
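To make the retention-versus-eligibility distinction above concrete, here is a tiny hedged sketch in Python (the field names and the two-year active period are illustrative assumptions, not product configuration): a record must be past retention AND have nothing outstanding before it can leave the active period.

```python
from datetime import date, timedelta

ACTIVE_PERIOD = timedelta(days=2 * 365)   # hypothetical two-year retention

def ilm_eligible(bill, today):
    """Eligible only when past retention AND nothing business-related
    (e.g. an open complaint) is still outstanding."""
    past_retention = today - bill["created"] > ACTIVE_PERIOD
    return past_retention and not bill["open_complaint"]

bills = [
    {"id": 1, "created": date(2014, 1, 1), "open_complaint": False},
    {"id": 2, "created": date(2014, 1, 1), "open_complaint": True},   # in dispute
    {"id": 3, "created": date(2018, 1, 1), "open_complaint": False},  # still active
]
eligible = [b["id"] for b in bills if ilm_eligible(b, date(2018, 5, 28))]
```

A retention-only rule would have swept up bill 2 along with bill 1; the eligibility check bypasses it while the dispute is open.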

The Oracle Utilities ILM solution is comprehensive and flexible, combining an aspect for the business to define their retention and eligibility rules with the various ILM capabilities of the database, which let the database administrator factor in their individual site's hardware and support policies. In most cases it is not as simple as just removing data, and the Oracle Utilities ILM solution reduces the risk of managing your data, taking into account both your business and storage needs.

For more information about the Oracle Utilities ILM solution, refer to the ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support and read the product DBA Guides for product specific advice.

SQL Saturday Paris 2018 – Pré-conférence SQL Server Linux, Docker et Azure

Yann Neuhaus - Mon, 2018-05-28 14:41

The next SQL Saturday in Paris is coming soon and the agenda has been published by the @GUSS on Twitter.

[Image: SQL Saturday Paris 2018 agenda]

This year brings a nice mix of French-speaking and international speakers, a plus for attendees…

A quick look at the agenda (focusing on the SQL Server engine part) also shows that a good portion of the red track covers Docker and automation. Interesting :)

Of course, sessions on performance and architecture remain as popular as ever and you will still be able to attend them :) I'm certainly not the one who will complain about that :)

Pre-conference: SQL Server beyond Windows - installation, deployment and management

[Image: sqlsatparis2018]

As you already know, Microsoft has opened up to open source since SQL Server 2017 and has also strengthened its Azure offering, notably with managed instances. As a result, the ecosystem around SQL Server has grown considerably. This year I will have the opportunity to share some experiences on the subject during a pre-conference taking place on Friday, June 6, 2018.

Together we will cover topics well known to every data administrator, such as installation, configuration, monitoring, performance and high availability, in environments other than Windows.

To register, it's over here.

Looking forward to seeing you there!

The post SQL Saturday Paris 2018 – Pré-conférence SQL Server Linux, Docker et Azure appeared first on Blog dbi services.

EDB Failover Manager in EDB containers in Minishift/OpenShift

Yann Neuhaus - Mon, 2018-05-28 12:47

In the last three posts we deployed an EDB database container and two pgpool instances, scaled that up to include a read only replica and finally customized the PostgreSQL instance with ConfigMaps. In this post will we look at how the EDB Failover Manager is configured in the database containers.

These are the pods currently running in my environment, two pgpool containers and two PostgreSQL containers:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-gk8dt   1/1       Running   1          9d
edb-as10-0-1-n5z4w   1/1       Running   0          3m
edb-pgpool-1-h5psk   1/1       Running   1          9d
edb-pgpool-1-tq95s   1/1       Running   1          9d

The first one (edb-as10-0-1-gk8dt) is the primary instance and EFM should be running there as well:

dwe@dwe:~$ oc rsh edb-as10-0-1-gk8dt
sh-4.2$ psql -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 f
(1 row)

sh-4.2$ ps -ef | grep efm
edbuser    202     1  0 08:45 ?        00:00:04 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre/bin/java -cp /usr/edb/efm-3.0/lib/EFM-3.0.0.jar -Xmx32m com.enterprisedb.efm.main.ServiceCommand __int_start /etc/edb/efm-3.0/edb.properties
sh-4.2$ 

Looking at the configuration there are some interesting points:

sh-4.2$ cat /etc/edb/efm-3.0/edb.properties | egrep -v "^$|^#"
db.user=edbuser
db.password.encrypted=ca78865e0f85d15edc6c51b2e5c0a58f
db.port=5444
db.database=edb
db.service.owner=edbuser
db.service.name=
db.bin=/usr/edb/as10/bin
db.recovery.conf.dir=/edbvolume/edb/edb-as10-0-1-gk8dt/pgdata
jdbc.sslmode=disable
user.email=none@none.com
script.notification=
bind.address=172.17.0.6:5430
admin.port=5431
is.witness=false
local.period=10
local.timeout=60
local.timeout.final=10
remote.timeout=10
node.timeout=10
stop.isolated.master=false
pingServerIp=8.8.8.8
pingServerCommand=/bin/ping -q -c3 -w5
auto.allow.hosts=false
db.reuse.connection.count=0
auto.failover=true
auto.reconfigure=true
promotable=true
minimum.standbys=0
recovery.check.period=2
auto.resume.period=0
virtualIp=
virtualIp.interface=
virtualIp.prefix=
script.fence=
script.post.promotion=/var/efm/post_promotion_steps.sh %f
script.resumed=
script.db.failure=/var/efm/stopEFM
script.master.isolated=
script.remote.pre.promotion=
script.remote.post.promotion=
script.custom.monitor=
custom.monitor.interval=
custom.monitor.timeout=
custom.monitor.safe.mode=
sudo.command=sudo
sudo.user.command=sudo -u %u
log.dir=/var/log/efm-3.0
jgroups.loglevel=
efm.loglevel=
jvm.options=-Xmx32m
kubernetes.port.range=1
kubernetes.namespace=myproject
kubernetes.pod.labels=cluster=edb
kubernetes.master.host=172.30.0.1
kubernetes.master.httpsPort=443
create.database.master=/var/lib/edb/bin/createmasterdb.sh
create.database.standby=/var/lib/edb/bin/createstandbydb.sh
kubernetes.is.init.master=true

The last eight lines are not there when you do a manual EFM installation, so this is something specific to the container deployment. Apparently it is EFM that creates the master and the replica instance(s). The rest is more or less the default setup. The cluster status should therefore be fine:
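Since the file uses plain java .properties syntax, the container-specific keys are easy to pull out programmatically. A small sketch (the prefix list is my guess at what separates the container deployment from a manual install):

```python
def read_properties(path):
    """Parse a java .properties-style file (key=value lines, '#' comments)."""
    props = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# e.g. on the pod:
# props = read_properties("/etc/edb/efm-3.0/edb.properties")
# container_only = [k for k in props
#                   if k.startswith(("kubernetes.", "create.database."))]
```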

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      172.17.0.6           UP     UP        
	Standby     172.17.0.8           UP     UP        

Allowed node host list:
	172.17.0.6

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.8

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.6           0/5000140        
	Standby     172.17.0.8           0/5000140        

	Standby database(s) in sync with master. It is safe to promote.

We should be able to do a switchover:

sh-4.2$ /usr/edb/efm-3.0/bin/efm promote edb -switchover    
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.
sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.8           UP     UP        

Allowed node host list:
	172.17.0.6

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.8           0/60001A8        
	Standby     172.17.0.6           0/60001A8        

	Standby database(s) in sync with master. It is safe to promote.

Seems it worked, so the instances should have switched roles and the current instance must now be in recovery:

sh-4.2$ psql -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 t
(1 row)

Fine, this works as expected. So much for the first look at EFM inside the containers. It is not the same setup you'll find when you install EFM on your own, and EFM is doing more here than it usually does. A lot of the logic lives in the scripts provided by EDB:

sh-4.2$ ls -la /var/lib/edb/bin/
total 72
drwxrwx---  2 enterprisedb root  4096 May 11 20:40 .
drwxrwx--- 24 enterprisedb root  4096 May 28 18:03 ..
-rwxrwx---  1 enterprisedb root  1907 Feb 17 17:14 cleanup.sh
-rwxrwx---  1 enterprisedb root  4219 May 10 22:11 createmasterdb.sh
-rwxrwx---  1 enterprisedb root  2582 May 11 03:30 createstandbydb.sh
-rwxrwx---  1 enterprisedb root  1491 May 10 22:12 dbcommon.sh
-rwxrwx---  1 enterprisedb root 10187 May 10 22:28 dbfunctions.sh
-rwxrwx---  1 enterprisedb root   621 May 10 22:15 dbsettings.sh
-rwxrwx---  1 enterprisedb root  5778 Apr 26 22:55 helperfunctions.sh
-rwxrwx---  1 enterprisedb root    33 Feb 18 03:43 killPgAgent
-rwxrwx---  1 enterprisedb root  5431 May 10 22:29 launcher.sh
-rwxrwx---  1 enterprisedb root   179 May 10 22:12 startPgAgent
-rwxrwx---  1 enterprisedb root   504 May 11 12:32 startPgPool

These scripts are referenced in several places in the EFM configuration and contain all the logic for initializing the cluster, starting it up, stopping and restarting it, setting up replication, and so on. To understand what is really going on one needs to understand the scripts (which is out of the scope of this post).

Cet article EDB Failover Manager in EDB containers in Minishift/OpenShift est apparu en premier sur Blog dbi services.

Big Data Introduction - Workshop

Abhinav Agarwal - Mon, 2018-05-28 12:30
Our focus was clear - this was a level 101 class, for IT professionals in Bangalore who had heard of Big Data, were interested in Big Data, but were unsure how and where to dig their toe in the world of analytics and Big Data. A one-day workshop - with a mix of slides, white-boarding, case-study, a small game, and a mini-project - we felt, was the ideal vehicle for getting people to wrap their minds around the fundamental concepts of Big Data.






On a pleasant Saturday morning in January, Prakash Kadham and I conducted a one-day workshop, "Introduction to Big Data & Analytics". As the name suggests, it was a breadth-oriented introduction to the world of Big Data and the landscape of technologies, tools, platforms, distributions, and business use-cases in the brave new world of big data.

We started out by talking about the need for analytics in general, the kinds of questions analytics - sometimes also known as business intelligence - is supposed to answer, and what most analytics platforms looked like at the beginning of the decade. We then moved to what changed this decade, and the growth of data volumes, the velocity of data generation, and the increasing variety of data that rendered traditional means of data ingestion and analysis inadequate.

A fun game with cards turned out to be an ideal way to introduce the participants to the concepts behind MapReduce, the fundamental paradigm behind the processing and ingestion of massive amounts of data. After all the slides and illustrations of MapReduce, we threw in a curve-ball to the participants by telling them that some companies, like Google, had started to move away from MapReduce since it was deemed unsuitable for data volumes greater than a petabyte!
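For readers who want to see the idea in code rather than cards, here is a toy word count showing the three phases - map, shuffle, and reduce - in plain Python (purely illustrative; a real MapReduce job runs these phases distributed across many machines):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit a (word, 1) pair for every word in one document
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: collapse each key's list of values into a final count
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big ideas", "data beats ideas"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
```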

The proliferation of Apache projects in almost every sphere of the Hadoop ecosystem meant that there are many, many choices for the big data engineer. Just on the subject of data ingestion, there is Apache Flume, Apache Sqoop, Apache Kafka, Apache Samza, Apache NiFi, and many others. Or take databases, where you have columnar, noSQL, document-oriented, and graph databases to choose from, each optimized for slightly different use-cases - HBase (the granddaddy of noSQL databases), Cassandra (which took birth at Facebook), MongoDB (most suited for documents), Neo4j (a graph database), and so on.

Working through a case-study helps bring theory closer to practice, and the participants got to work on just that - two case-studies, one in the retail segment and the other in healthcare. Coming off the slides and lectures, the participants dove into the case-studies with enthusiasm and high-decibel interactions among all the participants.

The day passed off fast enough and we ended the day with a small visualization exercise, using the popular tool, Tableau. At the end of the long but productive day, the participants had one last task to complete - fill out a feedback form, which contained six objective questions and three free-form ones. It was hugely gratifying that all but one filled out the questionnaire. After the group photo and the workshop was formally over, Prakash and I took a look at the survey questionnaire that the participants had filled out, and did a quick, back-of-the-envelope NPS (Net Promoter Score) calculation. We rechecked our calculations and found we had managed an NPS of 100!

The suggestions we received have been most useful, and we are now working to incorporate them into the workshop. Among the suggestions was one for us to hold a more advanced, Level 200 workshop. That remains our second goal!

Thank you to all the participants who took time out to spend an entire Saturday with us, for their active and enthusiastic participation, and to the valuable feedback they shared with us! A most encouraging start to 2018!

This post was first published on LinkedIn on Feb 5, 2018.
© 2018, Abhinav Agarwal.

Some tips for using Oracle Linux in Oracle Cloud

Wim Coekaerts - Mon, 2018-05-28 11:44

Creating an Oracle Linux instance in Oracle Cloud Infrastructure is easy. For the most part it is the same as creating your own image from the install media but we have done a few extra things that are very useful and you should know about :)

- with recent images, the yum repo file points to a local OCI mirror of yum.oracle.com (plus a few repos that are only available on linux.oracle.com for subscribers - but all OCI instances are technically subscribers; remember, Oracle Linux support is included with OCI instances at no additional cost, with no extra button to click or anything)

So downloading RPMs or using yum on an OCI instance is very, very fast and it does not incur any network traffic to the outside world.

- a number of repos are enabled by default - ol7_UEKR4, _developer, _developer_EPEL, _ksplice, _latest, _optional_latest, _addons, _software_collections. This gives you direct access to a ton of Oracle Linux related packages out of the box. But consider looking at a number of other repos that we have not enabled by default. All you have to do is change enabled=0 to enabled=1 in /etc/yum.repos.d/public-yum-ol7.repo. Example: ol7_preview. Alternatively you can enable a repo from the yum command line: yum --enablerepo=ol7_preview <option>

The reason we don't enable these by default is that some of the packages in these channels are newer, in some cases pre-releases or developer versions, and we want to default to the "GA" versions - but you are more than welcome to add these other packages of course. For instance, by default docker-engine gets you 17.06, but if you want 17.12, that's in the ol7_preview channel. So if you're looking for something new, don't forget to look there before manually downloading stuff from a random 3rd party site. We might already have it available.
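Flipping enabled=0 to enabled=1 is just an edit to an INI-style file, so besides vi or sed you could script it; a hedged sketch (the path and repo id come from the text above, the function itself is mine):

```python
import configparser

def enable_repo(repo_file, repo_id):
    """Set enabled=1 for one repo section in an INI-style yum repo file."""
    cfg = configparser.ConfigParser()
    cfg.read(repo_file)
    cfg[repo_id]["enabled"] = "1"
    with open(repo_file, "w") as out:
        cfg.write(out)

# e.g. enable_repo("/etc/yum.repos.d/public-yum-ol7.repo", "ol7_preview")
```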

Other channels include nodejs8, gluster312, php72, MySQL8, developer_UEKR5 etc... Take a look at the repo file. You can always browse the repo content on https://yum.oracle.com. And if you want to see what's added on a regular basis, go check out the yum.oracle.com what's new page.  Anyway having EPEL and software collections gives you quick access to a very wide range of packages. Again, no need to download a yum repo rpm or download packages with wget or what not. Easy to create a development environment and deployment environment.

- some tools are installed by default. For instance, an OCI OL instance comes with oci-utils pre-installed. oci-utils contains a number of command-line tools that make it very easy to work with attached block volumes, handle instance metadata, find your public IP easily, and configure your secondary VNICs. I wrote a blog entry about this a few months ago.

- easy access to OCI toolkits:

Want to use terraform? No problem, no need to download stuff, just get it from our yum repo. # yum install terraform terraform-provider-oci  We are typically just a few days behind the tagged releases of both terraform and the oci provider.

Want to use the OCI SDK and OCI CLI? # yum install python-oci-cli python-oci-sdk and done. Same as with terraform, these packages are updated at most a few days after the github projects have release tags. No need to mess with updates or adding dependency RPMs. We take care of it and update them for you.

Using Postman For REST API on Oracle Cloud Infrastructure

Michael Dinh - Mon, 2018-05-28 10:21

Just completed quick training for Oracle Cloud Infrastructure (OCI)

REST API can be used on OCI for automation and presentation.

I find the presentation of JSON results from the CLI on the server UGLY.

Finally, I found POSTMAN, which does a very nice job of presentation.

POSTMAN has a results history, a command history, and a search function.

https://www.getpostman.com/apps


Why might Consistent Reads be increasing with every iteration of a select cursor being run by PL/SQL?

Tom Kyte - Mon, 2018-05-28 03:26
We have some PL/SQL that is running many sql statements, and taking an awfully long time to complete. (It's Oracle code, btw, in E-Business, not custom). The problem seems to be a couple of the statements, (run thousands of times each) w...
Categories: DBA Blogs

Full text indexes in Oracle XE

Tom Kyte - Mon, 2018-05-28 03:26
Hello. Is it possible to combine b-tree indices with full-text indices in Oracle 11 (Express)? Thanks
Categories: DBA Blogs

Need some Oracle query syntax to extract information

Tom Kyte - Mon, 2018-05-28 03:26
Hi all, this is my first discussion post. Please help me: I work on a supervision solution for Oracle databases, and I need two query syntaxes to extract information. The first: the query to show which Oracle queries consume the most CPU time ...
Categories: DBA Blogs

Is it possible - to show OS disk free space together with DBA_DATAFILES data ?

Tom Kyte - Mon, 2018-05-28 03:26
In some of my customer's databases the DBAs are using AUTOEXTENSIBLE datafiles, but with many datafiles sharing the same filesystem, in this sense: tablespace A, datafiles /u01/oradata/ts_A_file01.dbf autoextend unlimited tablespace B, da...
Categories: DBA Blogs

Exadata Vs RAC

Tom Kyte - Mon, 2018-05-28 03:26
Tom - My understanding of RAC (or grid computing) is that we use a cluster of not-so-expensive servers for higher availability. But Oracle is marketing Exadata (expensive servers, relatively speaking) for performance / higher availability, and so on. (if ...
Categories: DBA Blogs

Filtering LOBs

Jonathan Lewis - Mon, 2018-05-28 02:25

A two-part question about the FILTER operation appeared on the Oracle-L list server a couple of days ago. The first part was a fairly common question – one that’s often prompted by the way the optimizer used to behave in older versions of Oracle. Paraphrased, it was: “Why is the total cost of the query so high compared to the sum of its parts?”

Here’s the query, and the execution plan.

 INSERT INTO TEMP
  SELECT DISTINCT 'id',
    PHT.emplid
  FROM PHOTO PHT
  WHERE 1               =1
  AND PHT.PHOTO IS NOT NULL
  AND NOT EXISTS
    (SELECT 'x'
    FROM TEMP TMP
    WHERE PHT.EMPLID=TMP.EMPLID_SRCH
    AND TMP.OPRID  = 'id'
    )
  ;  

  
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          | 21210 |  3334K|  5802K  (2)| 00:03:47 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP     |       |       |            |          |
|*  2 |   FILTER                 |          |       |       |            |          |
|*  3 |    TABLE ACCESS FULL     | PHOTO    | 21211 |  3334K|   313   (1)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL     | TEMP     |     1 |    17 |   380   (2)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( NOT EXISTS (SELECT 0 FROM "TEMP" "TMP" WHERE
              "TMP"."EMPLID_SRCH"=:B1 AND "TMP"."OPRID"='id'))
   3 - filter("PHT"."PHOTO" IS NOT NULL)
   4 - filter("TMP"."EMPLID_SRCH"=:B1 AND "TMP"."OPRID"='id')

Note that the “not exists” subquery against temp runs as a filter subquery with a cost of 380 for the tablescan. Combine that with the cost of 313 for the driving tablescan of photo and you might wonder why the resulting cost isn’t something like 693 – and in some old versions of Oracle that’s probably how it would be reported.

Historically the optimizer has been very bad about producing a final cost when queries have included subqueries – whether as filter subqueries in the predicate section or as scalar subqueries in the select list. Sometimes the subquery cost would simply vanish from the final figure; sometimes it would be added just once to the final cost regardless of how many times the subquery might actually execute.

In this example the subquery against temp is a correlated subquery and might have to run once for every row in photo where the column photo was not null. At best it would have to run at least once for every distinct value of the photo.emplid column (the correlation column) found in those rows. In recent versions of Oracle the optimizer has tried to introduce some estimate of how many times the subquery would run as part of its calculation of the total cost. So (to a crude approximation) 5802K = 313 + N * 380. Unfortunately if we try to work backwards to N we find it would be about 15,267 which is about 72% of the 21,200 rows estimated as the result of the tablescan of photo – I haven’t tried to investigate the algorithms yet but presumably the optimizer makes some allowances somewhere for “self caching” as the subquery runs.
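That back-of-the-envelope calculation is easy to check. A minimal sketch of the arithmetic, taking the figures from the plan above and reading the "K" in the 5802K cost as thousands (an assumption about the display, not a documented unit):

```python
# Back-solving the optimizer's implied subquery execution count from the
# figures in the execution plan above.
total_cost = 5_802_000       # reported cost of the whole statement (5802K)
driving_scan_cost = 313      # full tablescan of PHOTO
subquery_cost = 380          # one execution of the filter subquery on TEMP
estimated_rows = 21_200      # approximate row estimate for the PHOTO scan

# total = driving scan + N * subquery cost  =>  N = (total - scan) / subquery
n = (total_cost - driving_scan_cost) / subquery_cost
print(int(n))                        # about 15,267 implied subquery executions
print(round(n / estimated_rows, 2))  # about 0.72 of the estimated rows
```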

The more interesting part of the question came when the OP decided to test the effect of getting rid of the subquery. Check the costs in the resulting plan:


  INSERT INTO TEMP
  SELECT DISTINCT 'id',
    PHT.emplid
  FROM PHOTO PHT
  WHERE 1               =1
  AND PHT.PHOTO IS NOT NULL;

  
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          | 21211 |  3334K|  3659   (1)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP     |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PHOTO    | 21211 |  3334K|  3659   (1)| 00:00:01 |
-------------------------------------------------------------------------------------
  
Predicate Information (identified by operation id):
---------------------------------------------------
     2 - filter("PHT"."PHOTO" IS NOT NULL)

Note how the cost of the tablescan of photo has gone up from 313 in the previous query to 3,659 in the simpler query! How can a tablescan that drives a subquery have a lower cost than the tablescan on its own? Bear in mind that in both cases the Cost attributed to the operation “Table Access Full” is purely about scanning the rows in the photo table and is (or should be) entirely disconnected from the cost and frequency of the subquery.

The clue is in the table definition. The column photo.photo is a BLOB.

Models

I think there are potentially two errors in the optimizer displayed by this example. The first is that it’s adding in a cost that it shouldn’t even be considering; the second is that it’s being inconsistent in the way that it’s deriving that cost.

To demonstrate what I think is happening, I built a variant of the OP’s example as follows:


rem
rem     Script:         optimizer_lob_costs.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem

create table photo (
        emplid          varchar2(11) not null,
        photo           clob,
        other_col       varchar2(1000)
)
lob (photo) 
store as
        photo_lob(
        disable storage in row 
        cache
        logging
)
;

create unique index ph_uk on photo(emplid);

insert /*+ append */ into photo
select
        lpad(2 * rownum,10,0),
        rpad('x',1000),
        rpad('x',1000)
from
        all_objects
where
        rownum <= 10000 -- > comment to avoid wordpress format issue
;

commit;

create table temp(
        oprid           varchar2(30),
        emplid_srch     varchar2(11)
)
;

insert /*+ append */ into temp
select
        'id',
        lpad(2 * rownum,10,0)
from
        all_objects
where
        rownum <= 1000 -- > comment to avoid wordpress format issue
;

commit;

execute dbms_stats.gather_table_stats(user,'photo',method_opt=>'for all columns size 1', cascade=>true)
execute dbms_stats.gather_table_stats(user,'temp', method_opt=>'for all columns size 1', cascade=>true)


I’ve changed the BLOB to a CLOB defined with storage in row disabled, and I’ve introduced a varchar2() column of the same size as the CLOB column. I’ve declared the correlating column not null and created a unique index on it. Here are the two queries I want to review – slightly simplified versions of the original:


explain plan for
insert into temp(emplid_srch)
select 
        distinct pht.emplid
from 
        photo pht
where 
        1 = 1
and  pht.photo is not null
-- and     pht.other_col is not null
and     not exists (
                select /*+ no_unnest */
                        null
                from 
                        temp tmp
                where 
                        pht.emplid=tmp.emplid_srch
        )
;  

select * from table(dbms_xplan.display);

explain plan for
insert into temp(emplid_srch)
select
        distinct pht.emplid
from    photo pht
where   1               =1
and  pht.photo is not null
-- and     pht.other_col is not nulL
;  

select * from table(dbms_xplan.display);

As you can see I’ve had to include a /*+ no_unnest */ hint in my SQL to get the FILTER operation to appear in the plan (the OP had the hidden parameter “_unnest_subquery” set to false); I’ve also allowed for two variants of each query, one referencing the CLOB column the other referencing the varchar2() column. The only results I’ll show are for the queries accessing the CLOB, and here are the plans first with, then without, the subquery. Check the cost of the tablescan of the photo table in the two cases:


----------------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |  9999 |   956K| 10458   (3)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP  |       |       |            |          |
|*  2 |   FILTER                 |       |       |       |            |          |
|*  3 |    TABLE ACCESS FULL     | PHOTO | 10000 |   957K|   216   (1)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL     | TEMP  |     1 |    11 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( NOT EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "TEMP" "TMP"
              WHERE "TMP"."EMPLID_SRCH"=:B1))
   3 - filter("PHT"."PHOTO" IS NOT NULL)
   4 - filter("TMP"."EMPLID_SRCH"=:B1)


----------------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       | 10000 |   957K|   285   (2)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEMP  |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PHOTO | 10000 |   957K|   285   (2)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("PHT"."PHOTO" IS NOT NULL)

With the subquery in place the tablescan of photo reports a cost of 216; in the absence of the subquery it reports a cost of 285, a difference of 69. Repeating the test but using the varchar2() column, the cost of the tablescan was 213 in both cases – suggesting that the variation was due to the column being a LOB.

With no further clues in the plan it looked like one of those rare occasions when I have to look at the 10053 (optimizer) trace file – and this is what I got from the 12.1.0.2 trace, looking at the section headed “SINGLE TABLE ACCESS PATH” for the photo table. First the base query without the subquery:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for PHOTO[PHT]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#2): PHOTO(LOB)
    AvgLen: 87 NDV: 0 Nulls: 0 Density: 0.000000
  Table: PHOTO  Alias: PHT
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   210.000000
  Scan CPU Cost (Disk) =   13571440.480000
  Total Scan IO  Cost  =   210.000000 (scan (Disk))
                         + 70.000000 (io filter eval) (= 0.007000 (per row) * 10000.000000 (#rows))
                       =   280.000000
  Total Scan CPU  Cost =   13571440.480000 (scan (Disk))
                         + 9138463.200000 (cpu filter eval) (= 913.846320 (per row) * 10000.000000 (#rows))
                       =   22709903.680000

Note the “Total Scan IO Cost” described at line 13 includes a component at line 12 labelled “(io filter eval)” – why, for the predicate “photo is not null”, would we do any special I/O when that predicate can be met in the basic tablescan?

(Note: A predicate like “lob_column is null” means there is no lob locator in place, so no lob access need be done for that test. In fact the related, but very different, predicate “length(lob_column) = 0” meaning the lob locator exists but the lob is “empty” could also be satisfied during the tablescan without reference to the physical lob segment(s) because the length of the lob is included in the lob locator.)

Let’s assume that the optimizer is incorrectly assuming the run-time engine will have to access the lob in some way to determine that the lob is null. The worst case scenario is that Oracle will start by accessing the LOBindex – so why don’t we check how big the LOBindex is. The first step I took was to check the object_id of the LOBindex and then do a tree dump (which showed 66 leaf blocks) and then I checked the segment header block and dumped that with the following results:


  Extent Control Header
  -----------------------------------------------------------------
  Extent Header:: spare1: 0      spare2: 0      #extents: 1      #blocks: 127
                  last map  0x00000000  #maps: 0      offset: 4128
      Highwater::  0x01400447  ext#: 0      blk#: 70     ext size: 127
  #blocks in seg. hdr's freelists: 4
  #blocks below: 70
  mapblk  0x00000000  offset: 0
                   Unlocked
     Map Header:: next  0x00000000  #extents: 1    obj#: 194295 flag: 0x40000000
  Extent Map
  -----------------------------------------------------------------
   0x01400401  length: 127

See the “Highwater::” information at line 6 – the allocated space in the segment is the first 70 blocks of the first extent. That’s (almost certainly) where the incremental cost of 70 (single block I/Os) comes from.  (And I did couple of big updates to the LOB, designed to expand the LOBindex without changing the segment size of the underlying table, to corroborate that hypothesis.)

This brings us to the question of why the cost of the tablescan drops when the subquery is included. Again we generate the 10053 trace and examine the details under the “SINGLE TABLE ACCESS PATH”:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for PHOTO[PHT]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: PHOTO  Alias: PHT
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   210.000000
  Scan CPU Cost (Disk) =   13571440.480000
  Total Scan IO  Cost  =   210.000000 (scan (Disk))
                         + 3.500000 (io filter eval) (= 0.000350 (per row) * 10000.000000 (#rows))
                       =   213.500000
  Total Scan CPU  Cost =   13571440.480000 (scan (Disk))
                         + 656923.160000 (cpu filter eval) (= 65.692316 (per row) * 10000.000000 (#rows))
                       =   14228363.640000


In this case the “(io filter eval)” at line 10 is only 3.5 – and if you know your optimizer and how it handles subqueries you’re allowed to guess that could be one of Oracle’s standard guesses of 5% coming into play. (Again, growing the index seemed to corroborate this hypothesis.)
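Putting the two trace extracts side by side, the "(io filter eval)" components line up with the LOBindex highwater mark and the 5% guess. A sketch of the arithmetic only – the 5% factor is inferred from the numbers, not documented:

```python
# Reconciling the "(io filter eval)" components from the two 10053 extracts
# with the LOBindex size and the optimizer's standard 5% guess.
lobindex_blocks = 70      # blocks below the LOBindex highwater mark (segment dump)
base_scan_io = 210.0      # "Scan IO Cost (Disk)" in both trace extracts
row_count = 10_000        # rows expected from the PHOTO tablescan

# Without the subquery: 0.007 per row * 10,000 rows = the full index size.
no_subq_filter_io = 0.007 * row_count
assert abs(no_subq_filter_io - lobindex_blocks) < 1e-9

# With the subquery in place: the same figure scaled down by the 5% guess.
with_subq_filter_io = 0.05 * lobindex_blocks
assert abs(with_subq_filter_io - 3.5) < 1e-9

print(round(base_scan_io + no_subq_filter_io, 1))    # 280.0 - first extract
print(round(base_scan_io + with_subq_filter_io, 1))  # 213.5 - second extract
```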

So here’s (possible) bug number 2: the first bug is adding a cost for accessing the LOBindex when there should be no need to access the index at all – the execution plan says we will get 10,000 rows from the table, and the filter predicate reports a cardinality reduced by just 1 on a column that has been declared with a uniqueness constraint, yet a fairly commonly used “guess factor” of 5% is used as an indicator of the number of times the lob predicate will be tested. The various bits of the arithmetic are not consistent with each other.

Summary notes:

If you have a tablescan with a predicate that references a lob column then the cost of the tablescan includes the cost of the lob access – and there are cases where lob access is not needed but still gets costed {this is bug number 1 – the predicates are column is/is not null, and length(column) = / != 0}.

If the lob data itself does not need to be accessed then the size of the lob index – which you can’t easily find – may have a significant impact on the cost of the tablescan.

If the query also includes predicates that result in the optimizer guessing about cardinality effects (1%, 5%, 0.25% are common guesses) then that guess may be used to scale the assumed (and potentially irrelevant) cost of the lob access. (There is scope for further experimentation in this area to examine the effects of “non-guess” predicates and the assumed order of application of predicates, e.g. are lob predicates costed as the last to be applied, and does the costing algorithm match the execution order.)

As often happens it’s easy to see that there are oddities in the arithmetic that affect the cost of a query in ways that might make the optimizer pick a silly execution plan. Unfortunately it’s not easy to predict when you’re likely to see the effects of these oddities; the best we can do is remember that there is an anomaly with costing lob-based predicates and hope that we think of it when we see the optimizer picking a bad plan for reasons that initially are not obvious.

How to find the UUID of a device in Linux for Oracle ASM

Pakistan's First Oracle Blog - Sun, 2018-05-27 22:45
UUID stands for Universally Unique Identifier. I use the UUID of a disk device when I need to create and add disks for Oracle ASM, as the UUID is independent of the device name or mountpoint. So it's always a good idea to include the UUID of a device in the fstab file in Linux.

So here is how to find the UUID of a device in Linux for Oracle ASM:




[root@bastion ~]$ ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 11 Jan 18 20:38 1101c254-0b92-42ca-b34a-6d283bd2d31b -> ../../sda2
lrwxrwxrwx 1 root root 11 Jan 18 20:38 11dc5104-c07b-4439-bdab-00a76fcb88df -> ../../sda1
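If you need those UUIDs in a script (for example, to build fstab entries for the ASM disks), the listing is easy to parse. A minimal sketch, assuming output in the same shape as the listing above (the function name is mine):

```python
# Sample output in the same shape as `ls -l /dev/disk/by-uuid`.
listing = """\
lrwxrwxrwx 1 root root 11 Jan 18 20:38 1101c254-0b92-42ca-b34a-6d283bd2d31b -> ../../sda2
lrwxrwxrwx 1 root root 11 Jan 18 20:38 11dc5104-c07b-4439-bdab-00a76fcb88df -> ../../sda1
"""

def uuid_by_device(ls_output):
    """Map device name (e.g. 'sda1') to its UUID from the symlink listing."""
    mapping = {}
    for line in ls_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[-2] == "->":
            uuid, target = parts[-3], parts[-1]
            mapping[target.rsplit("/", 1)[-1]] = uuid  # '../../sda1' -> 'sda1'
    return mapping

print(uuid_by_device(listing)["sda1"])  # 11dc5104-c07b-4439-bdab-00a76fcb88df
```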

HTH.


Categories: DBA Blogs

Oracle ADF BC REST - Performance Review and Tuning

Andrejus Baranovski - Sun, 2018-05-27 00:12
I thought to check how well ADF BC REST scales and how fast it performs. For that reason, I implemented a sample ADF BC REST application and executed a JMeter stress load test against it. You can access the source code for the application and the JMeter script on my GitHub repository. The application is called Blog Visitor Counter for a reason - I'm using the same app to count blog visitors. This means each time you access a blog page - the ADF BC REST service is triggered in the background and logs a counter value with a timestamp (no personal data).

Application structure is straightforward - ADF BC REST implementation:


When the REST service is accessed (a GET request is executed) - it creates and commits a new row in the background (this is why I like ADF BC REST - you have a lot of power and flexibility in the backend), before returning the total logged row count:


The new row is assigned a counter value from a DB sequence, as well as a timestamp. Both values are calculated in Groovy. Another bonus point for ADF BC REST, besides writing logic in Java - you can do scripting in Groovy, which makes the code simpler:


That's it - the ADF BC REST service is ready to run. You may wonder how I'm accessing it from the blog page. ADF BC REST services, like any other REST services, can be invoked through an HTTP request. In this particular case, I'm calling the GET operation through an Ajax call in JavaScript on the client side. This script is uploaded to the blogger HTML:


Performance

I'm using JMeter to execute the performance test. In the example below, the REST GET request is invoked in an infinite loop by 100 concurrent threads. This creates constant load and allows me to measure how the ADF BC REST application performs under such load:


ADF BC REST scales well: with 100 concurrent threads it processes requests in 0.1 - 0.2 seconds. Compared to ADF UI request processing time, that is around 10 times faster. This is expected, because JSF and ADF Faces UI classes are not used during an ADF BC REST request. Performance test statistics for 100 threads, see Avg logged time in milliseconds:
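For readers without JMeter handy, a rough local analogue of that loop can be sketched in a few lines - concurrent workers calling a request function and recording latencies. The `fake_rest_call` stub below (sleeping a few milliseconds) is purely hypothetical and stands in for the real HTTP GET; swap in a real client call to measure an actual endpoint:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def fake_rest_call():
    """Hypothetical stub standing in for the real ADF BC REST GET request."""
    start = time.perf_counter()
    time.sleep(0.002)            # pretend the service answers in ~2 ms
    return time.perf_counter() - start

def load_test(workers=20, requests_per_worker=10):
    """Run requests concurrently and return per-request latencies in seconds."""
    def worker(_):
        return [fake_rest_call() for _ in range(requests_per_worker)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(worker, range(workers))
    return [latency for batch in results for latency in batch]

latencies = load_test()
print(f"requests: {len(latencies)}, avg: {mean(latencies) * 1000:.1f} ms")
```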


Tuning

1. Referenced Pool Size and Application Module Pooling

ADF BC REST executes requests in stateless mode - REST is stateless by nature. I thought to check what this means for Application Module tuning parameters. I have observed that changing the Referenced Pool Size value doesn't influence application performance - it works the same with 0 or any other value. The Referenced Pool Size parameter is not important for ADF BC REST runtime:


The application performs well under load; there are no passivations/activations logged, even when Referenced Pool Size is set to zero.


However, I found that it is still important to keep Enable Application Module Pooling = ON. If you switch it OFF, passivation will start to appear, which consumes processing power and is strongly discouraged. So, keep Enable Application Module Pooling = ON.

2. Disconnect Application Module Upon Release

It is important to set Disconnect Application Module Upon Release = ON (read more about it - ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). This will ensure there are always near zero DB connections left open:


Otherwise, if we keep Disconnect Application Module Upon Release = OFF:


DB connections will not be released promptly:


This summarises important points related to ADF BC REST tuning.

Pages

Subscribe to Oracle FAQ aggregator