Feed aggregator

Virtual Private Database

Tom Kyte - Wed, 2017-11-15 15:46
Hello Tom, I know you can use VPD to restrict data horizontally - that is restrict rows based on where clauses. However, is there a way to automatically hide columns from users using VPD? Let's say there is a table called MBR(MBR_ID NUMBER, LNAM...
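The question is cut off above, but for orientation: VPD column masking is done with the sec_relevant_cols / sec_relevant_cols_opt parameters of DBMS_RLS.ADD_POLICY. As a hedged sketch only (schema, policy function, and the masked column name are illustrative, not taken from the question):

```sql
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema         => 'APP',               -- illustrative schema
    object_name           => 'MBR',
    policy_name           => 'mbr_mask_lname',
    function_schema       => 'APP',
    policy_function       => 'f_mbr_predicate',   -- hypothetical function returning the predicate
    sec_relevant_cols     => 'LNAME',             -- assumed column name; the question is truncated
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);  -- return all rows, but NULL the sensitive column
END;
/
```

With ALL_ROWS, users who fail the predicate still see every row, but the listed columns come back as NULL - the "hide columns" behavior the question asks about.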
Categories: DBA Blogs

merge the cursor output of multiple stored procedures into one result

Tom Kyte - Wed, 2017-11-15 15:46
Hello Tom, Can you please tell me how I can merge the output of 6 stored procedures into one result? The output of each stored procedure is a cursor which holds n records (the record data structure is the same). I have to merge the data and ...
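The question is truncated, but one common pattern for this (a hedged sketch - the row type, collection type, and cursor helper below are all assumed, not from the question) is a pipelined function that drains each procedure's ref cursor and pipes the rows out as a single result set:

```sql
-- Assumes: CREATE TYPE t_row AS OBJECT (...); CREATE TYPE t_tab AS TABLE OF t_row;
-- and a hypothetical helper get_cursor(i) that opens the i-th procedure's
-- SYS_REFCURSOR selecting t_row instances.
CREATE OR REPLACE FUNCTION merge_all RETURN t_tab PIPELINED IS
  rc  SYS_REFCURSOR;
  rec t_row;
BEGIN
  FOR i IN 1 .. 6 LOOP
    rc := get_cursor(i);
    LOOP
      FETCH rc INTO rec;
      EXIT WHEN rc%NOTFOUND;
      PIPE ROW (rec);      -- emit each row into the merged result
    END LOOP;
    CLOSE rc;
  END LOOP;
  RETURN;
END;
/
-- Usage: SELECT * FROM TABLE(merge_all);
```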
Categories: DBA Blogs

need a single update query for below question

Tom Kyte - Wed, 2017-11-15 15:46
I have a table locations that contains the data below:

  loc_id  location_name
  101     newyork
  102     losangels
  103     chicago
  104     boston
  105     dallas

Is it possible to write a single query to update all records like below?

  loc_id  location_name
  101...
Categories: DBA Blogs

Easy (lazy) way to check which programs have properly configured FetchSize

XTended Oracle SQL - Wed, 2017-11-15 15:37
select s.module
      ,ceil(max(s.rows_processed/s.fetches)) rows_per_fetch
from v$sql s
where s.executions      > 1
  and s.fetches         > 1
  and s.module is not null
  and s.command_type    = 3    -- SELECTs only
  and s.program_id      = 0    -- do not count recursive queries from stored procs
  and s.parsing_schema_id != 0 -- <> SYS
group by s.module
order by rows_per_fetch desc nulls last
Categories: Development

How Long Can I Get Support for a Specific Java Update?

Steven Chan - Wed, 2017-11-15 15:11

Support timelines for Oracle products can be tricky to understand. The period during which an overall product release receives updates is governed by the dates in the Oracle Lifetime Support Policies.

EBS users have 12 months to upgrade to the latest Fusion Middleware component patchsets, and 24 months to upgrade to the latest database components. These are called grace periods.

For dates of grace periods for specific Database or Fusion Middleware patchsets, see:

What are the support dates for different Java releases?

Extended Support for Java SE 6 ends on December 31, 2018. E-Business Suite customers must upgrade their servers to Java SE 7 before that date.

Premier Support for Java SE 7 runs to July 31, 2019. Extended Support for Java SE 7 runs to July 31, 2022. 

Do Java updates have grace periods?

No. Support for Java updates works differently than other Oracle products.  New bug fixes and security updates are always delivered on top of the latest Java update available at the time.

This policy applies to Java running on EBS servers, as well as JRE and Java Web Start running on end-user client desktops.

For example:

As of the date this article was published, the latest Java SE 7 release available is Java 7 Update 161 (1.7.0_161).

If you report an issue today with an earlier Java SE 7 update, such as Java 7 Update 10 (1.7.0_10), you will be asked to apply 1.7.0_161 and attempt to reproduce the issue.

If the issue does not reproduce, then the solution will be to apply 1.7.0_161 to all of your end-user desktops.

If the issue does reproduce, then Oracle Java Support will log a bug and fix the issue on a Java release later than 1.7.0_161.

Categories: APPS Blogs

Conditional Navigation based on Queries in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2017-11-15 13:21

A couple of threads on the Oracle Visual Builder Cloud Service forum asked about writing code in buttons in VBCS that compares values entered in a page to data in business objects and performs conditional navigation based on those values. In a past blog I showed the code needed for querying VBCS objects from the UI, but another sample never hurts, so here is another demo...

For this demo I'm going to show how to do it in a login flow - assuming you have a business object that keeps usernames and passwords, and you want to develop a page where a user types a user/pass combination and you need to verify that this is indeed a valid combination that exists in the business object.

(In reality, if you want to do user authentication in VBCS you should use the built-in security frameworks and not code it this way. I'm just using this as an example.)

Here is a quick video of the working app - with pointers to the components detailed below.

The first thing you'll do is create the business object that hosts the user/pass combination. Note that, since "user" is a reserved word, the ID for the field in the video is actually "user_" - which is what we'll use in our code later on.


Next you'll want to create a new page where people can insert a user/pass combination - to do that create a new page of type "Create" - this page will require you to associate it with a business object, so create a new business object. We won't actually keep data in this new business object. In the video and the code - this business object is called "query".

Now design your page and add the user and pass fields - creating parallel fields in the query business object (quser and qpass in the video). You can then remove the "Save" button, which won't be used, and instead add a "validate" button.

For this new button we'll define a new custom action that will contain custom JavaScript code. Custom code should return either a success state - using resolve(); - or failure - using reject();

Based on the success or failure you can define the next action in the flow - in our case we are showing either a success or error message:

success flow

Now let's look at the custom JavaScript code:

require(['operation/js/api/Conditions', 'operation/js/api/Operator'], function (Conditions, Operator) {
    var eo = Abcs.Entities().findById('Users');
    var passid = eo.getProperty('pass');
    var userid = eo.getProperty('user_');
    var condition = Conditions.AND(
        Conditions.SIMPLE(passid, Operator.EQUALS, $QueryEntityDetailArchetypeRecord.getValue('qpass')),
        Conditions.SIMPLE(userid, Operator.EQUALS, $QueryEntityDetailArchetypeRecord.getValue('quser'))
    );
    var operation = Abcs.Operations().read({
        entity: eo,
        condition: condition });
    operation.perform().then(
        function (operationResult) {
            if (operationResult.isSuccess()) {
                operationResult.getData().forEach(function (oneRecord) {
                    resolve("ok");
                });
            }
            reject("none");
        }).catch(function (operationResult) {
            if (operationResult.isFailure()) {
                // Insert code you want to perform if fetching of records failed
                alert("didn't work");
                reject("error");
            }
        });
});

Explaining the code:

  • Lines 2-4 - getting the pointers to the business object and the fields in it using their field id.
  • Lines 5-8 - defining a condition with AND - referencing the values of the fields on the page
  • Lines 9-11 - defining the operation to read data with the condition from the business object
  • Line 12 - executing the read operation
  • Lines 14-18 - checking if a record has been returned and if it has then we are ok to return success - there was a user/pass combination matching the condition.
  • Line 19 - otherwise we return with a failure.

One recommendation: while coding JavaScript, use a good code editor that highlights matching open/close brackets - it will save you a lot of time.

For more on the VBCS JavaScript API that you can use for accessing business components see the doc.

Categories: Development

Can I do it with PostgreSQL? – 18 – Instead of triggers on views

Yann Neuhaus - Wed, 2017-11-15 09:18

It has been quite a while since the last post in this series, but today comes the next one. Being at a customer this morning, this question popped up: Can we have INSTEAD OF triggers on a view in PostgreSQL as well? I couldn't immediately answer (although I was quite sure you can), so here is the test. I took an example for Oracle from here and re-wrote it in PostgreSQL syntax.

I took the same tables and adjusted the data types:

CREATE TABLE CUSTOMER_DETAILS ( CUSTOMER_ID INT PRIMARY KEY
                              , CUSTOMER_NAME VARCHAR(20)
                              , COUNTRY VARCHAR(20) );

CREATE TABLE PROJECTS_DETAILS ( PROJECT_ID INT PRIMARY KEY
                              , PROJECT_NAME VARCHAR(30)
                              , PROJECT_START_DATE DATE
                              , CUSTOMER_ID INT REFERENCES CUSTOMER_DETAILS );

The same view definition:

CREATE OR REPLACE VIEW customer_projects_view AS
   SELECT cust.customer_id, cust.customer_name, cust.country,
          projectdtls.project_id, projectdtls.project_name,
          projectdtls.project_start_date
   FROM customer_details cust, projects_details projectdtls
   WHERE cust.customer_id = projectdtls.customer_id;

Try to insert:

postgres=# INSERT INTO customer_projects_view VALUES (1,'XYZ Enterprise','Japan',101,'Library management',now());
ERROR:  cannot insert into view "customer_projects_view"
DETAIL:  Views that do not select from a single table or view are not automatically updatable.
HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT trigger or an unconditional ON INSERT DO INSTEAD rule.
Time: 2.135 ms

… and the answer is already in the error message. So obviously we should be able to do that. In PostgreSQL you need a trigger function:

CREATE OR REPLACE FUNCTION cust_proj_view_insert_proc() RETURNS trigger AS $$
BEGIN
   INSERT INTO customer_details (customer_id,customer_name,country)
          VALUES (NEW.customer_id, NEW.customer_name, NEW.country);

   INSERT INTO projects_details (project_id, project_name, project_start_date, customer_id)
          VALUES (NEW.project_id, NEW.project_name, NEW.project_start_date, NEW.customer_id);

   RETURN NEW;
EXCEPTION WHEN unique_violation THEN
   RAISE EXCEPTION 'Duplicate customer or project id';
END;
$$ LANGUAGE plpgsql;

Then we need a trigger calling this function:

create trigger cust_proj_view_insert_trg 
    instead of insert on customer_projects_view for each row EXECUTE procedure cust_proj_view_insert_proc();

Try the insert again:

INSERT INTO customer_projects_view VALUES (1,'XYZ Enterprise','Japan',101,'Library management',now());
INSERT INTO customer_projects_view VALUES (2,'ABC Infotech','India',202,'HR management',now());

… and here we are:

postgres=# select * FROM customer_details;
 customer_id | customer_name  | country 
-------------+----------------+---------
           1 | XYZ Enterprise | Japan
           2 | ABC Infotech   | India
(2 rows)

Definitely, you can :)


Cet article Can I do it with PostgreSQL? – 18 – Instead of triggers on views est apparu en premier sur Blog dbi services.

Creating a PDB in a desired location in 12.2

Hemant K Chitale - Wed, 2017-11-15 08:53
A video on creating a Pluggable Database in a desired location, using the command-line.
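The video isn't transcribed here, but as a hedged sketch of the kind of command it likely covers (names and paths below are illustrative): 12.2 lets you place a new PDB's files in a chosen location directly in the CREATE statement:

```sql
-- One option: let Oracle manage file names under a chosen directory
CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdbadmin IDENTIFIED BY secret
  CREATE_FILE_DEST = '/u02/oradata/CDB1/pdb2';

-- Alternative: map the seed's file paths explicitly
CREATE PLUGGABLE DATABASE pdb3
  ADMIN USER pdbadmin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u02/oradata/CDB1/pdb3/');
```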

Categories: DBA Blogs

SOASuite 12c: keep running instances using ANT

Darwin IT - Wed, 2017-11-15 08:41
At my current customer I implemented a poor man's devops solution for Release and Deploy. It was based on a framework created as bunch of Ant projects, that I created years ago. It was based on scripts from Edwin Biemond. See for instance here, here and here. I never wrote about my solution, because although I refactored them quite intensively, the basics were already described thoroughly by him.

What I did was modularize the lot, split the environment property files, add logging, add OSB 12c support based on the config jar tool, etc.

One thing I ran into this week was that at the first deployment from our team to the test environment using my framework, the running instances of the BPM projects were aborted.

Now, if you take a look at the deploy.sarLocation target in Edwin's article about deploying soa suite composites  you'll find that he also supported the overwrite and forceDefault properties.

When re-deploying a composite from JDeveloper you're probably familiar with the 'keep running instances' checkbox. I was looking for the ANT alternative in the ${oracle.home}/bin/ant-sca-deploy.xml scripting. First I looked in the 12c docs (see 47.9.4 How to Use ant to Deploy a SOA Composite Application), but it is not documented there.

But when I opened the particular ant-sca-deploy.xml script I read:
 <condition property="overwrite" value="false">
  <not><isset property="overwrite"/></not>
 </condition>
 <condition property="forceDefault" value="true">
  <not><isset property="forceDefault"/></not>
 </condition>
 <condition property="keepInstancesOnRedeploy" value="false">
  <not><isset property="keepInstancesOnRedeploy"/></not>
 </condition>
 <target name="deploy">
  <input message="Please enter server URL:" addproperty="serverURL"/>
  <input message="Please enter sar location:" addproperty="sarLocation"/>
  <input message="Please enter username:" addproperty="user"/>
  <input message="Please enter password:" addproperty="password">
   <handler classname="org.apache.tools.ant.input.SecureInputHandler"/>
  </input>
  <deployComposite serverUrl="${serverURL}" sarLocation="${sarLocation}" realm="${realm}" user="${user}" password="${password}"
                   overwrite="${overwrite}" forceDefault="${forceDefault}" keepInstancesOnRedeploy="${keepInstancesOnRedeploy}"
                   regenerateRuleBase="${regenerateRuleBase}" configPlan="${configplan}" scope="${scope}"
                   sysPropFile="${sysPropFile}" failOnError="${failOnError}" timeout="${timeout}" folder="${folder}"/>
 </target>

So, the script kindly supports the keepInstancesOnRedeploy property. And thus I implemented a deploy.keepInstancesOnRedeploy property the same way as the deploy.forceDefault/deploy.overwrite properties.

This is probably useful for Maven-based (re-)deployments as well.

Auto pre-warming in EDB Postgres Advanced Server 10

Yann Neuhaus - Wed, 2017-11-15 06:28

Some days ago EDB Postgres Advanced Server 10 was released, and one feature which might be handy is auto pre-warming. What it does is save a description of the buffers currently loaded into shared_buffers to disk, and then automatically re-read those buffers when the instance is restarted. Let's see how it works.

Before getting the feature to work we need to look at two parameters which control the behavior:

  • pg_prewarm.autoprewarm: Enables or disables the feature
  • pg_prewarm.autoprewarm_interval: The interval at which the current state is written to disk, or 0 to write only once when the instance shuts down

Another requirement is to load the library when the instance starts:

postgres=# alter system set shared_preload_libraries ='$libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq,$libdir/pg_prewarm';

Once the instance is restarted we can proceed with the configuration:

postgres=# alter system set pg_prewarm.autoprewarm=true;
postgres=# alter system set pg_prewarm.autoprewarm_interval='10s';

By doing this we told the server to write the current state of the buffers to disk every 10 seconds. You’ll also notice a new background worker process which is responsible for doing the work:

postgres=# \! ps -ef | grep prewarm | egrep -v "ps|grep"
postgres  3682  3675  0 12:05 ?        00:00:00 postgres: bgworker: autoprewarm   

Let's load something into shared_buffers:

postgres=# insert into t1 select a, md5(a::varchar) from generate_series(1,1000) a;
INSERT 0 1000
postgres=# select count(*) from t1;
 count 
-------
  1000
(1 row)
postgres=# explain (analyze,buffers) select count(*) from t1;
                                               QUERY PLAN                                                
 Aggregate  (cost=21.50..21.51 rows=1 width=8) (actual time=0.492..0.492 rows=1 loops=1)
   Buffers: shared hit=9
   ->  Seq Scan on t1  (cost=0.00..19.00 rows=1000 width=0) (actual time=0.019..0.254 rows=1000 loops=1)
         Buffers: shared hit=9
 Planning time: 0.070 ms
 Execution time: 0.538 ms
(6 rows)

The “shared hit” confirms that we read the buffers from shared_buffers and not from the OS/file system cache. Then let's restart and do the same check again:

postgres@centos7:/u02/pgdata/PG4/ [EDB10] pg_ctl -D . restart -m fast
postgres@centos7:/u02/pgdata/PG4/ [EDB10] psql -X postgres
psql.bin (10.1.5)
Type "help" for help.

postgres=# explain (analyze,buffers) select count(*) from t1;
                                               QUERY PLAN                                                
 Aggregate  (cost=21.50..21.51 rows=1 width=8) (actual time=0.586..0.586 rows=1 loops=1)
   Buffers: shared hit=9
   ->  Seq Scan on t1  (cost=0.00..19.00 rows=1000 width=0) (actual time=0.024..0.295 rows=1000 loops=1)
         Buffers: shared hit=9
 Planning time: 0.451 ms
 Execution time: 0.766 ms
(6 rows)


… here we go. How is this information stored? When you take a look at $PGDATA you’ll notice a file with the following format:

postgres@centos7:/u02/pgdata/PG4/ [EDB10] cat $PGDATA/autoprewarm.blocks | tail

The first field is the OID of the database:

postgres=# select oid,datname from pg_database where oid=15471;
  oid  | datname  
 15471 | postgres
(1 row)

The second one is the tablespace:

postgres=# select oid,spcname from pg_tablespace where oid=1663;
 oid  |  spcname   
 1663 | pg_default
(1 row)

The third one is the table:

postgres=# select oid,relname from pg_class where oid = 16402;
  oid  | relname 
 16402 | t1
(1 row)

postgres=# \! grep 16402 $PGDATA/autoprewarm.blocks

The fourth one is the fork/file (0 is the datafile, 1 is the free space map) and the last one is the actual block to load. This is also described in “./contrib/pg_prewarm/autoprewarm.c” in the PostgreSQL source code:

/* Metadata for each block we dump. */
typedef struct BlockInfoRecord
{
        Oid             database;
        Oid             tablespace;
        Oid             filenode;
        ForkNumber      forknum;
        BlockNumber     blocknum;
} BlockInfoRecord;

For community PostgreSQL there is the contrib module pg_prewarm you can use for that, check here.


Cet article Auto pre-warming in EDB Postgres Advanced Server 10 est apparu en premier sur Blog dbi services.

Podcast: What's Hot? Tech Trends That Made a Real Difference in 2017

OTN TechBlog - Wed, 2017-11-15 05:00

Innovation never sleeps, and tech trends come at you from every angle. That's business as usual in the software developer's world. In 2017, microservices, containers, chatbots, blockchain, IoT, and other trends drew lots of attention and conversation. But what trends and technologies penetrated the hype to make a real difference?

In order to get a sense of what's happening on the street, we gathered a group of highly respected software developers, recognized leaders in the community, crammed them into a tiny hotel room in San Francisco (they were in town to present sessions at JavaOne and Oracle OpenWorld), tossed in a couple of microphones, and asked them to talk about the technologies that actually had an impact on their work over the past year. The resulting conversation is lively, wide-ranging, often funny, and insightful from start to finish. Listen for yourself.

The Panelists

(listed alphabetically)

Lonneke Dikmans
Chief Product Officer, eProseed
Oracle ACE Director
Developer Champion


Lucas Jellema
Chief Technical Officer, AMIS Services
Oracle ACE Director
Developer Champion


Frank Munz
Software Architect, Cloud Evangelist, Munz & More
Oracle ACE Director
Developer Champion


Pratik Patel
Chief Technical Officer, Triplingo
President, Atlanta Java Users Group
Java Champion
Code Champion


Chris Richardson
Founder, Chief Executive Officer, Eventuate Inc.
Java Champion
Code Champion


Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


Import connections

Tom Kyte - Tue, 2017-11-14 21:26
I need a way to import *new* connections into SQL Developer without typing them in manually. Is it possible to import them from a spreadsheet or CSV?
Categories: DBA Blogs

Snapshot database

Tom Kyte - Tue, 2017-11-14 21:26
Dear Team, I have Active Data Guard with a flashback retention policy of 1 hour and an undo retention of 900. I want to convert the standby database to a snapshot database for a few days; please let me know whether this is possible, and whether the above-mentioned parameters are depe...
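The question is truncated, but for context, the conversion itself (a hedged sketch, independent of the retention settings the question asks about) is done on the mounted standby with managed recovery stopped:

```sql
-- On the standby, after stopping managed recovery (requires a fast recovery area)
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ... use the database read-write for a few days ...
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```

Whether it can stay a snapshot standby for a few days depends mainly on the fast recovery area being large enough to hold the flashback logs plus the archived redo received over that period.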
Categories: DBA Blogs

Alert log monitoring script

Tom Kyte - Tue, 2017-11-14 21:26
Hi, Could anyone please help me with a script to monitor the alert log? Every hour it needs to fetch the errors and send them by mail. Thanks
Categories: DBA Blogs

Disabling DBMS_SCHEDULER jobs on import

Tom Kyte - Tue, 2017-11-14 21:26
We refresh our test and development environments from production weekly. Is there a way to automatically disable DBMS_SCHEDULER jobs on import? I have a "reset" script that I run after the refresh, but "overdue" jobs seem to run immediately after imp...
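Not part of the excerpt, but a common building block for such a "reset" script (a hedged sketch - the schema filter is illustrative) is a PL/SQL loop over DBA_SCHEDULER_JOBS:

```sql
BEGIN
  FOR j IN (SELECT owner, job_name
              FROM dba_scheduler_jobs
             WHERE owner IN ('APP_OWNER')   -- illustrative schema list
               AND enabled = 'TRUE') LOOP
    DBMS_SCHEDULER.DISABLE(j.owner || '."' || j.job_name || '"', force => TRUE);
  END LOOP;
END;
/
```

Running this immediately after the import helps keep "overdue" jobs from firing before the reset is finished.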
Categories: DBA Blogs

Security Alert CVE-2017-10269 Released

Oracle Security Team - Tue, 2017-11-14 15:07

Oracle just released Security Alert CVE-2017-10269 to address a number of vulnerabilities affecting the Jolt Server within Oracle Tuxedo.  The maximum reported CVSS Base Score for these vulnerabilities is 10.0. 

Note that the Oracle Jolt client is not affected by these vulnerabilities.  However, Oracle PeopleSoft products include and make use of Oracle Tuxedo and as a result, PeopleSoft customers should apply the patches provided by this Security Alert.  Oracle strongly recommends affected Oracle Customers apply this Security Alert as soon as possible.


For More Information:

The Advisory for Security Alert CVE-2017-10269 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2017-10269-4021872.html

Arizona Oracle User Group meeting Thursday

Bobby Durrett's DBA Blog - Tue, 2017-11-14 13:25

The Arizona Oracle User Group (AZORA) is meeting this Thursday, November 16th, 2017 from 12:30 to 3:30 pm Arizona time.

Here is the link to the Meetup:

AZ-ORA November Meetup – Dan Morgan live!

Thursday, Nov 16, 2017, 12:30 PM

Republic Services – 2nd Floor Conference Rooms
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ

13 AZORAS Attending

It’s time to learn more about Oracle and Oracle products! Dan Morgan from Forsythe Meta 7 will be speaking at our November 16 Arizona Oracle User Group (AZ-ORA) meeting on two topics: “Channeling Oracle OpenWorld 2017” for DBAs and SQL & PL/SQL Developers and “Security for Oracle DBAs and Developers” – more details to come…When: Thursday Novembe…

Check out this Meetup →

Here is a link to the user group site with more information:

AZORA Meeting Nov 16, 2017

I’m looking forward to it. I hope to see you there!


Categories: DBA Blogs

Quest Experience Week--PeopleSoft Day Virtual Conference

PeopleSoft Technology Blog - Tue, 2017-11-14 13:07

The Quest user group recently conducted a virtual conference for the PeopleSoft user community.  PeopleSoft Day consisted of 19 sessions on a variety of topics about  applications as well as PeopleTools and technology.  There were keynotes, roadmaps, and numerous educational sessions.  If you didn't have a chance to attend the conference live, you can view recorded videos of the sessions on the Quest Content Library.  (You must be a Quest member to access content.)  If you attended live but missed some sessions because you were attending others at the same time, now is your chance to review what you missed.

This event was well attended, and feedback was positive.  Look for more of these Quest events in the future.  They are free to Quest members.

Dynamic Sampling vs. Extended Statistics

Yann Neuhaus - Tue, 2017-11-14 10:09

On datawarehouse databases, I frequently recommend increasing the level of dynamic sampling because:

  • Queries have complex predicates with AND, OR, IN(), ranges and correlated values for which the optimizer cannot estimate the cardinality properly
  • Queries are long anyway (compared to OLTP) and can afford more parse time to get an optimized execution plan

However, there’s a drawback with this approach: sometimes the dynamic sampling estimation gives bad estimates and supersedes static statistics that were better. Here is an example.

I run with the following parameters:

SQL> show parameter adaptive;
--------------------------------- ------- -----
optimizer_adaptive_plans boolean TRUE
optimizer_adaptive_reporting_only boolean FALSE
optimizer_adaptive_statistics boolean FALSE
optimizer_dynamic_sampling integer 4

The Dynamic Sampling level comes from previous version (11g) and the Adaptive Statistics have been disabled because of all the problems seen in 12cR1 with Adaptive Dynamic Sampling bugs.

I have a query with very bad response time for some values, going to nested loops for 50000 rows. The reason is an under-estimate in the following part of the query:

SQL> explain plan for
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2187255533
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(UPPER("OPRID")='QWERTZ')
- dynamic statistics used: dynamic sampling (level=4)

The estimation is 82 rows but there are actually 50000 rows. We can see that dynamic sampling was used. The misestimate is probably caused by a sample that is too small.

Ok, a query with an UPPER() function on the column is not a good idea. Let’s try to gather statistics for the expression:

SQL> exec dbms_stats.gather_table_stats('APP_OWNR','TBL_APPLICATION1_ID',method_opt=>'for all columns size auto for columns (upper(OPRID)) size auto');
PL/SQL procedure successfully completed.
SQL> explain plan for
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2187255533
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(UPPER("OPRID")='QWERTZ')
- dynamic statistics used: dynamic sampling (level=4)

We have the same misestimate. But the problem is not our statistics on the expression: this query still does dynamic sampling.

Here’s the proof that we have good static statistics:

SQL> alter session set optimizer_dynamic_sampling=2;
Session altered.
SQL> explain plan for
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2187255533
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 48594 | 332K| 964 (1)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(UPPER("OPRID")='QWERTZ')

Dynamic Sampling did not occur at level 2. Now the estimation is ok thanks to the extended statistics. I have a top-frequency histogram where the cardinality of popular value is exact.

The problem is that dynamic sampling is supposed to add more information to the optimizer, but in this case, it replaces the static statistics. In level 4, dynamic sampling is done as soon as there is a complex predicate in the where clause. And the use of the UPPER() function is considered as a complex predicate. However, in this case, because I have extended statistics, it should be considered as a simple column=value predicate.

Here I’ve set dynamic sampling manually, but this is also what happens when SQL Plan Directives trigger the use of Dynamic Sampling and the good histogram is ignored. This reminds me of a Ludovico Caldara blog post about SPD.

Here, maybe, the solution would be Adaptive Dynamic Sampling which may increase the level of sampling when needed:

SQL> alter session set optimizer_dynamic_sampling=11;
Session altered.
SQL> explain plan for
SQL> select * from table(dbms_xplan.display);
Plan hash value: 2187255533
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 37831 | 258K| 964 (1)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(UPPER("OPRID")='QWERTZ')
- dynamic statistics used: dynamic sampling (level=AUTO)

In this case, Adaptive Dynamic Sampling is a good approximation. But it would be better to have a level of dynamic sampling that does not consider a predicate as a complex one when the extended statistics exactly match the predicate. Before there is enough artificial intelligence to cope with this, the best recommendation is to focus on design. In this case, ensuring that we have only uppercase values is the best way to keep queries and estimations simple.


Cet article Dynamic Sampling vs. Extended Statistics est apparu en premier sur Blog dbi services.

Meet me in Australia and New Zealand at the OTN Days 2017

Dimitri Gielis - Tue, 2017-11-14 08:32
Tonight I'll start my trip from Belgium to Australia and New Zealand. Although we have a company in New Zealand and Australia, which Lino is managing, I've never been there myself. It has always been my dream to visit the other side of the earth, so I look forward to it :)

I'll present on how I build Oracle APEX apps today (and in the future) and how to make them available for others (cloud and others).

My schedule of the OTN Days 2017 (APAC Tour) looks like this:
If you are in one of those places, I would love to meet you and hear how you use Oracle APEX.
And I'm always up for showing you a live demo of APEX Office Print - you'll be the first to see our upcoming AOP 3.2 version! Just grab me by the arm and ask :)

In Perth there will also be a Q&A slot - so any Oracle APEX question can be asked there.
Categories: Development


Subscribe to Oracle FAQ aggregator