Feed aggregator

Links for 2017-03-23 [del.icio.us]

Categories: DBA Blogs

Upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 Steps

Pakistan's First Oracle Blog - Thu, 2017-03-23 22:14
Just noting down the high-level steps performed for the upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 on 64-bit RHEL.

Create a backup of the 11g GI home and the inventory as the root user:

. oraenv <<< +ASM                # set the ASM environment first
sudo su -                        # then switch to root for the backup
cd /oracle/backup
tar -cvf backup_GI.tar /u01/grid/product/11.2.0.3
tar -cvf backup_inventory.tar /var/opt/ora/oraInventory/

Stop all database instances and the listener.
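For example, once connected to each instance as SYSDBA (a minimal sketch; srvctl from the shell is the more usual route in a clustered GI environment):

-- Repeat for every database instance; stop the listener separately (lsnrctl stop).
shutdown immediate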

Run the 12c GI runInstaller in silent mode, using the UPGRADE option in the response file.

Run the rootupgrade.sh script as root user.

Check the HAS version and the oratab:

crsctl query has releaseversion
cat /etc/oratab
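As an optional sanity check (an extra step, not part of the original list), you can also confirm the software version from the upgraded +ASM instance in SQL*Plus:

-- Expect VERSION to report 12.1.0.2.0 after a successful upgrade.
select instance_name, version, status from v$instance;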

Start the listener and the database instances.

Clean up the backup files (this can be done a few days later).

Hope that helps.
Categories: DBA Blogs

Difference between Two SQL's Count

Tom Kyte - Thu, 2017-03-23 21:26
Hi Chris/Connor, I need to find the count difference of the two queries below, like -- Get the count of the first SQL -- Get the count of the second SQL -- Then COUNT(of 1st SQL) - COUNT(of 2nd SQL) -- If the count is greater than 0 then I need to return...
Categories: DBA Blogs

MAX and ORDER working different

Tom Kyte - Thu, 2017-03-23 21:26
Hello, select rn, code, max(code) over () as mcode from ( select 1 as rn, '1C538454-D89A-E211-88D1-1CC1DEE781E4' as code from dual union all select 2 as rn, 'C5FAEF8F-D1B0-E211-A69F-1CC1DEE781E4' as code from dual ) order by code a...
Categories: DBA Blogs

12c Table Clustering feature and storage indexes, bloom filters and where filters

Tom Kyte - Thu, 2017-03-23 21:26
I would be grateful if you can shed some light on the question of using the 12c clustering (sorting) table data to improve the effectiveness of storage indexes. The question is: If there is (A) a widely used predicate on columns SUBMISSION_UNIT_C...
Categories: DBA Blogs

same explain plan but once in a while query is taking unusual time

Tom Kyte - Thu, 2017-03-23 21:26
Hi, In my Application, one query with fixed filter columns and dynamic values, performs in msecs most of the time but once in a while the query is taking up to 1 min. By using SQL ID I was able to get explain plan, which is same for long runni...
Categories: DBA Blogs

Error on table PEGANBAM_EXTERNAL.self_service_id, ELISA_ID. Field in data file exceeds maximum length

Tom Kyte - Thu, 2017-03-23 21:26
Error is : Record 4: Rejected - Error on table SELF_SERVICE_ID_B, column "SSN". ORA-01400: cannot insert NULL into ("PEGANBAM_EXTERNAL"."SELF_SERVICE_ID_B"."SSN") Here is my control file below: OPTIONS ( DIRECT = FALSE , PARALLEL ...
Categories: DBA Blogs

Large Number of buffer Gets

Tom Kyte - Thu, 2017-03-23 21:26
Hi Team, I am not able to understand a few things while running SQL. I am running a SQL statement which does a full table scan of around 7 million records, and if I check gv$sql it has a high number of IOs, almost around 100 million. My question is: 1. ...
Categories: DBA Blogs

Exchange range-hash partition to another range-hash partition table.

Tom Kyte - Thu, 2017-03-23 21:26
Hi, I want to do partition exchange from Table A to Table B. Following is one test case, create table RANGE_HASH_TEST ( TRN_DT DATE, SEQ_NO NUMBER, REF_NO VARCHAR2(26) ) PARTITION BY RANGE (TR...
Categories: DBA Blogs

Need an SQL script to insert multiple bind variables all at a time into the "Select statement at where clause".

Tom Kyte - Thu, 2017-03-23 21:26
Hi Tom, Good Morning, This is Venkatesh and I am from India,I need an SQL script to insert multiple bind variables all at a time into the "Select statement at where clause". For eg: Select * from Tab where col in ('val1','val2','val3'.......
Categories: DBA Blogs

TimesTen

Tom Kyte - Thu, 2017-03-23 21:26
Hi, Please answer my questions below: 1. What is TimesTen SQL? 2. What's the difference between TimesTen SQL and PL/SQL?
Categories: DBA Blogs

sqlnet parameter

Tom Kyte - Thu, 2017-03-23 21:26
What is the difference between SQLNET.EXPIRE_TIME and SQLNET.INBOUND_CONNECT_TIMEOUT, in simple terms? And how can I simulate both parameters?
Categories: DBA Blogs

Can You Say That An Oracle Database is nn% secure?

Pete Finnigan - Thu, 2017-03-23 21:26
I often get this type of question from customers and also from people I speak to and even a few times by email. The question is "can you tell us how secure our database is?", is it 10% secure, is....[Read More]

Posted by Pete On 23/03/17 At 03:22 PM

Categories: Security Blogs

Importance of visualizations and analytics for small business owners

Nilesh Jethwa - Thu, 2017-03-23 11:30

Small business owners are all about "getting the biggest bang for the buck". As small business owners, we want everything to work efficiently and to make sure no resources are wasted, including human resources, time, and money.

The question now is "How do you make sure your operation is efficient?" Not by any magic! You already have the data that can be used to your advantage. Times have changed, and Business Intelligence tools are no longer limited to rich corporations. Forget the full Business Intelligence stack: your data and an easy-to-use visualization tool can quickly deliver a "gold mine" of information.

There is no doubt that the right visualization tools can help small business owners on their way to success. It is just a matter of putting your focus on "being more efficient and nimble".

Read more at http://www.infocaptor.com/dashboard/benefits-of-visualizations-and-analytics-for-small-business-owners

Reminder: Upgrade BPEL 11.1.1.7 to 11.1.1.9 Before December 2018

Steven Chan - Thu, 2017-03-23 10:48

Oracle Fusion Middleware products get new Patch Set updates.  When a new Patch Set is released, a 12-month Grace Period for the previous Patch Set begins.  Once that Grace Period ends, no new patches for the previous Patch Set will be released.

For more details, see Note 1290894.1.

Oracle BPEL Process Manager is part of Oracle SOA Suite 11.1.1.x.  Note 1290894.1 does not have a separate listing for Oracle SOA Suite; it refers to "Oracle Fusion Middleware" (FMW) instead. The references in that document to "FMW" implicitly include SOA Suite.

SOA Suite 11.1.1.7 was released in April 2013.  SOA Suite 11.1.1.9 was released in May 2015, which means that the Grace Period for SOA Suite 11.1.1.7 will end after December 2018. 

All E-Business Suite users running BPEL Process Manager in SOA Suite 11.1.1.7 should upgrade to BPEL Process Manager in SOA Suite 11.1.1.9 to remain under Error Correction Support. SOA Suite 11.1.1.x is covered by Premier Support to December 2018, and covered by Extended Support to December 2021.

Categories: APPS Blogs

#fakecode

Scott Spendolini - Thu, 2017-03-23 10:03
Unless you've managed to somehow filter out everything about US politics over the last few months (and if you have, please let me know your secret), then you've likely heard about "fake news".  From a high level, my basic understanding of "fake news" is that it refers to stories or websites that are fabricated to advance the political beliefs and/or ideologies of one side or the other.  Your definition may differ.

So what is fake code?  That, I can at least try to explain in a bit more detail.

The other day, I saw this image posted on Kris Rice's twitter feed:

[Image: the book cover from the tweet]

I thought it was a joke, but it's actually a real book. That made me laugh.  Then cry.  Then I read the book, mainly since it's only 4 pages.  Believe it or not,  there's actually some really good content packed in there.  Let me summarize:

If you choose to copy code from Stack Overflow, the OTN forum, or anywhere, really, there's a few things to keep in mind:


  1. Who owns the code.  It's more than likely perfectly legal to copy and use the code you find on these sites "as is", but keep in mind there may be exceptions to this rule.  The first page in the booklet offers some insight as to what they are.
  2. Who gets credit for the code.  While not required, it's a good idea to credit the author of the snippet that you used.  This is not just courteous, but also provides a reference back to the source of the code, so when it breaks, you know where to start looking.  Which brings me to the third and most important thing to consider:
  3. How good is the code. Unfortunately, there is no easy way to verify this.  You can look at the number of up votes for a specific response, but even that can be easily faked - or more likely, just be wrong because it is based on an older version.

The first two issues don't concern me all that much, as the first is rarely an issue and the second is simple to solve.  The third however, does concern me a lot.  FAKE code - where FAKE stands for Found Another Killer Example (yes, I made that up) - is fast becoming the way that we develop solutions.  Simply enter in some string of what you're trying to do into Google, and a set of possible solutions will magically appear on the page.

With very little effort, you can copy & paste that snippet into your application, run it, and if it works, then your job is done and you're a hero.  If it doesn't, some more searching and some more tinkering is in order.  Maybe it's a 15 minute task instead of a 5 minute one.  But that doesn't matter, as what you were asked to do is done, and you can move on to the next problem.

There are definitely some problems with this approach.  If you don't understand what the code is doing, who is going to fix it when it breaks?  And it will break at some point.  As other libraries or components get upgraded and/or browser versions change, code will eventually break - usually at the worst possible time.

If you get lucky, and it doesn't break, then who is going to change how it works when the users come up with new requirements?  If you don't have a full grasp as to what it does or how it does it, then you'll have little success in changing how it works.

And just because there's a solution, does that mean it is the best, most secure, most performant solution?  It may work fine for a single user in a development environment, but what happens when it's implemented in the real world?  This is especially a concern when you're implementing something in a language that you're not as proficient in, as you won't be able to readily spot bad practices.

In no way am I saying that any of these sites are bad, nor am I saying don't use them.  I use them all of the time to find solutions to problems and to contribute solutions, as do many others.  They are all valuable resources that make our jobs a lot easier to do.

What I do want to emphasize is that when you do use any site that presents a solution or code snippet, be sure to not only vet the solution, but also ensure that you completely understand how it works, and be ready to fix it if it breaks.  If you can't do either of these things easily, then perhaps it's best to find an alternate solution to your problem - one that you can understand and maintain.

FAKE code is a real danger to any development project.  Sprinkle enough of it around, and you're essentially creating a ticking time bomb that's just waiting to explode.  But FAKE code can be spotted and stopped fairly easily: document the source of any snippet or blog you use.  Take the time to learn what it does line-by-line, and document that, too.  Be ready to support it in case it breaks.  And if you're not comfortable doing so, have alternative resources lined up or at least identified.

Together, with a little bit of work, we can stop the FAKE code epidemic.

Replacing the “V” in Oracle ADF’s MVC design pattern with Oracle JET or other front end framework

This post was written by Fishbowl’s own John Sim – our resident Oracle User Experience expert. From front-end design to user journeys and persona mapping; John has helped numerous customers over 14 years enhance their desktop and mobile experiences with Oracle WebCenter. John is also an Oracle ACE, which recognizes leaders for their technical expertise and community evangelism.

One of our goals at Fishbowl is to continuously enhance and evolve the capabilities of WebCenter for both developers and clients with new tooling capabilities and pre-built custom components that are essential and not available today as part of the OOTB Oracle solution.

We have taken all of our collective knowledge and IP over the years since WebCenter PS3 and created the “Portal Solution Accelerator” previously known as “Intranet In A Box” that takes WebCenter Portal and it’s capabilities to the next level for creating Digital Workplace Portals.

Today I’m going to cover one of the benefits of using our Portal Solution Accelerator: replacing the “V” in ADF’s MVC design pattern. This enables third-party developers, web design agencies, and marketers (with basic web design skills) to use other libraries and front end frameworks of their choosing, such as Oracle JET, Angular, React, Vue, and Bootstrap – to name a few. By using a different front end library such as JET, you will be able to create more modern and dynamic responsive portals, widgets, and portlets with little to no experience of developing with ADF. You will also be able to leverage the benefits of the ADF Model and Controller and WebCenter’s Personalisation, Security, Caching and Mashup integration capabilities with other solutions like Oracle E-Business Suite (EBS) and Business Intelligence (BI) on the back end.

So, let’s take a closer look at the Portal Solution Accelerator in the following diagram. You can see it is made up of two core components – our back end PSA (Portal Solution Accelerator) component and our front end SPA (Single Page Application) component architecture. One of the things we decided early on was to separate the back end and front end architecture, allowing SPA front end components to be platform agnostic so they can work as Progressive Web Apps on platforms outside of Portal. This enables us to deploy SPA front end components directly onto BI – to provide additional charting capabilities through its narrative components – as well as onto EBS, SharePoint, and Liferay, and onto the cloud. This provides the potential for a hybrid on-premise Portal to Oracle Cloud (Site Cloud Service) Content Experience platform, enabling reuse of our portal components and security on the Cloud.

To find out more about our Portal Solution Accelerator head over to our website – https://www.fishbowlsolutions.com/services/oracle-webcenter-portal-consulting/portal-solution-accelerator/

Let’s take a quick dive into WebCenter Portal taskflows and our Single Page Application (SPA) architecture.

WebCenter Portal – allows you to create widgets (ADF taskflows) that can easily be dragged and dropped onto a page by a contributor and can work independently or alongside other taskflows. The interface (the View) is currently generated on the back end by Java processes and can easily be optimised to support adaptive applications. Be aware, however, that this model is very server-process intensive.

  • Pros
    • If you know ADF development it makes it extremely fast to create connected web applications using the ADF UI.
    • The ADF generated HTML/JS/CSS UI supports Mobile and desktop browsers.
    • The UI is generated by the application allowing developers to create applications without the need for designers to be involved.
  • Cons
    • If you don’t know ADF, or have a UI designed by a third party that does not align with ADF’s UI capabilities, it can be very challenging to create complex UIs using ADF tags, ADF skins, and ADF’s Javascript framework.
    • Mixing and matching open source libraries such as jQuery or Bootstrap with ADF tags is bad practice, as they are not supported by Oracle alongside ADF. This limits the reuse of the wealth of available open source for creating dynamic, interactive components and interfaces such as carousels.
    • ADF can also be very hard to brand, and it is very server-process intensive.

Single Page Applications – are essentially browser-generated applications that use Javascript and AJAX to quickly and easily update and populate the user interface, creating fluid and responsive web apps. Instead of the server processing and managing the DOM that is generated and sent to the client, the client’s browser processes, generates, and caches the UI on the fly.

  • Pros
    • All modern front end frameworks allow you to create Single Page Applications and tie into lots of open source front end solutions and interfaces.
  • Cons
    • It can be hard to create modular, isomorphic (universal) JS applications.
    • You also need to test across all the browsers and devices your application is expected to support.
    • The front end application can get very large if not managed correctly.

The Portal Solution Accelerator.

What we have done with PSA is create a framework that provides the best of both worlds, allowing you to create modular Single Page Application taskflows that can be dragged and dropped onto a WebCenter Portal page. This allows your web design teams and agencies to manage and develop the front end quickly and effectively with any framework and standard HTML5, CSS, and Javascript. You can also use Groovy scripts or Javascript (via Oracle Nashorn) on the server side to create isomorphic Javascript taskflow applications.

Please note – you cannot create a taskflow that leverages both ADF’s View layer and our framework together. You can, however, create one taskflow that is pure ADF and drop it on the same page as a taskflow built with a custom front end such as Angular, using our Portal Solution Accelerator View to replace the ADF View. This enables you to use existing OOTB WebCenter Portal taskflows and have them work in conjunction with custom-built components.

How Does it work?

Within WebCenter Portal, in the composer panel where you drag and drop taskflows onto a page, there is a custom taskflow – Fishbowl Single Page Application.

Drop this onto the page and manage its parameters. Here is a quick screenshot of a sample taskflow component for loading in Recent News items.

The Template parameter points to the custom SPA front end Javascript component you would like to load and inject into the taskflow. You can define custom parameters to pass to this component, and these parameters can be dynamic ADF variables via the template parameter panel. The SPA component then handles the magic of loading the template, events, JS libraries, CSS, and images generated from within the taskflow.

Within the SPA API there are custom methods we have created that allow you to pass AJAX JSON calls to the ADF back end Groovy or Javascript code, enabling the app to work with and communicate with other services or databases.

ADF Lifecycle… Timeouts.

One of the things that often comes up when we present our solution to others who have attempted to integrate JET applications with WebCenter Portal is how to manage the lifecycle and prevent ADF timeouts. For example, if you stay on the same WebCenter Portal page for some time working in a single page application, you will get a popup saying you will be automatically logged out. Remember, our Portal Solution Accelerator is a taskflow. We use a similar ADF message queue to pass JSON updates to the ADF lifecycle while a user is working on a complex modular single page application, so we don’t run into timeout issues.

Getting out of deployment hell (as well)!!!

One of the downsides of ADF development is having to build your ADF application, deploy it, and stop and start the server to test it, only to find there is a bug that needs to be fixed. Then you go through the entire process again. Trust me – it is not quick!

Once you have our framework deployed, you can easily deploy or upload standard Javascript templates, CSS, and Groovy scripts to Apache or OHS, where they are automatically consumed by our ADF taskflow. There is no stop-start-test cycle. Just upload your updates and refresh the browser!

I hear Oracle is working to integrate JET with ADF.

Yes, but it’s not there today.
Plus, you’re not limited to JET with our framework. You can use React or any front end framework or library, and you get the benefits of all the additional components, apps, and tooling that the Portal Solution Accelerator provides.

Futures

Our next key release will fully support Progressive Web Application taskflow development. To find out more about what a progressive web app is, head over to Google – https://developers.google.com/web/progressive-web-apps/checklist

The post Replacing the “V” in Oracle ADF’s MVC design pattern with Oracle JET or other front end framework appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Partner Webcast – Cloud@Customer: Partner opportunities with Oracle Cloud Machines

Customers have a clear understanding of the value Cloud can bring to their business. They see the potential for speeding new applications to market.  They want the agility and flexibility that...

We share our skills to maximize your revenue!
Categories: DBA Blogs

min/max Upgrade

Jonathan Lewis - Thu, 2017-03-23 03:53

A question came up on the OTN database forum a little while ago about a very simple query that was taking different execution paths on two databases with the same table and index definitions and similar data. In one database the plan used the “index full scan (min/max)” operation while the other database used a brute force “index fast full scan” operation.

In most circumstances the starting point to address a question like this is to check whether some configuration details, or some statistics, or the values used in the query are sufficiently different to result in a significant change in costs; and the first simple procedure you can follow is to hint each database to use the plan from the opposite database to see if this produces any clues about the difference – it’s a good idea when doing this test to use one of the more verbose formatting options for the call to dbms_xplan.
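As a rough sketch of that cross-check (the table t, index t_idx, column names, and hint here are illustrative placeholders, not from the original case):

-- Force the plan seen on the other database, then display it with a verbose format.
select /*+ index_ffs(t t_idx) */ min(col1) from t where col2 = :b1;
select * from table(dbms_xplan.display_cursor(format => 'ALL +OUTLINE'));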

In this case, though, the OP discovered a note on MoS reporting exactly the problem he was seeing:

Doc ID 2144428.1: Optimizer Picking Wrong ‘INDEX FAST FULL SCAN’ Plan vs Correct ‘INDEX FULL SCAN (MIN/MAX)’

which referred to

Bug 22662807: OPTIMIZER PICKING INDEX FFS CAN INSTEAD OF MIN/MAX

Conveniently the document suggested a few workarounds:

  • alter session set optimizer_features_enable = '11.2.0.3';
  • alter session set "_fix_control" = '13430622:off';
  • delete object stats [Ed: so that dynamic sampling takes place … maybe a /*+ dynamic_sampling(alias level) */ hint would suffice].

Of the three options my preference would (at least in the short term) be the _fix_control one. Specifically, from the v$system_fix_control view, we can see that it addresses the problem very precisely with the description: “index min/max cardinality estimate fix for filter predicates”.
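Checking and applying that workaround is straightforward (a minimal sketch, using the view and parameter named above):

-- Is fix 13430622 installed, and is it currently enabled (value = 1)?
select bugno, value, description
from   v$system_fix_control
where  bugno = 13430622;

-- Disable just that one fix for the current session.
alter session set "_fix_control" = '13430622:off';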

The example in the bug note showed a very simple statement (even more simple than the OP’s query which was only a single table query anyway), so I thought I’d build a model and run a few tests to see what was going on. Luckily, before I’d started work, one of the other members of the Oak Table network sent an email to the list asking if anyone knew how the optimizer was costing an example he’d constructed – and I’ve finally got around to looking at his example, and here’s the model and answer(s), starting with the data set:


rem
rem     Script:         test_min_max.sql
rem     Dated:          March 2017
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem             11.2.0.3
rem

create table min_max_test nologging
as
with ids as (
        select /*+ Materialize */ rownum  id from dual connect by rownum <= 50000 -- > comment to protect formatting
),
line_nrs as (
        select /*+ Materialize */  rownum line_nr from dual connect by rownum <= 20 -- > comment to protect formatting
)
select
        id, line_nr ,rpad(' ', 800, '*') data
from
        line_nrs, ids
order by
        line_nr, id
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'min_max_test',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index mmt_ln_id on min_max_test (line_nr, id) nologging;
create index mmt_id    on min_max_test (id)          nologging;

The table has two critical columns: each id has 20 line_nr values associated with it, but the way the data was generated means that the line numbers for a given id are scattered across 20 separate table blocks.

There are two indexes – one on the id which will allow us to find all the rows for a given id as efficiently as possible, and one (slightly odd-looking in this context) that would allow us to find a specific row for a given line_nr and id very efficiently. Two things about these indexes – in a live application they should both be compressed on the first (only, in the case of index mmt_id) column, and secondly the necessity of the mmt_id index is questionable and it might be an index you could drop if you reversed the order of the columns in mmt_ln_id. The thing about these indexes, though, is that they allow us to demonstrate a problem. So let’s query the data – twice, hinting each index in turn:


set serveroutput off

select
        /*+ index(t(id)) */
        min(line_nr)
from
        min_max_test t
where
        id = :b1
;

select * from table(dbms_xplan.display_cursor);

select
        /*+ index(t(line_nr, id)) */
        min(line_nr)
from
        min_max_test t
where
        id = :b1
;

select * from table(dbms_xplan.display_cursor);

It’s fairly safe to make a prediction about the execution plan and cost of the first query – it’s likely to be a range scan that accesses a couple of branch blocks, a leaf block and 20 separate table blocks followed by a “sort aggregate” – with a cost of about 23.

It’s a little harder to make a prediction about the second query. The optimizer could infer that the min(line_nr) has to be close to the left hand section of the index, and could note that the number of rows in the table is the same as the product of the number of distinct values of the two separate columns, and it might note that the id column is evenly distributed (no histogram) across the data, so it might “guess” that it need only range scan all the entries for the first line_nr to find the appropriate id. So perhaps the optimizer will use the index min/max range scan with a cost that is roughly 2 branch blocks plus total leaf blocks / 20 (since there are 20 distinct values for line_nr); maybe it would divide the leaf block estimate by two because “on average” – i.e. for repeated random selections of value for id – it would have to scan half the leaf blocks. There were 2,618 leaf blocks in my index, so the cost should be close to either 133 or 68.
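Those two candidate costs are easy to reproduce from the figures just quoted:

select
        2 + ceil(2618/20) all_of_line_nr_1,     -- 133: blevel + all the leaf blocks for one line_nr
        2 + ceil(2618/40) half_of_line_nr_1     -- 68:  blevel + half of them, for the average id
from
        dual;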

Here are the two plans – range scan first, min/max second:


select  /*+ index(t(id)) */  min(line_nr) from  min_max_test t where id = :b1
-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |    23 (100)|          |
|   1 |  SORT AGGREGATE                      |              |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MIN_MAX_TEST |    20 |   160 |    23   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | MMT_ID       |    20 |       |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID"=:B1)


select  /*+ index(t(line_nr, id)) */  min(line_nr) from  min_max_test t where  id = :b1
-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |    22 (100)|          |
|   1 |  SORT AGGREGATE             |           |     1 |     8 |            |          |
|   2 |   FIRST ROW                 |           |     1 |     8 |    22   (0)| 00:00:01 |
|*  3 |    INDEX FULL SCAN (MIN/MAX)| MMT_LN_ID |     1 |     8 |    22   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ID"=:B1)

Spot on with the estimate for the simple range scan – but what did we do wrong with the estimate for the min/max scan? You might notice in the first example the “table access by rowid batched” and realise that this is running on 12c. Here’s the plan I get if I set the optimizer_features_enable back to 11.2.0.3 before running the second query again:


select  /*+ index(t(line_nr, id)) */  min(line_nr) from  min_max_test t where  id = :b1
-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |   136 (100)|          |
|   1 |  SORT AGGREGATE             |           |     1 |     8 |            |          |
|   2 |   FIRST ROW                 |           |     1 |     8 |   136   (1)| 00:00:01 |
|*  3 |    INDEX FULL SCAN (MIN/MAX)| MMT_LN_ID |     1 |     8 |   136   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ID"=:B1)

Using the 11.2.0.3 optimizer model the plan has a cost that’s very close to our prediction – we’ll see why there’s a slight difference in a moment. If we set the optimizer_features_enable to 11.2.0.4 the cost drops back to 22. So for our example 11.2.0.3 will use the simple “index range scan” and an upgrade to 11.2.0.4 (or higher) will switch to the “index full scan (min/max)”. If you look at the OTN posting the impact of the change in costing is exactly the other way around – 11.2.0.3 uses the min/max path, 11.2.0.4 uses the simple index range scan.

The techy bit

You really don’t need to know this – experimenting with the optimizer_features_enable (or _fix_control) will give you plans that show you all the numbers you need to see to check whether or not you’ve run into this particular problem – but if you’re interested here’s a little bit from the two 10053 trace files. We need only look at a few critical lines. From the 11.2.0.3 costing for the min/max scan:


Index Stats::
  Index: MMT_ID  Col#: 1
  LVLS: 2  #LB: 2202  #DK: 50000  LB/K: 1.00  DB/K: 20.00  CLUF: 1000000.00  NRW: 1000000.00
  Index: MMT_LN_ID  Col#: 2 1
  LVLS: 2  #LB: 2618  #DK: 1000000  LB/K: 1.00  DB/K: 1.00  CLUF: 125000.00  NRW: 1000000.00

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MIN_MAX_TEST[T]
  Column (#1): ID(NUMBER)
    AvgLen: 5 NDV: 50536 Nulls: 0 Density: 0.000020 Min: 1.000000 Max: 50000.000000
  Table: MIN_MAX_TEST  Alias: T
    Card: Original: 1000000.000000  Rounded: 20  Computed: 19.787874  Non Adjusted: 19.787874

 ****** Costing Index MMT_LN_ID
  Access Path: index (Min/Max)
    Index: MMT_LN_ID
    resc_io: 135.000000  resc_cpu: 961594
    ix_sel: 1.000000  ix_sel_with_filters: 1.9788e-05
    Cost: 135.697679  Resp: 135.697679  Degree: 1

I was running 12.1.0.2 so there were a few extra bits and pieces that I’ve deleted (mostly about SQL Plan Directives and in-memory). Critically we can see that the stats collection has a small error for the ID column – 50,536 distinct values (NDV) instead of exactly 50,000. This seems to have given us a cost for the expected index range scan of: 2 (blevel) + ceiling(2618 (leaf blocks) * 50536 / 1000000) = 2 + ceil(132.3) = 135, to which we add a bit for the CPU and get to 136. (Q.E.D.)
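That arithmetic can be checked directly against the trace figures:

select
        2 + ceil(2618 * 50536 / 1000000) resc_io        -- 135; a little CPU on top gives the 136 reported
from
        dual;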

Then we switch to costing for 11.2.0.4:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MIN_MAX_TEST[T]
  Column (#1): ID(NUMBER)
    AvgLen: 5 NDV: 50536 Nulls: 0 Density: 0.000020 Min: 1.000000 Max: 50000.000000
  Table: MIN_MAX_TEST  Alias: T
    Card: Original: 1000000.000000  Rounded: 20  Computed: 19.787874  Non Adjusted: 19.787874

 ****** Costing Index MMT_LN_ID
  Access Path: index (Min/Max)
    Index: MMT_LN_ID
    resc_io: 21.787874  resc_cpu: 156872
    ix_sel: 1.000000  ix_sel_with_filters: 1.9788e-05
    Cost: 22.324608  Resp: 22.324608  Degree: 1

We still have the small error in the number of distinct values for id, so the estimated number of rows that we need to access from the table for a given id (before “aggregating” to find its minimum line_nr) is 19.787874 (Computed: / Non Adjusted:) rather than exactly 20. Notice, then, that the cost of using the index is 19.787874 + 2 which looks suspiciously like adding the blevel to the number of table blocks to get a cost and forgetting that we might have to kiss a lot of frogs before we find the prince. Basically, in this example at least, it looks like the costing algorithm has NOTHING to do with the mechanics of what actually has to happen at run-time.

Footnote

This is only an initial probe into what’s going on with the min/max scan; there are plenty more patterns of data that would need to be tested before we could have any confidence that we had produced a generic model of how the optimizer does its calculations – the only thing to note so far is that there IS a big change as  you move from 11.2.0.3 to later versions: the case on OTN showed the min/max scan disappearing on the upgrade, the example above shows the min/max disappearing on the downgrade – either change could be bad news for parts of a production system.

There are a couple of related bugs that might also be worth reviewing.

  • Bug 11834402 : CBO CHOOSES A SLOW INDEX FULL SCAN OVER A MUCH FASTER INDEX RANGE SCAN
  • Bug 13430622 : INDEX SCAN IN VERY SLOW FOR ONE PREDICATE AND FAST FOR OTHERS

There is a note, though, that this last bug was fixed in 12.1.

Footnote 2

When experimenting, one idea to pursue as the models get more complex and you’re using indexes with more than two columns is to test whether the presence of carefully chosen column group statistics might make a difference to the optimizer’s estimates of cardinality (hence cost) of the min/max scan.

Index on Column having special characters

Tom Kyte - Thu, 2017-03-23 03:06
HI Team, Consider below table structure - tb_contact_details ------------------ sequenceno mobileno request_date ---------- -------- ------------ 100001 +9198989 11/12/2001 100002 +9198989 11/12/2002 100003 +9191258 11/12/200...
Categories: DBA Blogs

Subscribe to Oracle FAQ aggregator