Feed aggregator

[BLOG] Art of Oracle Database Performance Tuning: Things You Must Know

Online Apps DBA - Mon, 2018-11-19 04:29

Do you want to learn how to improve and multiply the performance of a database significantly? If yes, then visit https://k21academy.com/tuning13 where we have covered: ✔ Hardware & DBMS Tuning ✔ Query Optimisation ✔ Process of Performance Tuning & much more… […]

The post [BLOG] Art of Oracle Database Performance Tuning: Things You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Is there too much memory for my SQL Server instance?

Yann Neuhaus - Mon, 2018-11-19 02:56

Is there too much memory for my SQL Server instance? This is definitely an uncommon question I had to deal with for one of my customers a couple of weeks ago. Usually DBAs complain when they don’t have enough memory for the environments they have to manage, and SQL Server (like other RDBMSs) provides plenty of tools for troubleshooting memory pressure. But what about the opposite? The question arose in an environment that includes a lot of virtual database servers (> 100) on top of VMware, where my customer was asked to lower the SQL Server instance memory reservations where possible in order to free memory on the ESX hosts.


Let’s start with sys.dm_os_sys_memory, the first one my customer wanted to dig into. This DMV helps to get a picture of the overall system state, including external memory conditions at the operating system level and the physical limits of the underlying hardware.

select 
	available_physical_memory_kb / 1024 AS available_physical_memory_MB,
	total_physical_memory_kb / 1024 AS total_physical_memory_MB,
	total_page_file_kb / 1024 AS total_page_file_MB,
	available_page_file_kb / 1024 AS available_page_file_MB,
	system_cache_kb / 1024 AS system_cache_MB
from sys.dm_os_sys_memory;

 

[Output of sys.dm_os_sys_memory]

 

But in the context of my customer it was only partially helpful for figuring out the memory consumption of the SQL Server instances, because we weren’t really facing any environment under memory pressure here.

However, another interesting DMV we may rely on is sys.dm_os_sys_info, queried below; its committed_kb and committed_target_kb columns also have perfmon counterparts, \Memory Manager\Total Server Memory (KB) and \Memory Manager\Target Server Memory (KB):

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_kb,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

 

[Output of sys.dm_os_sys_info]

The concepts of committed and target committed memory are important here to understand how SQL Server deals with its memory space. The committed memory represents the physical memory actually allocated by the SQL Server process, whereas the target memory is the amount of memory SQL Server tries to maintain as committed, based on the different factors described in the BOL. From my experience, in most scenarios the latter ends up close to the max server memory value.
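For reference, those two perfmon counters can also be read directly from T-SQL; here is a minimal sketch using sys.dm_os_performance_counters (the LIKE on object_name simply covers both default and named instances):

SELECT counter_name,
       cntr_value / 1024 AS value_MB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');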

But relying blindly on the committed memory may lead to misinterpreting what SQL Server is really consuming over a specific period of time. Let’s say my SQL Server instance is capped at 2GB; here are the corresponding figures after the daily business workload. The values in the context of my customer were of a different order of magnitude, but this demo will help illustrate the issue that motivated this write-up:

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_kb,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

 

[Output of sys.dm_os_sys_info after the daily workload]

The committed memory is about 365MB, far below the configured max server memory value of 2GB. But now let’s see what happens when the database maintenance kicks in. Usually this is a nightly job, run daily or weekly, that includes an index rebuild task which reads pretty much all the data structures to get external fragmentation values through the DMF sys.dm_db_index_physical_stats(). This operation can touch structures that are not used during daily business and may have a huge impact on the buffer pool. In my case, here is the new memory state after executing this maintenance task:

[Output of sys.dm_os_sys_info after the maintenance task]

The game has changed here because SQL Server has committed memory all the way up to the max server memory value. This time we may go through the sys.dm_os_memory_clerks DMV to get details about the different memory clerks of my SQL Server instance. The pages_kb column is used because the instances run SQL Server 2014.

SELECT
	[type],
	pages_kb / 1024 AS size_MB
FROM sys.dm_os_memory_clerks
WHERE memory_node_id = 0
ORDER BY size_MB DESC

 

[Output of sys.dm_os_memory_clerks]

So, from now on the committed memory has a good chance of staying close to the max server memory value, while in fact the daily business workload probably won’t need all of this memory allocated to the SQL Server process. This is exactly why my customer asked me for a way to get a more realistic picture of the memory consumption of its SQL Server instances during daily business, excluding the nightly database maintenance workload.

We went with a solution that consisted of freeing up the committed memory before the business day starts and letting the memory grow gradually until it reaches its real maximum usage. It is worth noting that, as far as I know, there is no easy way to free up the committed memory: SQL Server will decrease it only if the corresponding target server memory value becomes lower. From my experience this situation is more the exception than the rule, so it is difficult to rely on it. One potential workaround might be to restart the SQL Server instance(s), but in the case of my customer restarting the database servers was not an option, so we looked into forcing SQL Server to make room by setting the max server memory close to the min server memory value. Don’t get me wrong, I don’t consider this a best practice but rather an emergency procedure, because like restarting a SQL Server instance it may have a temporary but significant impact, especially when workload performance is directly tied to the state of the buffer cache (warm vs cold). In addition, I would say that sizing the max server memory value on the daily business workload alone may be controversial in many ways, and in fact we have to consider some tradeoffs here. In the context of my customer, the main goal was to release “unused” memory from the SQL Server instances during daily business to free up memory on the VMware ESX hosts, but there is no free lunch. For instance, the nightly workload may suddenly take longer to run if there is less room to work in memory. Another direct side effect of working with less memory might be an increase in I/O operations against the storage layer. In a nutshell, there is no black or white solution and we have to go with what we consider the best solution for the specific context.
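For completeness, here is a minimal sketch of that emergency procedure for the 2GB demo instance; the 1024 MB intermediate value is only a placeholder and the original max server memory must be put back afterwards:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- temporarily lower the target so SQL Server shrinks its committed memory
EXEC sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;
-- ... wait for the memory to be released, then restore the original value
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;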

See you!

 

 

 

The article Is there too much memory for my SQL Server instance? first appeared on Blog dbi services.

Bulgarian Oracle User Group (BGOUG) 2018 : The Journey Home

Tim Hall - Mon, 2018-11-19 00:12

It was a 03:00 start, which is never a good thing. I got down to reception to meet my fellow travellers and we started on our trip to the airport. As we walked out of the hotel we were greeted by a light scattering of snow. It was clearly visible on some of the mountains the day before, but it was quite a surprise to see it here, especially as I left my balcony door open for the whole of my stay…

The drive to the airport was quick, as there was very little traffic. The baggage drop and check-in queue for Lufthansa was pretty large, but fortunately I had checked in online and I was hand-luggage only, so I walked straight to, and through, security. That left me with over an hour before the flight.

The flight from Sofia to Frankfurt was pretty easy. I had an empty seat next to me, so I got the laptop out and started to write two presentations I’ve got to give at work.

I was expecting the layover in Frankfurt to be about 70 minutes, but it turned out it was nearly 5 hours, because I didn’t read the itinerary properly, so I logged into work and cleared down all the crap that had collected during the two days I was away.

The flight from Frankfurt to Birmingham was about an hour and went pretty smoothly. Once again I had an empty seat next to me, so happy days!

Getting through security was pretty quick, then I was in the bounciest taxi ride ever to get home, and with that my last international conference of the year was complete.

As followers of the blog will know, this year has been problematic for me from a conference perspective. It’s especially disappointing when my travelling curse hits my favourite conference of the year.

Thanks to everyone from BGOUG for letting me come for the 8th time. Thanks to the people who came to my sessions. The turnout was great, and it certainly lifted my spirits! Sorry I wasn’t able to get more involved on the first day, but at least everything went well on the second day. See you again soon!

Cheers

Tim…

PS. Here are the other posts from this trip.

 

Bulgarian Oracle User Group (BGOUG) 2018 : The Journey Home was first posted on November 19, 2018 at 7:12 am.

removing control characters from text

Tom Kyte - Sun, 2018-11-18 22:06
Is there a routine in Oracle that can test for and remove unwanted characters in a text string, ie control characters?
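(The AskTOM answer is not included in this excerpt. As a hedged sketch, one common approach is REGEXP_REPLACE with the POSIX control-character class; my_table and my_column are placeholder names:)

select regexp_replace(my_column, '[[:cntrl:]]') as cleaned
from   my_table;

-- or, for a known, fixed set of characters, TRANSLATE is usually cheaper:
select translate(my_column, 'x' || chr(9) || chr(10) || chr(13), 'x') as cleaned
from   my_table;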
Categories: DBA Blogs

Presenting at AUSOUG Connect 2018 Conference in Melbourne, 21 November 2018.

Richard Foote - Sun, 2018-11-18 06:25
  After initially not being in a position to make it this year, I will now be presenting at the AUSOUG Connect 2018 Conference in Melbourne this coming Wednesday, 21 November 2018. My presentation will be: “12c Release 2 and 18c – New Indexing Related Features” Oracle Database 12.2 and 18.3 releases have introduced a number […]
Categories: DBA Blogs

Exadata Cloud Machine - Hardware capacity

Syed Jaffar - Sun, 2018-11-18 04:13
Everyone who knows and uses Exadata Database Machine certainly knows the performance it can deliver. I have been involved in many Exadata migration projects and witnessed how much database performance customers gained, and how satisfied they were after migration. I am not talking about the cost, the need, etc.; as a technical guy, I know the capabilities of the box and how it can help customers meet their current needs and future demand.

We all know about cloud technologies and how every software company and organization is trying to keep up with the trend and the demand for them. In some countries cloud adoption is a bit slower compared to other parts of the world, but gradually the majority of companies will adopt cloud technologies, that is for sure. Certainly, the cloud has its own share of advantages and disadvantages; whoever uses it smartly can gain a lot of flexibility and benefits.

To meet customer demand for Exadata in the cloud, Oracle started the Exadata Cloud Service offering, making Exadata machines available in the cloud. Still, some organizations cannot adopt the public cloud due to industry regulations, corporate policies, security compliance, etc. Therefore, Oracle announced the availability of the Exadata Cloud Machine. With this model, customers who want an on-premises cloud backed by Exadata hardware have an option.

I would like to highlight the hardware capabilities that Exadata Cloud Machine (ExaCM) offers.


  • 40Gbps InfiniBand Networking
  • Ultra-fast NVMe Flash storage
  • Up to 257GB/sec Throughput
  • Up to 3.6 Million 8k I/Os per sec
  • 1/4 millisecond response time
  • Fastest Compute 
  • Fastest x86 processors
  • Large Memory Capacity - 720GB per compute node
  • Complete Redundancy
I will soon talk more about Exadata Cloud Machine migrations. Stay tuned and stay hungry for knowledge.

Polymorphic Table Functions

Bar Solutions - Sun, 2018-11-18 00:04

I have been working on a presentation on Polymorphic Table Functions. During this time I was looking for a real use case for Polymorphic Table Functions. I came up with an example which is not very useful in real life, but very useful to explain the technique.
At my current job I came across a piece of code that I had to copy and adjust to fit the needs of a specific case. The idea was always the same: I get a table with semicolon-separated values in one column that have to be split into the correct number of columns before comparing the data to the current data in a specific table.
I thought: ‘Maybe I can solve this copy-paste-adjust process by using a Polymorphic Table Function.’

Let’s first set the current scene.
We have two tables. The well known EMP and DEPT tables.

create table emp
(empno    number(4) not null
,ename    varchar2(10)
,job      varchar2(9)
,mgr      number(4)
,hiredate date
,sal      number(7, 2)
,comm     number(7, 2)
,deptno   number(2)
)
/
create table dept
(deptno number(2)
,dname  varchar2(14)
,loc    varchar2(14)
)
/        

And we add the well known data:

insert into emp values (7369, 'SMITH',  'CLERK',     7902, to_date('17-DEC-1980', 'DD-MON-YYYY'),  800, null, 20);
insert into emp values (7499, 'ALLEN',  'SALESMAN',  7698, to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);
insert into emp values (7521, 'WARD',   'SALESMAN',  7698, to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);
insert into emp values (7566, 'JONES',  'MANAGER',   7839, to_date('2-APR-1981', 'DD-MON-YYYY'),  2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN',  7698, to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE',  'MANAGER',   7839, to_date('1-MAY-1981', 'DD-MON-YYYY'),  2850, null, 30);
insert into emp values (7782, 'CLARK',  'MANAGER',   7839, to_date('9-JUN-1981', 'DD-MON-YYYY'),  2450, null, 10);
insert into emp values (7788, 'SCOTT',  'ANALYST',   7566, to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING',   'PRESIDENT', null, to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN',  7698, to_date('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);
insert into emp values (7876, 'ADAMS',  'CLERK',     7788, to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES',  'CLERK',     7698, to_date('3-DEC-1981', 'DD-MON-YYYY'),   950, null, 30);
insert into emp values (7902, 'FORD',   'ANALYST',   7566, to_date('3-DEC-1981', 'DD-MON-YYYY'),  3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK',     7782, to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, null, 10);
insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH',   'DALLAS');
insert into dept values (30, 'SALES',      'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

We get data from a different system, which is in a semi-colon separated format. So we load that in a couple of staging tables:

create table empstg
(line varchar2(4000)
)
/
create table deptstg
(line varchar2(4000)
)
/

And then we add some data:

insert into empstg values ('7369;SMITH;CLERK;7902;17121980;800; ;20');
insert into empstg values ('7499;ALLEN;SALESMAN;7698;20021981;1600; 300;30');
insert into empstg values ('7521;WARD;SALESMAN;7698;22021981;1250; 500;30');
insert into empstg values ('7566;JONES;MANAGER;7839;02041981; 2975; ;20');
insert into empstg values ('7654;MARTIN;SALESMAN;7698;28091981;1250;1400;30');
insert into empstg values ('7698;BLAKE;MANAGER;7839;01051981; 2850; ;30');
insert into empstg values ('7782;CLARK;MANAGER;7839;09061981; 2450; ;10');
insert into empstg values ('7788;SCOTT;ANALYST;7566;09121982;3000; ;20');
insert into empstg values ('7839;KING;PRESIDENT; ;17111981;5000; ;10');
insert into empstg values ('7844;TURNER;SALESMAN;7698;08091981; 1500;0;30');
insert into empstg values ('7876;ADAMS;CLERK;7788;12011983;1100; ;20');
insert into empstg values ('7900;JAMES;CLERK;7698;03121981;  950; ;30');
insert into empstg values ('7902;FORD;ANALYST;7566;03121981; 3000; ;20');
insert into empstg values ('7934;MILLER;CLERK;7782;23011982;1300; ;10');
insert into empstg values ('2912;BAREL;DEVELOPER;7839;29122017;4000; ;50');

insert into deptstg values ('10;ACCOUNTING;NEW YORK');
insert into deptstg values ('20;RESEARCH;DALLAS');
insert into deptstg values ('30;SALES;NORTH CAROLINA');
insert into deptstg values ('40;OPERATIONS;BOSTON');
insert into deptstg values ('50;DEVELOPMENT;SAN FRANCISCO');

To process the data and merge it into the main tables we use a package. We could have used a merge statement, but that would update even the rows that are unchanged, which results in a lot of journalling data being written by triggers. With the EMP and DEPT tables this wouldn’t be too much of a problem, but we are talking about 250k+ rows each time (at least once a day).
So we want a little more control and only insert/update when it’s really necessary.

create or replace package process_stg is
  procedure dept;
  
  procedure emp;
end process_stg;
/
create or replace package body process_stg is
  failure_in_forall exception;
  pragma exception_init(failure_in_forall, -24381);
  c_limit     constant number := 10;
  procedure dept is
    cursor c_inserts is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
          from deptstg stg)
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        left outer join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and d.deptno is null;
    cursor c_updates is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
          from deptstg stg)
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and (   coalesce(d.dname, '-NULL-') <> coalesce(i.dname, '-NULL-')
              or coalesce(d.loc,   '-NULL-') <> coalesce(i.loc,   '-NULL-')
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into dept(deptno, dname, loc)
            values (l_data(indx).deptno, l_data(indx).dname, l_data(indx).loc);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update dept
               set dname = l_data(indx).dname
                 , loc = l_data(indx).loc
             where 1=1
               and deptno = l_data(indx).deptno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end dept;

  procedure emp is
    cursor c_inserts is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) empno   
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename   
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) job     
              ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr     
              ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
              ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal     
              ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm    
              ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno  
          from empstg stg)
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and e.empno is null;
    cursor c_updates is
      with import as
       (select trim(regexp_substr(line, '[^;]+', 1, 1)) empno   
              ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename   
              ,trim(regexp_substr(line, '[^;]+', 1, 3)) job     
              ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr     
              ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
              ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal     
              ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm    
              ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno  
          from empstg stg)
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and (   coalesce(e.ename,    '-NULL') <> coalesce(i.ename,    '-NULL')
              or coalesce(e.job,      '-NULL') <> coalesce(i.job,      '-NULL')
              or          e.mgr                <>          i.mgr               
              or coalesce(to_char(e.hiredate,'DDMMYYYY'), '-NULL') <> coalesce(i.hiredate, '-NULL')
              or          e.sal                <>          i.sal               
              or          e.comm               <>          i.comm               
              or          e.deptno             <>          i.deptno            
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into emp(empno, ename, job, mgr, hiredate, sal, comm, deptno)
            values (l_data(indx).empno, l_data(indx).ename, l_data(indx).job, l_data(indx).mgr, to_date(l_data(indx).hiredate, 'DDMMYYYY'), l_data(indx).sal, trim(l_data(indx).comm), l_data(indx).deptno);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update emp
              set ename       = l_data(indx).ename    
                , job         = l_data(indx).job      
                , mgr         = l_data(indx).mgr      
                , hiredate    = to_date(l_data(indx).hiredate, 'DDMMYYYY') 
                , sal         = l_data(indx).sal      
                , comm        = l_data(indx).comm     
                , deptno      = l_data(indx).deptno   
            where 1=1
              and empno       = l_data(indx).empno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end emp;
end process_stg;
/

As you can see, the code, especially for the cursors, is pretty much the same. The only difference is the number of columns that are generated from the semicolon-separated line.
I really don’t like doing the same thing over and over again, especially when the only difference is the number of columns and their names. But since this is exactly what changes between the tables, I think there is no way of making this generic in 12c or earlier. Then 18c came along and provided us with Polymorphic Table Functions.

This is what the documentation says (summary):
Polymorphic Table Functions
Polymorphic Table Functions (PTF) are user-defined functions that can be invoked in the FROM clause.
They are capable of processing tables whose row type is not declared at definition time and producing a
result table whose row type may or may not be declared at definition time. Polymorphic Table Functions
allow application developers to leverage the long-defined dynamic SQL capabilities to create powerful
and complex custom functions.

In my own words: Call a function, supplying a table and get a set of columns back. You can supply the names (and number) of columns as a parameter. Also, these columns don’t have to exist in the table, you can create them on the fly. That is exactly what I need. I have different tables with pretty much the same layout but the results I need are completely different.
So I came up with the following Polymorphic Table Function to do what I want. First there is the specification of the package. What I need is the DESCRIBE function (which is mandatory) and a procedure to fetch the rows, where I can alter the results.

create or replace package separated_ptf is
  function describe(tab  in out dbms_tf.table_t
                   ,cols in dbms_tf.columns_t default null) return dbms_tf.describe_t;

  procedure fetch_rows;
end separated_ptf;
/

Then there is the implementation of the package:

create or replace package body separated_ptf as
  g_colcount pls_integer; -- save the number of columns requested
  g_colname  varchar2(128); -- save the name of the first column
  function describe(tab  in out dbms_tf.table_t
                   ,cols in dbms_tf.columns_t default null) return dbms_tf.describe_t as
    -- metadata for column to add
    l_new_col dbms_tf.column_metadata_t;
    -- table of columns to add
    l_new_cols dbms_tf.columns_new_t; -- := DBMS_TF.COLUMNS_NEW_T();
  begin
    -- Mark the first column ReadOnly and don't display it anymore
    tab.column(1).for_read := true;
    tab.column(1).pass_through := false;
    -- Save the name of the first column for use in the fetch_rows procedure
    g_colname := tab.column(1).description.name;
    -- Save the number of columns for use in the fetch_rows procedure
    g_colcount := cols.count;
    -- Add the new columns, as specified in the cols parameter
    for indx in 1 .. cols.count loop
      -- define metadata for column named cols(indx)
      -- that will default to a datatype of varchar2 with
      -- a length of 4000
      l_new_col := dbms_tf.column_metadata_t(name => cols(indx));
      -- add the new column to the list of new columns
      l_new_cols(l_new_cols.count + 1) := l_new_col;
    end loop;
    -- Instead of returning NULL we will RETURN a specific
    -- DESCRIBE_T that adds new columns
    return dbms_tf.describe_t(new_columns => l_new_cols);
  end;

  procedure fetch_rows is
    -- define a table type of varchar2 tables
    type colset is table of dbms_tf.tab_varchar2_t index by pls_integer;
    -- variable to hold the rowset as retrieved
    l_rowset   dbms_tf.row_set_t;
    -- variable to hold the number of rows as retrieved
    l_rowcount pls_integer;
    -- variable to hold the new values
    l_newcolset colset;
  begin
    -- fetch rows into a local rowset
    -- at this point the rows will have columns
    -- from the table/view/query passed in
    dbms_tf.get_row_set(l_rowset, l_rowcount);
    -- for every row in the rowset...
    for rowindx in 1 .. l_rowcount loop
      -- for every column
      for colindx in 1 .. g_colcount loop
        -- split the row into separate values
        -- splitting the regexp way: http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
        l_newcolset(colindx)(rowindx) := trim(regexp_substr(json_value(dbms_tf.row_to_char(l_rowset, rowindx), '$.' || g_colname)
                                                      ,'[^;]+'
                                                      ,1
                                                      ,colindx));
      end loop; -- every column
    end loop; -- every row in the rowset
    -- add the newly populated columns to the rowset
    for indx in 1 .. g_colcount loop
      dbms_tf.put_col(columnid => indx, collection => l_newcolset(indx));
    end loop;
  end;
end separated_ptf;
/

After creating this Polymorphic Table Function we need an interface to use it in a SQL statement:

-- create a 'wrapper' function for the polymorphic table function
CREATE OR REPLACE FUNCTION separated_fnc(p_tbl IN TABLE,  
                cols columns DEFAULT NULL)
   RETURN TABLE PIPELINED
   ROW POLYMORPHIC USING separated_ptf;
/

Now, with this wrapper function in place we can start using it:

select *
  from separated_fnc(deptstg, columns(deptno, dname, loc))
/

Which is a lot easier than:

select trim(regexp_substr(line, '[^;]+', 1, 1)) deptno
      ,trim(regexp_substr(line, '[^;]+', 1, 2)) dname
      ,trim(regexp_substr(line, '[^;]+', 1, 3)) loc
  from deptstg stg
/

And similarly we can access the same code to retrieve data from the other table:

select *
  from separated_fnc(empstg, columns(empno, ename, job, mgr, hiredate, sal, comm, deptno))
/

That looks pretty much the same as the other one, but is definitely a lot simpler than

select trim(regexp_substr(line, '[^;]+', 1, 1)) empno
      ,trim(regexp_substr(line, '[^;]+', 1, 2)) ename
      ,trim(regexp_substr(line, '[^;]+', 1, 3)) job
      ,trim(regexp_substr(line, '[^;]+', 1, 4)) mgr
      ,trim(regexp_substr(line, '[^;]+', 1, 5)) hiredate
      ,trim(regexp_substr(line, '[^;]+', 1, 6)) sal
      ,trim(regexp_substr(line, '[^;]+', 1, 7)) comm
      ,trim(regexp_substr(line, '[^;]+', 1, 8)) deptno
  from empstg stg
/

The new implementation of my package is now like this:

create or replace package body process_stg is
  failure_in_forall exception;
  pragma exception_init(failure_in_forall, -24381);
  c_limit     constant number := 10;
  procedure dept is
    cursor c_inserts is
      with import as
       (select *
          from separated_fnc(deptstg, columns(deptno, dname, loc))
       )
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        left outer join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and d.deptno is null;
    cursor c_updates is
      with import as
       (select *
          from separated_fnc(deptstg, columns(deptno, dname, loc))
       )
      select i.deptno
            ,i.dname
            ,i.loc
        from import i
        join dept d on (d.deptno = i.deptno)
       where 1 = 1
         and (   coalesce(d.dname, '-NULL-') <> coalesce(i.dname, '-NULL-')
              or coalesce(d.loc,   '-NULL-') <> coalesce(i.loc,   '-NULL-')
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into dept(deptno, dname, loc)
            values (l_data(indx).deptno, l_data(indx).dname, l_data(indx).loc);
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update dept
               set dname = l_data(indx).dname
                 , loc = l_data(indx).loc
             where 1=1
               and deptno = l_data(indx).deptno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end dept;

  procedure emp is
    cursor c_inserts is
      with import as
       (select *
          from separated_fnc(empstg, columns(empno,ename,job,mgr,hiredate,sal,comm,deptno))
       )
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and e.empno is null;
    cursor c_updates is
      with import as
       (select *
          from separated_fnc(empstg, columns(empno,ename,job,mgr,hiredate,sal,comm,deptno))
       )
      select i.empno   
            ,i.ename   
            ,i.job     
            ,i.mgr     
            ,i.hiredate
            ,i.sal     
            ,i.comm    
            ,i.deptno  
        from import i
        left outer join emp e on (e.empno = i.empno)
       where 1 = 1
         and (   coalesce(e.ename,    '-NULL') <> coalesce(i.ename,    '-NULL')
              or coalesce(e.job,      '-NULL') <> coalesce(i.job,      '-NULL')
              or          e.mgr                <>          i.mgr               
              or coalesce(to_char(e.hiredate,'DDMMYYYY'), '-NULL') <> coalesce(i.hiredate, '-NULL')
              or          e.sal                <>          i.sal               
              or          e.comm               <>          i.comm               
              or          e.deptno             <>          i.deptno            
             );
    type data_t is table of c_inserts%rowtype index by pls_integer;
    l_data data_t;
  begin
    open c_inserts;
    loop
      fetch c_inserts bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            insert into emp(empno, ename, job, mgr, hiredate, sal, comm, deptno)
            values (l_data(indx).empno, l_data(indx).ename, l_data(indx).job, l_data(indx).mgr, to_date(l_data(indx).hiredate, 'DDMMYYYY'), l_data(indx).sal, trim(l_data(indx).comm), l_data(indx).deptno);
        exception
          when failure_in_forall then 
            dbms_output.put_line(q'[error]');
           dbms_output.put_line(sqlcode);
           dbms_output.put_line(sqlerrm);
            for indx in 1 .. sql%bulk_exceptions.count loop
              dbms_output.put_line('Error ' || indx || ' occurred on index ' || sql%bulk_exceptions(indx).error_index);
              dbms_output.put_line('Oracle error is ' ||
                                sqlerrm(-1 * sql%bulk_exceptions(indx).error_code));
            end loop;
            null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_inserts;
--
    open c_updates;
    loop
      fetch c_updates bulk collect into l_data limit c_limit;
      if l_data.count > 0 then
        begin
          forall indx in l_data.first .. l_data.last save exceptions
            update emp
              set ename       = l_data(indx).ename    
                , job         = l_data(indx).job      
                , mgr         = l_data(indx).mgr      
                , hiredate    = to_date(l_data(indx).hiredate, 'DDMMYYYY')
                , sal         = l_data(indx).sal      
                , comm        = l_data(indx).comm     
                , deptno      = l_data(indx).deptno   
            where 1=1
              and empno       = l_data(indx).empno;
        exception
          when failure_in_forall then null;
        end;
      end if;
      exit when l_data.count < c_limit;
    end loop;
    close c_updates;
  end emp;
end process_stg;
/

There is definitely room for improvement in the current implementation, like supporting duplicate separators, making the column to be split a parameter, and making the separator character a parameter as well, but that is a nice project for a later time.

I hope it all makes a bit of sense. If you have any improvements, don’t hesitate to comment.

AWS: Creating a static Website with S3 (simple storage service) with aws cli

Dietrich Schroff - Sat, 2018-11-17 13:34
There is a nice tutorial on how to create a static web page using Amazon S3:
https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html

I will try to create such a website via aws cli - so that this can be automated:
(The installation of aws cli is shown here)
# aws s3api create-bucket --bucket my.webtest --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

{
    "Location": "http://my.webtest.s3.amazonaws.com/"
}

Then create a website.json file:

$ cat website.json
{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "error.html"
    }
}

and run

$ aws s3api put-bucket-website --bucket my.webtest --website-configuration file://website.json

After that the web console should show the static website hosting configuration for the bucket.

Next step is to create the file policy.json:

$ cat policy.json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForGetBucketObjects",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my.webtest/*"]
    }]
}

and run

aws s3api put-bucket-policy --bucket my.webtest --policy file://policy.json

You can check via:
$ aws s3api get-bucket-policy --bucket my.webtest
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicReadForGetBucketObjects\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my.webtest/*\"}]}"
}
The policy is also visible in the web console. Then upload your HTML page:

$ aws s3 cp TestWebPage.html s3://my.webtest/index.html

upload: ./TestWebPage.html to s3://my.webtest/index.html  
 And here we go:


That was easy. OK, DNS resolution via Amazon Route 53 is still missing, but with these commands you are able to deploy a static website without clicking around...
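For convenience, the steps above can be put into a small script (a sketch reusing the bucket name, region and JSON files from this post):

#!/bin/bash
BUCKET=my.webtest
REGION=eu-west-1

aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION"
aws s3api put-bucket-website --bucket "$BUCKET" --website-configuration file://website.json
aws s3api put-bucket-policy  --bucket "$BUCKET" --policy file://policy.json
aws s3 cp TestWebPage.html "s3://$BUCKET/index.html"

# the site should then be reachable at the S3 website endpoint, typically:
# http://my.webtest.s3-website-eu-west-1.amazonaws.com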




Migration from 11g to 12c changes execution plan (Adaptive plan)

Tom Kyte - Fri, 2018-11-16 15:06
Hi, we are working on a PeopleSoft migration and a database migration too. We're migrating Oracle 11.2.0.3 to 12.2.0.1, and we have an issue with a PeopleSoft query. The query on the current database environment (11.2.0.3) has an execution plan with minimal cost ...
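(The full AskTOM discussion is not included in this excerpt. As a hedged note, plan changes after an upgrade to 12.2 are often investigated by checking the adaptive optimizer settings, for example:)

select name, value
from   v$parameter
where  name in ('optimizer_adaptive_plans', 'optimizer_adaptive_statistics');

-- testing the query with adaptive plans switched off at session level:
alter session set optimizer_adaptive_plans = false;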
Categories: DBA Blogs

pushing predicate into union-all view

Tom Kyte - Fri, 2018-11-16 15:06
Hi, LiveSQL link: https://livesql.oracle.com/apex/livesql/s/hjml6z0yg45qznob5sebg53vk I have a big table with an index on ID: create table tst1 as select level id, mod(level, 10) code from dual connect by level < 1000000; create...
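(The rest of the question and the answer are truncated in this feed. For context, here is a minimal sketch of the kind of construct being discussed, with a hypothetical second table tst2 and driving table drv: the PUSH_PRED hint asks the optimizer to push the join predicate into the union-all view so the index on ID can be used in each branch.)

create or replace view v_union as
select id, code from tst1
union all
select id, code from tst2;

select /*+ push_pred(v) */
       d.id, v.code
from   drv d
join   v_union v on v.id = d.id
where  d.id between 1 and 10;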
Categories: DBA Blogs

EBS Releases 12.1 and 12.2 certified with SLES 12

Steven Chan - Fri, 2018-11-16 12:08

I am pleased to announce that Oracle E-Business Suite Releases 12.1.3 and 12.2.6 (or higher) are now certified with SUSE Linux Enterprise Server (SLES) 12 on x86-64.

Installations of E-Business Suite on this operating system require specific patches to the latest startCD prior to installing, followed by the application of the 12.1.3 RUP or the 12.2.6 RUP (or higher) for EBS 12.1 and 12.2 respectively. Cloning of existing EBS 12.1.3 or 12.2 environments to SLES 12 is also certified using the standard Rapid Clone process.

There are also requirements to upgrade technology components such as the Oracle Database (to 11.2.0.4 or 12.1.0.2) and Fusion Middleware components as necessary. All requirements, known issues, patches needed, etc. are noted in the Installation and Upgrade Notes (IUN) below and must be reviewed and implemented.

For more information on requirements, please review the following documents:

 

Categories: APPS Blogs

Nvarchar to Varchar2 conversion (UTF8 to AL32UTF8)

Tom Kyte - Thu, 2018-11-15 20:46
We are planning to convert all NVarchar fields to Varchar2 fields as we're going to change our character set, since Oracle recommends the AL32UTF8 character set encoding. My question: is it 100% sure that all characters from NVarchar (UTF8) can be conv...
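(The question and answer are truncated here. As a small, hedged starting point, listing the NVARCHAR2 columns that would be affected by such a conversion is straightforward:)

select table_name, column_name, data_length
from   user_tab_columns
where  data_type = 'NVARCHAR2'
order  by table_name, column_name;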
Categories: DBA Blogs

sql plan management - difference in defining parameters at system and session level

Tom Kyte - Thu, 2018-11-15 20:46
Hi Tom, I am very new to performance tuning. There's something I am unclear about regarding SQL plan management. Which one is faster - 1. setting OPTIMIZER_CAPTURE_SQL_PLAN_BASELINE to TRUE at session level (inside function body) and OPTIMIZE...
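(The question and answer are truncated here. For reference, the parameter can be set at either scope; its full name is OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES:)

-- session level (e.g. around a specific workload)
alter session set optimizer_capture_sql_plan_baselines = true;

-- system level (affects all sessions)
alter system set optimizer_capture_sql_plan_baselines = true scope=both;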
Categories: DBA Blogs

Moving Oracle DB from one server to another

Tom Kyte - Thu, 2018-11-15 20:46
Hi, I am having an Oracle 11g database in an AIX linux server. I am planning to move this to a different server with same OS. I will be using same version of Oracle database in target DB as well. I have multiple schema in source database and in t...
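(The question and answer are truncated in this feed. As a hedged sketch, schema-level Data Pump export/import is one common way to do such a move; DP_DIR, HR and SCOTT are placeholder names:)

# export on the source server
expdp system schemas=HR,SCOTT directory=DP_DIR dumpfile=move_%U.dmp logfile=expdp_move.log parallel=2

# copy the dump files to the target server's directory, then import:
impdp system schemas=HR,SCOTT directory=DP_DIR dumpfile=move_%U.dmp logfile=impdp_move.log parallel=2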
Categories: DBA Blogs

SQL Slowdown ? A short list of potential reasons

Hemant K Chitale - Thu, 2018-11-15 20:14
Jonathan Lewis has published a short list of potential reasons why you might see a slowdown in SQL execution.  With newer releases 12.2, 18c and 19c, the list may have to be expanded.



Categories: DBA Blogs

AWS: Billing - how to delete a route 53

Dietrich Schroff - Thu, 2018-11-15 14:58
After playing around with AWS containers, I took a look at my billing page:

So let's delete this service.
But after removing the ECS cluster and the task definition, an entry in Route 53 still remains:



The resource hostedzone/Z3JCO1N1BVHCKX can only be managed through servicediscovery.amazonaws.com (arn:aws:servicediscovery:eu-west-1:803404058350:namespace/ns-so7m3qbqbatzmlgn)


But the solution is the aws cli (for installation take a look here):
schroff@zerberus:~/AWS$ aws servicediscovery list-services
{
    "Services": [
        {
            "Id": "srv-46ffbkbwzupvblsb",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:service/srv-46ffbkbwzupvblsb",
            "Name": "my-nginx-service"
        },
        {
            "Id": "srv-nicoewsbpufb3tlk",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:service/srv-nicoewsbpufb3tlk",
            "Name": "my-ecs-service-on-fargate"
        }
    ]
}

schroff@zerberus:~/AWS$ aws servicediscovery delete-service --id srv-46ffbkbwzupvblsb
schroff@zerberus:~/AWS$ aws servicediscovery delete-service --id srv-nicoewsbpufb3tlk
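With more than a couple of services, the same cleanup could be scripted (a sketch using the aws cli --query option):

for id in $(aws servicediscovery list-services --query 'Services[].Id' --output text); do
    aws servicediscovery delete-service --id "$id"
done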


and

schroff@zerberus:~/AWS$ aws servicediscovery list-namespaces

{
    "Namespaces": [
        {
            "Type": "DNS_PRIVATE",
            "Id": "ns-so7m3qbqbatzmlgn",
            "Arn": "arn:aws:servicediscovery:eu-west-1:803404058350:namespace/ns-so7m3qbqbatzmlgn",
            "Name": "local"
        }
    ]
}
Take the id and delete this namespace:
schroff@zerberus:~/AWS$ aws servicediscovery delete-namespace --id=ns-so7m3qbqbatzmlgn

{
    "OperationId": "4kdit33kf7kfuawscpfgifcrdktynen5-jog7l6h7"
}

And then the hosted zone was gone:

Oracle JET UI on Top of Oracle ADF With Visual Builder

Shay Shmeltzer - Thu, 2018-11-15 13:22

At Oracle OpenWorld this year I did a session about the future of Oracle ADF, and one of the demos I did there was showing the powerful combination of Oracle ADF backend with a new Oracle JET UI layer and how Oracle Visual Builder makes this integration very simple.

While we have many happy Oracle ADF customers, we do hear from some of them about new UI requirements that might justify thinking about adopting a new UI architecture for some modules. These types of requirements align with an industry trend towards a more client-centric UI architecture that leverages the power of JavaScript on the client. While ADF (which is more of a server-centric architecture) does let you leverage JavaScript on the client and provides hook points for that in ADF Faces, some customers prefer a more purist approach for new user interfaces that they are planning to build. Oracle's solution for such a UI architecture is based on Oracle JET - an open source set of libraries we developed and share with the community at http://oraclejet.org.

Oracle Visual Builder provides developers with a simpler approach to building Oracle JET based UIs - for both web and on-device mobile applications. Focusing on a visual UI design approach, it drastically reduces the amount of manual coding you need to do to create JET based UIs.

UIs that you build in Visual Builder connect at the back to REST services, and this is where you can leverage Oracle ADF. In version 12 of JDeveloper we introduced the ability to publish ADF Business Components as REST services through a simple wizard. Note that out-of-the-box you get a very powerful set of services that support things like query by example, pagination, sorting and more. If you haven't explored this functionality already, check out the videos showing how to do it here, and this video covering cloud hosting these services.

Once you have this ADF based REST services layer - you'll be glad to hear that in Visual Builder we have specific support to simplify consuming these REST services. Specifically - we understand the meta-data descriptions that these REST services provide and then are able to create services and endpoints mapping for you.

ADF Describe Dialog in Service Connection

You leverage our "Service from specification" dialog to add your ADF services to your Visual Builder app - and from that point on, it's quite simple to build new JET UIs accessing the data.

In the video below I show how simple it is to build a JET-based on-device mobile app that leverages a set of REST services created from Oracle JDeveloper 12. Check it out:

Categories: Development

num_index_keys

Jonathan Lewis - Thu, 2018-11-15 07:13

The title is the name of an Oracle hint that came into existence in Oracle 10.2.0.3 and made an appearance recently in a question on the rarely used “My Oracle Support” Community forum (you’ll need a MOS account to be able to read the original). I wouldn’t have found it but the author also emailed me the link asking if I could take a look at it.  (If you want to ask me for help – without paying me, that is – then posting a public question in the Oracle (ODC) General Database or SQL forums and emailing me a private link is the strategy most likely to get an answer, by the way.)

The question was about a very simple query using a straightforward index – with a quirky change of plan after upgrading from 10.2.0.3 to 12.2.0.1. Setting the optimizer_features_enable to ‘10.2.0.3’ in the 12.2.0.1 system re-introduced the 10g execution plan. Here’s the query:


SELECT t1.*
   FROM   DW1.t1
  WHERE   t1.C1 = '0001' 
    AND   t1.C2 IN ('P', 'F', 'C')
    AND   t1.C3 IN (
                    '18110034450001',
                    '18110034450101',
                    '18110034450201',
                    '18110034450301',
                    '18110034450401',
                    '18110034450501'
          );
 

Information supplied: t1 holds about 500 million rows at roughly 20 rows per block, the primary key index is (c1, c2, c3, c4), there are just a few values for each of c1, c2 and c4, while c3 is “nearly unique” (which, for clarity, was expanded to “the number of distinct values of c3 is virtually the same as the number of rows in the table”).

At the moment we don’t have any information about histograms and we don’t known whether or not “nearly unique” might still allow a few values of c3 to have a large number of duplicates, so that’s something we might want to follow up on later.

Here are the execution plans – the fast one (from 10g) first, then the slow (12c) plan – and you should look carefully at the predicate section of the two plans:


10g (pulled from memory with rowsource execution statistics enabled)
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   1 |  INLIST ITERATOR             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1               |     18 |      5 |      6 |00:00:00.01 |      58 |      5 |
|*  3 |    INDEX RANGE SCAN          | PK_T1            |     18 |      5 |      6 |00:00:00.01 |      52 |      4 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P')) AND (("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR "C3"='18110034450401' OR
              "C3"='18110034450501')))

 

12c (from explain plan)
---------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                  |     1 |   359 |     7   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |                  |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1               |     1 |   359 |     7   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | PK_T1            |     1 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND ("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P'))
       filter("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR
              "C3"='18110034450401' OR "C3"='18110034450501')
  

When comparing plans it’s better, of course, to present the same sources from the two systems, it’s not entirely helpful to have the generated plan from explain plan in one version and a run-time plan with stats in the other – given the choice I’d like to see the run-time from both. Despite this, I felt fairly confident that the prediction would match the run-time for 12c and that I could at least guess the “starts” figure for 12c.

The important thing to notice is the way that the access predicate in 10g has split into an access predicate followed by a filter predicate in 12c. So 12c is going to iterate three times (once for each of the values  ‘C’, ‘F’, ‘P’) and then walk a potentially huge linked list of index leaf blocks looking for 6 values of c3, while 10g is going to probe the index 18 times (3 combinations of c2 x six combinations of c3) to find “nearly unique” rows which means probably one leaf block per probe.

The 12c plan was taking minutes to run, the 10g plan was taking less than a second. The difference in execution time was probably the effect of the 12c plan ranging through (literally) thousands of index leaf blocks.

There are many bugs and anomalies relating to in-list iteration and index range scans and cardinality calculations – here’s a quick sample of v$system_fix_control in 12.2.0.1:


select optimizer_feature_enable ofe, sql_feature, bugno, description
from v$system_fix_control
where
	optimizer_feature_enable between '10.2.0.4' and '12.2.0.1'
and	(   sql_feature like '%CBO%'
	 or sql_feature like '%CARDINALITY%'
	)
and	(    lower(description) like '%list%'
	 or  lower(description) like '%iterat%'
	 or  lower(description) like '%multi%col%'
	)
order by optimizer_feature_enable, sql_feature, bugno
;

OFE        SQL_FEATURE                      BUGNO DESCRIPTION
---------- --------------------------- ---------- ----------------------------------------------------------------
10.2.0.4   QKSFM_CBO_5259048              5259048 undo unused inlist
           QKSFM_CBO_5634346              5634346 Relax equality operator restrictions for multicolumn inlists

10.2.0.5   QKSFM_CBO_7148689              7148689 Allow fix of bug 2218788 for in-list predicates

11.1.0.6   QKSFM_CBO_5139520              5139520 kkoDMcos: For PWJ on list dimension, use part/subpart bits

11.2.0.1   QKSFM_CBO_6818410              6818410 eliminate redundant inlist predicates

11.2.0.2   QKSFM_CBO_9069046              9069046 amend histogram column tracking for multicolumn stats

11.2.0.3   QKSFM_CARDINALITY_11876260    11876260 use index filter inlists with extended statistics
           QKSFM_CBO_10134677            10134677 No selectivity for transitive inlist predicate from equijoin
           QKSFM_CBO_11834739            11834739 adjust NDV for list partition key column after pruning
           QKSFM_CBO_11853331            11853331 amend index cost compare with inlists as filters
           QKSFM_CBO_12591120            12591120 check inlist out-of-range values with extended statistics

11.2.0.4   QKSFM_CARDINALITY_12828479    12828479 use dynamic sampling cardinality for multi-column join key check
           QKSFM_CARDINALITY_12864791    12864791 adjust for NULLs once for multiple inequalities on nullable colu
           QKSFM_CARDINALITY_13362020    13362020 fix selectivity for skip scan filter with multi column stats
           QKSFM_CARDINALITY_14723910    14723910 limit multi column group selectivity due to NDV of inlist column
           QKSFM_CARDINALITY_6873091      6873091 trim histograms based on in-list predicates
           QKSFM_CBO_13850256            13850256 correct estimates for transitive inlist predicate with equijoin

12.2.0.1   QKSFM_CARDINALITY_19847091    19847091 selectivity caching for inlists
           QKSFM_CARDINALITY_22533539    22533539 multi-column join sanity checks for table functions
           QKSFM_CARDINALITY_23019286    23019286 Fix cdn estimation with multi column stats on fixed data types
           QKSFM_CARDINALITY_23102649    23102649 correction to inlist element counting with constant expressions
           QKSFM_CBO_17973658            17973658 allow partition pruning due to multi-inlist iterator
           QKSFM_CBO_21057343            21057343 order predicate list
           QKSFM_CBO_22272439            22272439 correction to inlist element counting with bind variables

There are also a number of system parameters relating to inlists that are new (or have changed values) in 12.2.0.1 when compared with 10.2.0.3 – but I’m not going to go into those right now.

I was sufficiently curious about this anomaly that I emailed the OP to say I would be happy to take a look at the 10053 trace files for the query – the files probably weren’t going to be very large given that it was only a single table query – but in the end it turned out that I solved the problem before he’d had time to email them. (Warning – don’t email me a 10053 file on spec; if I want one I’ll ask for it.)

Based on the description I created an initial model of the problem – it took about 10 minutes to code:


rem     Tested on 12.2.0.1, 18.3.0.1

drop table t1 purge;

create table t1 (
	c1 varchar2(4) not null,
	c2 varchar2(1) not null,
	c3 varchar2(15) not null,
	c4 varchar2(4)  not null,
	v1 varchar2(250)
)
;

insert into t1
with g as (
	select rownum id 
	from dual
	connect by level <= 1e4 -- > hint to avoid wordpress format issue
)
select
	'0001',
	chr(65 + mod(rownum,11)),
	'18110034'||lpad(1+100*rownum,7,'0'),
	lpad(mod(rownum,9),4,'0'),
	rpad('x',250,'x')
from
	g,g
where
        rownum <= 1e5 -- > hint to avoid wordpress format issue
;


create unique index t1_i1 on t1(c1, c2, c3, c4);

begin
        dbms_stats.gather_table_stats(
                null,
                't1',
                method_opt => 'for all columns size 1'
        );
end;
/

alter session set statistics_level = all;
set serveroutput off

prompt	==========================
prompt	Default optimizer features
prompt	==========================

select
        /*+ optimizer_features_enable('12.2.0.1') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

select 
        /*+ optimizer_features_enable('10.2.0.3') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

The two queries produced the same plan – regardless of the setting for optimizer_features_enable – it was the plan originally used by the OP’s 10g setting:


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |    20 (100)|      0 |00:00:00.01 |      35 |
|   1 |  INLIST ITERATOR             |       |      1 |        |            |      0 |00:00:00.01 |      35 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     18 |      2 |    20   (0)|      0 |00:00:00.01 |      35 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     18 |      2 |    19   (0)|      0 |00:00:00.01 |      35 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')) AND
              (("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501')))

There was one important difference between the 10g and the 12c plans – in 10g the cost of the table access (hence the cost of the total query) was 20; in 12c it jumped to 28 – maybe there’s a change in the arithmetic for costing the iterator, and maybe that’s sufficient to cause a problem.

Before going further it’s worth checking what the costs would look like (and, indeed, if the plan is possible in both versions) if we force Oracle into the “bad” plan. That’s where we finally get to the hint in the title of this piece. If I add the hint /*+ num_index_keys(t1 t1_i1 2) */ what’s going to happen ? (Technically I’ve included a hint to use the index, and specified the query block name to make sure Oracle doesn’t decide to switch to a tablescan):


select
        /*+
            optimizer_features_enable('12.2.0.1')
            index_rs_asc(@sel$1 t1@sel$1 (t1.c1 t1.c2 t1.c3 t1.c4))
            num_index_keys(@sel$1 t1@sel$1 t1_i1 2)
        */
        t1.*
FROM        t1
WHERE
        t1.c1 = '0001'
AND        t1.c2 in ('H', 'F', 'C')
AND        t1.c3 in (
                '18110034450001',
                '18110034450101',
                '18110034450201',
                '18110034450301',
                '18110034450401',
                '18110034450501'
        )
;

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |   150 (100)|      0 |00:00:00.01 |     154 |      1 |
|   1 |  INLIST ITERATOR                     |       |      1 |        |            |      0 |00:00:00.01 |     154 |      1 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      3 |     18 |   150   (2)|      0 |00:00:00.01 |     154 |      1 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |      3 |     18 |   142   (3)|      0 |00:00:00.01 |     154 |      1 |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')))
       filter(("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501'))

This was the plan from 12.2.0.1 – and again the plan for 10.2.0.3 was identical except for costs which became 140 for the index range scan and 141 for the table access. At first sight it looks like 10g may be using the total selectivity of the entire query as the scaling factor for the index clustering_factor to find the table cost while 12c uses the cost of accessing the table for one iteration (rounding up) before multiplying by the number of iterations.
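
If you want to check the figures that feed that hypothesis on your own copy of the model, the relevant index statistics are visible in user_indexes (the values will, of course, depend on your data and block size):

select
        blevel,
        leaf_blocks,
        distinct_keys,
        clustering_factor,
        num_rows
from
        user_indexes
where
        index_name = 'T1_I1'
;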

Having observed this detail I thought I'd do a quick test of what happened by default if I requested 145 distinct values of c3. Both versions defaulted to the access/filter path rather than the pure access path – but again there was a difference in costs. The 10g index cost was 140 with a table access cost of 158, while 12c had an index cost of 179 and a table cost of 372. So both versions switch plans at some point – do they switch at the same point? Reader, I could not resist temptation, so I ran a test loop. With my data set the 12c version switched paths at 61 values in the in-list and 10g switched at 53 values.
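
For anyone who wants to repeat the experiment, here's a minimal sketch of the type of loop I mean (not the code I actually ran): it explains successively larger in-lists, following the shape of the query and the pattern of c3 values from the model above, and reports the first size at which the index range scan acquires a filter() predicate on c3. The upper limit of 150 is arbitrary.

set serveroutput on

declare
        l_inlist   varchar2(32000);
        l_filtered number;
begin
        for n in 2 .. 150 loop

                execute immediate 'delete from plan_table';

                -- build an in-list of n values matching the pattern used to generate c3
                l_inlist := null;
                for i in 1 .. n loop
                        l_inlist := l_inlist ||
                                case when i > 1 then ',' end ||
                                '''18110034' || lpad(1 + 100 * i, 7, '0') || '''';
                end loop;

                execute immediate
                        'explain plan for select * from t1 ' ||
                        'where c1 = ''0001'' and c2 in (''H'',''F'',''C'') ' ||
                        'and c3 in (' || l_inlist || ')';

                -- has the index operation picked up a filter() predicate on c3 ?
                select count(*)
                into   l_filtered
                from   plan_table
                where  operation = 'INDEX'
                and    filter_predicates like '%C3%';

                if l_filtered > 0 then
                        dbms_output.put_line('Switched to access/filter at ' || n || ' values');
                        exit;
                end if;

        end loop;
end;
/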

Conclusion: there’s been a change in the selectivity calculations for the use of in-list iterators, which leads to a change in costs, which can lead to a change in plans; the OP was just unlucky with his data set and stats. Possibly there’s something about his data or stats that makes the switch appear with a much smaller in-list than mine.

Footnote:

When I responded to the thread on MOSC with the suggestion that the problem was in part due to statistics and might be affected by out-of-date stats (or a histogram on the (low-frequency) c2 column), the OP noted that stats hadn't been gathered since some time in August – and found that the 12c path changed to the efficient (10g) one after re-gathering stats on the table.
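
By way of illustration, a re-gather that also requests a histogram on the low-frequency c2 column might look like the following. This is a sketch only, not necessarily what the OP's stats job does, and whether a histogram on c2 is actually desirable is a separate question:

begin
        dbms_stats.gather_table_stats(
                ownname    => null,
                tabname    => 't1',
                method_opt => 'for all columns size 1 for columns c2 size 254'
        );
end;
/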

 

Oracle Accelerates Data Insights for Retailers with Oracle Digital Assistant

Oracle Press Releases - Thu, 2018-11-15 07:00
Press Release
Oracle Accelerates Data Insights for Retailers with Oracle Digital Assistant
Integration of Core Retail Technology with Conversational AI Powers Targeted and Contextual Offers that Engage Customers and Drive Results

Redwood Shores, Calif.—Nov 15, 2018

Enabling retailers to build personalized customer experiences as well as voice-enabled assistants to help employees work smarter and more productively, Oracle Retail solutions are now integrated with Oracle Digital Assistant. Together, these offerings empower retailers to find answers to critical business questions such as, "What's the current margin on our new BOGO offer for trendsetters?" faster than ever before.

Announced at Oracle OpenWorld 2018, Oracle Digital Assistant leverages artificial intelligence (AI) to understand context, derive intent, and identify and learn user behaviors and patterns to automate routine tasks proactively on behalf of the user. By integrating the technology with Oracle Retail Offer Optimization Cloud Service, analysts can easily streamline location-specific sales forecasting, promotional deployment and performance, approval automation and target prioritization.

To see these technologies in action, visit: https://youtu.be/UcDTQa7Ff-w.

"Most retailers think about conversational AI in the context of the store or e-commerce, however, retailers can now apply conversational AI to their core operations', via voice or text, to accelerate productivity and optimize processes," said Mike Webster, Senior Vice President and General Manager, Oracle Retail. "Smart digital interactions are an integral part of our everyday life as we query Alexa, Google Home and Siri for recommendations. This latest integration between Oracle Digital Assistant and Oracle Retail Offer Optimization Cloud Service brings the power and simplicity of voice to retail operations, speeding time to insight and action."

Built on Oracle Cloud Infrastructure, Oracle Digital Assistant goes well beyond standard chatbots available today that provide simple, single skilled, linear responses. By applying AI for natural language processing (NLP), natural language understanding (NLU) and machine learning (ML), Oracle is uniquely positioned to leverage its breadth and depth in enterprise applications to offer a digital assistant that can truly span the enterprise.

"Going forward digital assistants will transform how merchants, planners, and marketers collaborate, engage their company's information assets, and how they work," said Greg Girard, program director of intelligent product merchandising and marketing, IDC Retail Insights. "As digital assistants become more conversationally and analytically skillful and more aware of their users' intent and context we'll see more incisive decisions, made quicker, to deliver better business outcomes. Retailers should bring digital assistants into their digital transformation planning assumptions now."

In April, Oracle launched the next generation of promotion, markdown and offer optimization capabilities as a cloud service: Oracle Retail Offer Optimization Cloud Service. With these new updates, retailers can analyze promotion and pricing decisions across the entire product lifecycle while providing consumers with targeted and contextual offers.

Contact Info
Matt Torres
Oracle
+1.415.595.1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


[BLOG] 15 Must Know Things on Oracle EBS (R12) on Cloud for Beginners

Online Apps DBA - Thu, 2018-11-15 03:32

Are you a Beginner who wants to move ahead in the journey towards learning EBS on Cloud? If yes, then visit: https://k21academy.com/ebscloud16 to learn about: ✔Various Cloud Service Models ✔The 2 main tiers of Oracle EBS(R12) ✔The ways to deploy EBS on Cloud & much more… Are you a Beginner who wants to move ahead […]

The post [BLOG] 15 Must Know Things on Oracle EBS (R12) on Cloud for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
