Feed aggregator

Reading Execution Plans for Parallel DML

Pythian Group - Thu, 2017-02-23 20:36

Parallel execution in Oracle seems to be a topic of myths, mysteries and misunderstandings. In this post I hope to clear up at least one of them by diving into a discussion about parallel DML. Actually, I want to show you two things that can go wrong with it.

My demo setup is pretty simple: I just create two copies of dba_objects:

SQL> create table a as select * from dba_objects;

Table created.

SQL> create table b as select * from dba_objects;

Table created.

Now let’s delete from one of the tables with a subselect on the other one, and let’s make it parallel. When, if and how it makes sense to actually use parallel DML for a speedup is not the point of this post. In fact, this is a poor example, as the overhead of the extra communication and of spinning up the processes made the actual delete run slower.

SQL> delete /*+ PARALLEL(a 42) */ from a where object_id in (select object_id from b);

91277 rows deleted.

SQL> rollback;

And at first glance the execution plan for it seems to confirm that the operation was performed in parallel:

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	1q2m84vq594pj, child number 0
-------------------------------------
delete /*+ PARALLEL(a 42) */ from a where object_id in (select
object_id from b)

Plan hash value: 1475376193

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation	 	   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT	   |	      |       |       |   855 (100)|	      |        |      | 	   |
|   1 |  DELETE 		   | A	      |       |       | 	   |	      |        |      | 	   |
|   2 |   PX COORDINATOR	   |	      |       |       | 	   |	      |        |      | 	   |
|   3 |    PX SEND QC (RANDOM)	   | :TQ10002 | 91277 |   891K|   855	(1)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  4 |     HASH JOIN SEMI BUFFERED|	      | 91277 |   891K|   855	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|   5 |      PX RECEIVE 	   |	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|   6 |       PX SEND HASH	   | :TQ10000 | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | P->P | HASH	  |
|   7 |        PX BLOCK ITERATOR   |	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | PCWC | 	   |
|*  8 | 	TABLE ACCESS FULL  | A	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | PCWP | 	   |
|   9 |      PX RECEIVE 	   |	      | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|  10 |       PX SEND HASH	   | :TQ10001 | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,01 | S->P | HASH	  |
|  11 |        PX SELECTOR	   |	      |       |       | 	   |	      |  Q1,01 | SCWC | 	   |
|  12 | 	TABLE ACCESS FULL  | B	      | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,01 | SCWP | 	   |
--------------------------------------------------------------------------------------------------------------------

But a closer look reveals that the actual delete we wanted to run in parallel is performed only after the parallel part is done: the DELETE sits above and to the left of the PX COORDINATOR in the execution plan. Also, because we only specified parallelism for table A, the scan of table B is performed serially. Its results are then distributed to the parallel workers and hash joined against the parallel scan of A. The join results are buffered and returned to the coordinator, which then performs the actual deletes serially. Sounds silly? It is. And the second issue is that even though the hint gets ignored for the DELETE part, it is still applied to the query and join, forcing a half-parallel operation.
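To at least make the query part fully parallel, the hint would have to cover both tables. A hypothetical variant with object-level hints (the DELETE itself would still be performed serially by the coordinator, for the reason explained in the next section):

```sql
-- Object-level hints for both tables; with parallel DML still disabled,
-- the coordinator performs the actual delete serially.
delete /*+ PARALLEL(a 42) PARALLEL(b 42) */ from a
 where object_id in (select object_id from b);
```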

Enable parallel DML mode

The issue to be aware of with parallel DML in Oracle is that it is disabled by default and you have to explicitly enable it in the session before running parallel DML operations.

And at least DBMS_XPLAN in 12c will also warn you about not using parallel DML in its output:

Note
-----
   - Degree of Parallelism is 42 because of hint
   - PDML is disabled in current session

So let’s try the same thing again, but enable parallel DML this time:

SQL> alter session enable parallel dml;

Session altered.

SQL> delete /*+ PARALLEL(a 42) */ from a where object_id in (select object_id from b);

91277 rows deleted.

SQL> rollback;

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	1q2m84vq594pj, child number 1
-------------------------------------
delete /*+ PARALLEL(a 42) */ from a where object_id in (select
object_id from b)

Plan hash value: 2691386263

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |	 TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |	    |	    |	    |	438 (100)|	    |	    |	    |		 |
|   1 |  PX COORDINATOR 		 |	    |	    |	    |		 |	    |	    |	    |		 |
|   2 |   PX SEND QC (RANDOM)		 | :TQ10003 | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    DELETE			 | A	    |	    |	    |		 |	    |  Q1,03 | PCWP |		 |
|   4 |     PX RECEIVE			 |	    | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,03 | PCWP |		 |
|   5 |      PX SEND HASH (BLOCK ADDRESS)| :TQ10002 | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,02 | P->P | HASH (BLOCK|
|*  6 |       HASH JOIN SEMI BUFFERED	 |	    | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,02 | PCWP |		 |
|   7 |        PX RECEIVE		 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   8 | 	PX SEND HASH		 | :TQ10000 | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | P->P | HASH	 |
|   9 | 	 PX BLOCK ITERATOR	 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWC |		 |
|* 10 | 	  TABLE ACCESS FULL	 | A	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWP |		 |
|  11 |        PX RECEIVE		 |	    | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,02 | PCWP |		 |
|  12 | 	PX SEND HASH		 | :TQ10001 | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,01 | S->P | HASH	 |
|  13 | 	 PX SELECTOR		 |	    |	    |	    |		 |	    |  Q1,01 | SCWC |		 |
|  14 | 	  TABLE ACCESS FULL	 | B	    | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,01 | SCWP |		 |
--------------------------------------------------------------------------------------------------------------------------

Now we see the DELETE operation below the PX COORDINATOR, which means it is executed in parallel by the parallel workers. B is still read serially because we only specified table A in the hint. For the sake of completeness, here is the 100% parallel plan produced by a statement-level hint:
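For reference, the plan below came from a statement-level PARALLEL hint; assuming parallel DML is still enabled in the session, the statement would be:

```sql
alter session enable parallel dml;

delete /*+ PARALLEL(42) */ from a
 where object_id in (select object_id from b);
```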

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	02w21a0rgz02m, child number 1
-------------------------------------
delete /*+ PARALLEL(42) */ from a where object_id in (select object_id
from b)

Plan hash value: 149866034

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |	 TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |	    |	    |	    |	 23 (100)|	    |	     |	    |		 |
|   1 |  PX COORDINATOR 		 |	    |	    |	    |		 |	    |	     |	    |		 |
|   2 |   PX SEND QC (RANDOM)		 | :TQ10003 | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    DELETE			 | A	    |	    |	    |		 |	    |  Q1,03 | PCWP |		 |
|   4 |     PX RECEIVE			 |	    | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,03 | PCWP |		 |
|   5 |      PX SEND HASH (BLOCK ADDRESS)| :TQ10002 | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,02 | P->P | HASH (BLOCK|
|*  6 |       HASH JOIN SEMI BUFFERED	 |	    | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   7 |        PX RECEIVE		 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   8 | 	PX SEND HASH		 | :TQ10000 | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | P->P | HASH	 |
|   9 | 	 PX BLOCK ITERATOR	 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWC |		 |
|* 10 | 	  TABLE ACCESS FULL	 | A	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWP |		 |
|  11 |        PX RECEIVE		 |	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|  12 | 	PX SEND HASH		 | :TQ10001 | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | P->P | HASH	 |
|  13 | 	 PX BLOCK ITERATOR	 |	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | PCWC |		 |
|* 14 | 	  TABLE ACCESS FULL	 | B	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | PCWP |		 |
--------------------------------------------------------------------------------------------------------------------------
Conclusion

Keep in mind that parallel DML has to be explicitly enabled at the session level. And when reviewing execution plans for parallel DML, make sure the update, insert or delete operation sits below the PX COORDINATOR, or else you will serialize again before performing the actual operation.
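A quick way to double-check the session before running the statement is to query v$session; a small sketch assuming the standard PQ/PDML status columns:

```sql
-- PDML_STATUS should read ENABLED after 'alter session enable parallel dml'
select pq_status, pdml_status, pddl_status
  from v$session
 where sid = sys_context('userenv', 'sid');
```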

Categories: DBA Blogs

IRs with Subscriptions that might not work

Jeff Kemp - Thu, 2017-02-23 20:31

If you have an Interactive Report with the Subscription feature enabled, users can “subscribe” to the report, getting a daily email with the results of the report. Unfortunately, however, this feature doesn’t work as expected if it relies on session state – e.g. if the query uses bind variables based on page items to filter the records. In this case, the subscription will run the query with a default session state – Apex doesn’t remember what the page item values were when the user subscribed to the report.

This is a query I used to quickly pick out all the Interactive Reports that have the Subscription feature enabled but that might rely on session state to work – i.e. the report relies on items submitted from the page, or its query refers to a bind variable or a system context:


select workspace, application_id, application_name,
       page_id, region_name, page_items_to_submit
from apex_application_page_ir
where show_notify = 'Yes'
and (page_items_to_submit is not null
     or regexp_like(sql_query, ':[A-Z]', 'i')
     or regexp_like(sql_query, 'SYS_CONTEXT', 'i')
    );

I reviewed these reports and, where appropriate, turned off the Subscription feature. Note that this query is not perfect and might give some false positives and negatives.


Filed under: Oracle

LISTAGG not giving distinct values

Tom Kyte - Thu, 2017-02-23 14:26
Hi Guys, If I use LISTAGG for a single table it gives distinct values in that column. But if I use it by joining multiple tables it gives all the values even though they are duplicating. Scenario: CREATE TABLE TEMP1(ID1 NUMBER,TYPE1 CHAR...
Categories: DBA Blogs

REDO LOG growing for SMON Background process

Tom Kyte - Thu, 2017-02-23 14:26
Hello Tom, We experienced a continuous growing on redo log for background processes, specifically: SMON: 309 Gb Now, 300 Gb Last week, 291 two weeks ago DBW0: 141 Gb Now, 135 Gb Last week, 128 two weeks ago DBW1: 118 Gb Now, 110 Gb Last we...
Categories: DBA Blogs

Oracle CAST function conversion for BOOLEAN TO VARCHAR

Tom Kyte - Thu, 2017-02-23 14:26
Hi Tom, As per Oracle docs on CAST function, http://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj33562.html <b>A BOOLEAN value can be cast explicitly to any of the string types. The result is 'true', 'false', or null. Conversely, string type...
Categories: DBA Blogs

REGARDING copy an table from existing table

Tom Kyte - Thu, 2017-02-23 14:26
How to create a table from an existing table while omitting particular columns, keeping the data from the table
Categories: DBA Blogs

send e-mail notification through external

Tom Kyte - Thu, 2017-02-23 14:26
Hi Tom, below function is used to send mail. but recently I upgraded DB to 12c from 11gR1/R2, I am not getting body part of mail. I am getting notification mail with subject line but missing the body part. please help. function : ---------- ...
Categories: DBA Blogs

SQL client for soap web-service (parse MTOM response)

Tom Kyte - Thu, 2017-02-23 14:26
Good day, dear colleagues! Sorry for bad English. I'm from Russia. There is a client in oracle pl/sql that using UTL_HTTP makes a request to the Web service and parses the response. Everything works perfectly. But in the particular case of the...
Categories: DBA Blogs

Oracle not using literal value for partition pruning during query execution

Tom Kyte - Thu, 2017-02-23 14:26
Hi Tom, We have a table with about 700 partitions. Our test query is <code>select count(*) from fs_1d f where f.fs_date = '03-JAN-2017' and f.feed_code = 'E4F1'</code> When we explain plan for this query, the plan was correct. It goes dire...
Categories: DBA Blogs

impdp full database from solaris to windows database

Tom Kyte - Thu, 2017-02-23 14:26
Hi, I am new to Oracle DB. I am trying to import an exported database from a Solaris oracle 11.2.0.1 to a windows 2008 oracle 11.2.0.1. I installed Oracle 11.2.0 in a windows 2008 OS with a sample database name orcl. Below are the steps I follow t...
Categories: DBA Blogs

DB link - Access specified table

Tom Kyte - Thu, 2017-02-23 14:26
Hi, For instance, i have a dblink db123 in database1 and t2 table in database2 , i want to access only one table t2@database2 from database1 by using dblink db123 , except t2@database2 that dblink db123 should not accessible/valid for ...
Categories: DBA Blogs

how to return varhar type from clob

Tom Kyte - Thu, 2017-02-23 14:26
Hi, using dbms_lob.substr(), I can extract upto 4k chars only, But I want to extract all characters (6k out of 6k chars). Importantly I should return char or varchar type. Like below query dbms_lob.substr(x, 6000, 1). Thanks
Categories: DBA Blogs

Fourteenth Anniversary For PeteFinnigan.com Limited And New Website

Pete Finnigan - Thu, 2017-02-23 14:26
Wow, has it really been fourteen years since I started PeteFinnigan.com Limited? - Time has gone so fast and business is getting better and better. We have great customers, great Oracle Security trainings and consulting projects meeting new people and....[Read More]

Posted by Pete On 23/02/17 At 06:33 PM

Categories: Security Blogs

DIY Parallelization with Oracle DBMS_DATAPUMP

Amis Blog - Thu, 2017-02-23 12:17

Oracle dbms_datapump provides a parallel option for exports and imports, but some objects cannot be processed in this mode. In a migration project from AIX 11gR2 to an ODA X5-2 (OL 5.9) running 12c that included an initial load for GoldenGate, I had to deal with one of those objects: a 600G table with LOB fields, stored in the database as BasicFiles (i.e. traditional LOB storage).

By applying some DIY parallelization I was able to bring the export time down from 14 hours to 35 minutes.
Instrumental in this solution are the handy “detach” feature of the dbms_datapump package and the use of dbms_rowid to split the table data into same-sized chunks. The first allowed me to define and start datapump jobs without having to wait for each one to finish; the second makes all jobs end within a short time of each other.
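The rowid-based split boils down to a data filter on each job. A minimal sketch, assuming the SUBQUERY filter of dbms_datapump.data_filter and a degree of 32; the predicate here is illustrative rather than the exact one from the package below, and "counter" stands for the per-job chunk id (0..31):

```sql
-- Each job exports only the rows whose block number maps to its own
-- chunk id, so the 32 chunks come out roughly equal in size.
dbms_datapump.data_filter(
  handle => h1,
  name   => 'SUBQUERY',
  value  => 'where mod(dbms_rowid.rowid_block_number(rowid), 32) = '
            || to_char(counter));
```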

The following PL/SQL exports tables in 32 chunks with 32 concurrent datapump jobs. Feel free to adjust this “dop”, as well as the schema and table names. Just one parameter is provided: it makes the export procedure as a whole wait for the end of all exports, so some other action (e.g. a file transfer) may start automatically.

CREATE OR REPLACE PACKAGE Datapump_Parallel_Exp_Pck                                                                                                                                                                 
  IS                                                                                                                                                                                                  
    g_parallel   CONSTANT NUMBER       := 32;                                                                                                                               
    g_dmp_dir    CONSTANT VARCHAR2(25) := 'DATA_PUMP_DIR';                                                                                                                          
                                                                                                                                                                                                        
------------------------------------------------------------------------------------------------- 
PROCEDURE Exec_Export
   ( P_wait IN PLS_INTEGER := 0 );                                                                                                                                                                   
                                                                                                                                                                                                        
--------------------------------------------------------------------------------------------------                                                                                                      
END Datapump_Parallel_Exp_Pck;
/

SHOW ERRORS;


CREATE OR REPLACE PACKAGE BODY Datapump_Parallel_Exp_Pck                                                                                                                                                            
  IS                                                                                                                                                                                                    
                                                                                                                                                                                                        
-------------------------------------------------------------------------------------------------                                                                                                       
PROCEDURE Sleep                                                                                                                                                                                         
  (P_millisesconds IN NUMBER)                                                                                                                                                                           
 AS LANGUAGE JAVA
    NAME 'java.lang.Thread.sleep(long)';
                                                                                                                                                                                                        
-------------------------------------------------------------------------------------------------                                                                                                       
FUNCTION Get_Current_Scn                                                                                                                                                                                
  RETURN NUMBER                                                                                                                                                                                         
    IS                                                                                                                                                                                                  
    v_ret NUMBER := 0;                                                                                                                                                                                  
BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                        
  SELECT current_scn                                                                                                                                                                                    
    INTO v_ret                                                                                                                                                                                          
  FROM v$database;                                                                                                                                                                                      
                                                                                                                                                                                                        
  RETURN v_ret;                                                                                                                                                                                         
                                                                                                                                                                                                        
  EXCEPTION                                                                                                                                                                                             
    WHEN OTHERS THEN                                                                                                                                                                                    
   RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                              
END Get_Current_Scn;                                                                                                                                                                                    
                                                                                                                                                                                                        
-------------------------------------------------------------------------------------------------                                                                                                       
PROCEDURE Exp_Tables_Parallel                                                                                                                                                                   
  ( P_scn  IN NUMBER                                                                                                                                                                                    
  , P_dmp OUT VARCHAR2 )                                                                                                                                                                                
 IS                                                                                                                                                                                                     
   h1                  NUMBER(10);                                                                                                                                                                      
   v_dop               NUMBER := g_parallel;                                                                                                                                                            
   v_curr_scn          NUMBER := P_scn;                                                                                                                                                                 
   v_job_name_org      VARCHAR2(30)  := 'PX_'||TO_CHAR(sysdate,'YYYYMMDDHH24MISS');    -- PX: Parallel Execution                                                                                     
   v_job_name          VARCHAR2(30)  := v_job_name_org;                                                                                                                                                 
   v_dmp_file_name_org VARCHAR2(100) := lower(v_job_name||'.dmp');                                                                                                                                      
   v_dmp_file_name     VARCHAR2(100) := v_dmp_file_name_org;                                                                                                                                            
   v_log_file_name_org VARCHAR2(100) := lower(v_job_name||'.log');                                                                                                                                      
   v_log_file_name     VARCHAR2(100) := v_log_file_name_org;                                                                                                                                            
                                                                                                                                                                                                        
BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                        
-- drop master table for "orphaned job" if it exists                                                                                                                                                       
   for i in ( select 'DROP TABLE '||owner_name||'.'||job_name||' PURGE' stat                                                                                                                            
              from dba_datapump_jobs                                                                                                                                                                    
              where owner_name = USER                                                                                                                                                                   
                and instr(v_job_name, upper(job_name) ) > 0                                                                                                                                             
                and state = 'NOT RUNNING'                                                                                                                                                               
                and attached_sessions = 0 )                                                                                                                                                             
   loop                                                                                                                                                                                                 
     execute immediate i.stat;                                                                                                                                                                          
   end loop;                                                                                                                                                                                            
                                                                                                                                                                                                        
-- set out parameter                                                                                                                                                                                    
  P_dmp := v_dmp_file_name;                                                                                                                                                                             
                                                                                                                                                                                                        
-- start jobs in parallel                                                                                                                                                                               
  DBMS_OUTPUT.PUT_LINE('**** START SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );                                                                                                                 
  for counter in 0 .. v_dop-1                                                                                                                                                                           
  loop                                                                                                                                                                                                  
    v_job_name      := v_job_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                        
    v_dmp_file_name := v_dmp_file_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                   
    v_log_file_name := v_log_file_name_org||'_'||lpad(counter+1,3,0);                                                                                                                                   
                                                                                                                                                                                                        
    h1 := dbms_datapump.open                                                                                                                                                                            
      ( operation => 'EXPORT'                                                                                                                                                                           
      , job_mode  => 'SCHEMA'                                                                                                                                                                           
      , job_name  => v_job_name                                                                                                                                                                         
      , version   => 'LATEST');                                                                                                                                                                         
   DBMS_OUTPUT.PUT_LINE( 'Successfully opened job: '||v_job_name);                                                                                                                                     
                                                                                                                                                                                                        
     dbms_datapump.set_parallel(handle  => h1, degree => 1);                                                                                                                                            
     dbms_datapump.set_parameter(handle => h1, name  => 'KEEP_MASTER', value => 0);                                                                                                                     
     dbms_datapump.set_parameter(handle => h1, name  => 'ESTIMATE', value => 'BLOCKS');                                                                                                                 
     dbms_datapump.set_parameter(handle => h1, name  => 'INCLUDE_METADATA', value => 0);                                                                                                                
     dbms_datapump.set_parameter(handle => h1, name  => 'METRICS', value => 1);                                                                                                                         
     dbms_datapump.set_parameter(handle => h1, name  => 'FLASHBACK_SCN', value => v_curr_scn);                                                                                                          
   DBMS_OUTPUT.PUT_LINE('Successfully set job parameters for job '||v_job_name);                                                                                                                        
                                                                                                                                                                                                        
-- export just these schemas                                                                                                                                                                            
     dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_LIST', value => ' ''<SCHEMA01>'',''<SCHEMA02>'',''<SCHEMA03>'' ');                                                                                       
   DBMS_OUTPUT.PUT_LINE('Successfully set schemas for job '||v_job_name);                                                                                                                               
-- export tables only                                                                                                                                                                                   
     dbms_datapump.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR', value => q'[='TABLE']' );                                                                                                 
   DBMS_OUTPUT.PUT_LINE('Successfully set table export for job '||v_job_name);                                                                                                                          
-- export just these tables                                                                                                                                                                            
     dbms_datapump.metadata_filter(handle => h1, name => 'NAME_LIST', value => ' ''<TABLE01>'',''<TABLE02>'',''<TABLE03>'',''<TABLE03>'',''<TABLE04>'' ', object_path => 'TABLE');                                                                                                                                                                                                     
   DBMS_OUTPUT.PUT_LINE('Successfully set tables for job '||v_job_name);                                                                                                                                
-- export just a 1/v_dop part of the data                                                                                                                                                             
     dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => 'WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID), '||v_dop||')='||counter);                                                    
   DBMS_OUTPUT.PUT_LINE('Successfully set data filter for job '||v_job_name);                                                                                                                          
                                                                                                                                                                                                        
     dbms_datapump.add_file                                                                                                                                                                             
       ( handle => h1                                                                                                                                                                                   
       , filename => v_dmp_file_name                                                                                                                                                                    
       , directory => g_dmp_dir                                                                                                                                                               
       , filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE                                                                                                                                              
       , reusefile => 1 );                                                                                                                                                                              
   DBMS_OUTPUT.PUT_LINE('Successfully add dmp file: '||v_dmp_file_name);                                                                                                                               
                                                                                                                                                                                                        
     dbms_datapump.add_file                                                                                                                                                                             
       ( handle => h1                                                                                                                                                                                   
       , filename => v_log_file_name                                                                                                                                                                    
       , directory => g_dmp_dir                                                                                                                                                               
       , filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);                                                                                                                                             
   DBMS_OUTPUT.PUT_LINE('Successfully add log file: '||v_log_file_name );                                                                                                                              
                                                                                                                                                                                                        
     dbms_datapump.log_entry(handle => h1, message => 'Job '||(counter+1)||'/'||v_dop||' starting at '||to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss')||' as of scn: '||v_curr_scn );                       
     dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);                                                                                                                         
   DBMS_OUTPUT.PUT_LINE('Successfully started job '||(counter+1)||'/'||v_dop||' at '||to_char(sysdate,'dd-mon-yyyy hh24:mi:ss') ||' as of scn: '||v_curr_scn );                                        
                                                                                                                                                                                                        
     dbms_datapump.detach(handle => h1);                                                                                                                                                                
   DBMS_OUTPUT.PUT_LINE('Successfully detached from job' );                                                                                                                                            
                                                                                                                                                                                                        
  end loop;                                                                                                                                                                                             
  DBMS_OUTPUT.PUT_LINE('**** END SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );                                                                                                                   
                                                                                                                                                                                                        
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error: '||SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );
    DBMS_OUTPUT.PUT_LINE('**** END SETTING DATAPUMP PARALLEL_TABLE_EXPORT JOBS ****' );
    BEGIN
      -- the handle may be invalid if dbms_datapump.open itself failed
      dbms_datapump.detach(handle => h1);
      DBMS_OUTPUT.PUT_LINE('Successfully detached from job' );
    EXCEPTION
      WHEN OTHERS THEN NULL;
    END;
    RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );
END Exp_Tables_Parallel;                                                                                                                                                                        
                                                                                                   
-------------------------------------------------------------------------------------------------                                                                                                       
PROCEDURE Exec_Export
   ( P_wait IN PLS_INTEGER := 0 )                                                                                                                                                                   
  IS                                                                                                                                                                                                    
  v_scn         NUMBER;                                                                                                                                                                                     
  v_dmp         VARCHAR2(200);
  export_done   PLS_INTEGER := 0;                                                                                                                                                                              
                                                                                                                                                                                                        
BEGIN                                                                                                                                                                                                   
                                                                                                                                                                                                        
-- get current scn                                                                                                                                                                                      
  v_scn := Get_Current_Scn;                                                                                                                                                                             
                                                                                                                                                                                                        
-- start parallel export processes + detach                                                                                                                                                             
  Exp_Tables_Parallel( v_scn, v_dmp );                                                                                                                                                       

  if P_wait = 1 then
-- wait till all parallel export processes are finished 
-- check every 5 minutes                                                                                                                                                                                    
    export_done := 0;
    loop                                                                                                           
      for i in ( select 1                                                                                                                                                                               
                 from ( select count(*) cnt                                                                                                                                                             
                        from user_tables                                                                                                                                                                
                        where instr(table_name,upper(replace(v_dmp,'.dmp'))) > 0 )                                                                                                                   
                 where cnt = 0 )                                                                                                                                                                        
      loop                                                                                                                                                                                              
        export_done := 1;                                                                                                                                                      
      end loop;
    
      if export_done = 1 then
        exit;
      end if;
      Sleep(300000);
    end loop; 
  end if;
                                                                                                                                                                                                        
EXCEPTION                                                                                                                                                                                               
  WHEN OTHERS THEN                                                                                                                                                                                      
    DBMS_OUTPUT.PUT_LINE('Error: '||SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                              
    RAISE_APPLICATION_ERROR( -20010, SQLERRM||' - '||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );                                                                                                             
END Exec_Export;                                                                                                                                                                                
                                                                                                                                                                                                        
--------------------------------------------------------------------------------------------------------                                                                                                
END Datapump_Parallel_Exp_Pck;
/

SHOW ERRORS;
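The heart of the package above is the SUBQUERY data filter: each of the v_dop detached jobs exports only the rows whose block number satisfies MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID), v_dop) = counter. A minimal Python sketch of why those slices are disjoint and complete (the block numbers and the degree of parallelism are invented for the demo):

```python
# Sketch of the row-slicing scheme used by the SUBQUERY data filter:
# job number `counter` gets rows whose block number satisfies
# MOD(block_number, v_dop) = counter.
dop = 4                       # hypothetical degree of parallelism (v_dop)
block_numbers = range(1000)   # hypothetical ROWID block numbers

slices = [
    [b for b in block_numbers if b % dop == counter]
    for counter in range(dop)
]

# Disjoint: no block number appears in two slices.
assert all(
    set(slices[i]).isdisjoint(slices[j])
    for i in range(dop) for j in range(i + 1, dop)
)
# Complete: together the slices cover every block exactly once.
assert sorted(b for s in slices for b in s) == list(block_numbers)
```

Because every block number falls into exactly one residue class, no two jobs export the same row and no row is skipped; the same argument applies to the real ROWID block numbers in the table.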

The post DIY Parallelization with Oracle DBMS_DATAPUMP appeared first on AMIS Oracle and Java Blog.

BLOG Technoark 2017 Conference – Blockchain, beyond the bitcoin.

Yann Neuhaus - Thu, 2017-02-23 07:15

BLOG Technoark 2017 Conference – Blockchain, beyond the bitcoin.
and
how to apply blockchain to enterprise security and data management.

Here is a brief personal impression of the TechnoArk 2017 conference, held at the TechnoArk center in Sierre (Switzerland).
The TechnoArk center is built on three essential parts:

• The Ark Accelerator leverages new technologies and guides them to a commercial application. It is also responsible, if necessary, for patenting them.

• The Ark Incubator offers professional coaching to start-up companies. It strengthens the chances of success and promotes their insertion in the economic fabric.

• The Innovation department offers proactive support in consolidating a company's achievements, and especially in developing new business potential.

Summary of the event:

Born with Bitcoin in 2008 and created by Satoshi Nakamoto, blockchain is a technological concept, not a piece of software: it is both a new technology and a new decentralized form of organization.
A blockchain is at the same time a peer-to-peer network and a distributed database, which makes the historical record of transactions between two parties unfalsifiable.
The aim of the event was to show that blockchain can be used in many domains, such as aeronautics, health, law, energy, eGovernment and gaming.
One presentation was given per domain.
Keynotes were organized around case studies; of special interest:
• Fintech: blockchain is seen as a foundational technology, but it will take time before it becomes a standard. There is no production application yet; everything is in beta. Smart contracts could enable real-time transactions and immediate reconciliation between traders and sellers.
• eHealth: blockchain, together with Hyperledger software, can support follow-up in oncology, a long-term disease that involves many parties: doctors, pharmacists, hospital executives…
• eGov: Mike Gault from Guardtime explained how the Estonian government already uses blockchain for enterprise security and data management.
• Energy: Stephan Tual (founder of the startup slock.it) explained how blockchains could serve renewable energies, notably for billing the energy consumed.
• Aeronautics: Stephane Cheickh explained how blockchains could be useful in civil aviation, both for supervising the use of drones by businesses and individuals and for baggage management.
• Other keynotes covered more specialized fields (gaming, law and so on).

What we heard:
“Blockchain is more than a technology; it is a strategy…”
“Blockchain is the most disruptive technology I have ever seen…”

Technical part:

As it is very difficult to find precise technical information about blockchains, here are some interesting elements to keep in mind for future markets and contracts:

Definition:
A blockchain is a ledger of records arranged in data batches called blocks, which use cryptographic validation to link themselves together.
Put simply, each block references and identifies the previous block through a hashing function, forming an unbroken chain, hence the name.

Source: Blockchain France ©
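The chaining just described can be sketched in a few lines. This is a toy illustration only; the structure and the choice of SHA-256 are assumptions for the demo, not how any particular blockchain is implemented:

```python
import hashlib
import json

def block_hash(block):
    # Hash a canonical serialization of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block records the hash of the previous one;
    # that reference is what links the chain together.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain):
    # Tampering with any earlier block changes its hash and
    # breaks the prev_hash link of the block that follows it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "genesis")
append_block(chain, "tx: A pays B")
append_block(chain, "tx: B pays C")
assert verify(chain)

chain[1]["data"] = "tx: A pays Mallory"  # tamper with history
assert not verify(chain)
```

This is exactly the "unbroken chain" property: rewriting history requires recomputing every later block, which is what makes tampering easy to detect.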

 

Blocks in detail:

Source: Blockchain France ©

The operation of a ledger can be described like this:

Source: Blockchain France ©

 

• We can immediately see that the blockchain ledger does not store the data itself, only fingerprints.
• The second point shows that the ledger proves the existence of an object at a precise moment, but not its authenticity.
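The fingerprint-only point can be illustrated with a toy example (the document contents and names are invented for the demo):

```python
import hashlib

document = b"contract between Alice and Bob, 2017-02-23"

# Only this fingerprint would be anchored in the ledger, not the document.
fingerprint = hashlib.sha256(document).hexdigest()

# Later, anyone holding the document can show it matches the recorded
# fingerprint, proving it existed when the fingerprint was written.
assert hashlib.sha256(document).hexdigest() == fingerprint

# But the fingerprint alone says nothing about who authored the document,
# which is why proof of existence is not proof of authenticity.
altered = b"contract between Alice and Mallory, 2017-02-23"
assert hashlib.sha256(altered).hexdigest() != fingerprint
```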

 

What is important to remember is that a blockchain is characterized by the following features:

Source: Blockchain France ©

• usually contains financial transactions
• is replicated across a number of systems in almost real-time
• usually exists over a peer-to-peer network
• uses cryptography and digital signatures to prove identity and authenticity and to enforce read/write access rights
• can be written by certain participants
• can be read by certain participants, maybe a wider audience, and
• has mechanisms to make it hard to change historical records, or at least make it easy to detect when someone is trying to do so

 

Additional information:

A chain of blocks is a kind of independent, transparent and permanent database, coexisting in several places and shared by a community. That is why it is sometimes called a Mutual Distributed Ledger (MDL).
There is nothing new about MDLs; their origins date back to the 1976 Diffie-Hellman research paper New Directions in Cryptography. But for a long time they were considered complicated and not quite safe.
It took Bitcoin's simpler blockchain implementation to change things. The permanence, security and distributed nature of Bitcoin ensured that it was a currency maintained by a growing community but controlled by absolutely no one, and incapable of being manipulated.
Throughout the event, it was objectively demonstrated that this concept has many advantages.
On the other hand, things are more complicated with regard to Swiss law (and law in general), which is not yet ready to absorb the paradigm shifts induced by this new technology. The impacts on society should also not be overlooked.

Conclusion:
Today, big companies are cautiously starting to use this concept because they believe that the first to adopt blockchains will be ahead of the competition.
And if the concept should prove dangerous, the money invested will at least have helped them better understand the danger.

Ludovic HAYE

dbi-services consultant

 

 

The post BLOG Technoark 2017 Conference – Blockchain, beyond the bitcoin. appeared first on the dbi services blog.

Oracle and Tech Mahindra Deliver Industry’s First VoLTE as a Service Offering

Oracle Press Releases - Thu, 2017-02-23 07:00
Press Release
Oracle and Tech Mahindra Deliver Industry’s First VoLTE as a Service Offering Oracle Communications and Tech Mahindra helping drive VoLTE adoption by bringing operators an affordable, powerful VoLTE solution

Redwood Shores, Calif.—Feb 23, 2017

Oracle today announced that Tech Mahindra, a leading system integrator for network infrastructure services, and Oracle Communications have partnered to deliver an end-to-end VoLTE-as-a-Managed-Service solution based on Oracle’s IMS Core and Signaling products. The partnership represents the industry’s first end-to-end VoLTE solution built on best-of-breed technology. The solution offers operators faster time to market with new VoLTE services, increased voice quality and greater network efficiency, while significantly reducing cost and complexity.

Today’s connected world places considerable demands on traditional communication services and the underlying networks. As service providers grapple with the move to an all-IP future, the resources needed to upgrade networks and services are a significant obstacle. Wireless operators have long recognized the need to adopt VoLTE in order to remain relevant and prepare for interoperability with other networks, but the price and difficulty of this adjustment have been prohibitive.

Tech Mahindra’s VoLTE-as-a-Managed-Service solution, powered by Oracle Communications technology, simplifies the path to an all-IP network by offering a fully virtualized solution that runs on common off the shelf hardware rather than relying on proprietary networking equipment. A typical service provider with an LTE data network can expect to service its first Oracle-enabled VoLTE call within 3-6 months of deploying the solution, often at significant cost savings compared to traditional vendors and in-house solutions.

“The need to drive increased network efficiency and coverage while offering enhanced voice quality necessitates the move to Voice-over-Packet technologies,” said Manish Vyas, CEO Tech Mahindra Network Services. “Leveraging Oracle technology, Tech Mahindra is enabling service providers to adopt VoLTE in a simpler and more cost-effective way, with a powerful end-to-end pre-integrated solution that is virtualized and offers industry leading capabilities at each function.”

 The VoLTE-as-a-Managed-Service solution is built on Oracle products that are used today in service providers around the world. Designed, deployed and operated by Tech Mahindra, it empowers service providers to offer the VoLTE services their customers demand with reduced operational costs and without requiring any internal skillset realignment.

“Oracle Communications is laser-focused on accelerating service providers’ transformation toward the software-centric networks of the future,” said Douglas Suriano, Senior Vice President and General Manager at Oracle Communications. “Tech Mahindra brings valuable experience in managed services, and this partnership will enable us to deliver the industry’s first end-to-end VoLTE solution to service providers globally.”

The Oracle Communications technologies supporting the new VoLTE as a Service offering include Oracle Communications Core Session Manager, Oracle Communications Session Border Controller, Oracle Communications Evolved Communications Application Server, Oracle Communications Policy Management, Oracle Communications Diameter Signaling Router and Oracle Communications Applications Orchestrator. To learn more about these products and other Oracle Communications offerings, visit: http://bit.ly/2kLCqqZ.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Shalini Singh
Tech Mahindra
+91.965.446.3108
shalini.singh7@techmahindra.com
About Tech Mahindra

Tech Mahindra represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates and the Society to Rise™. We are a USD 4.2 billion company with 117,000+ professionals across 90 countries, helping over 837 global customers including Fortune 500 companies. Our convergent, digital, design experiences, innovation platforms and reusable assets connect across a number of technologies to deliver tangible business value and experiences to our stakeholders. Tech Mahindra is amongst the Fab 50 companies in Asia (Forbes 2016 list).

We are part of the USD 17.8 billion Mahindra Group that employs more than 200,000 people in over 100 countries. The Group operates in the key industries that drive economic growth, enjoying a leadership position in tractors, utility vehicles, after- market, information technology and vacation ownership.

Connect with us on www.techmahindra.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Shalini Singh

  • +91.965.446.3108

BBVA Banks on Oracle to Deliver a Better Mobile Experience to Customers

Oracle Press Releases - Thu, 2017-02-23 07:00
Press Release
BBVA Banks on Oracle to Deliver a Better Mobile Experience to Customers Spanish financial services provider chooses Oracle to enable customers to open accounts with mobile devices

Redwood Shores, Calif.—Feb 23, 2017

Differentiating itself from competitors while offering an enhanced experience to customers, Spanish bank BBVA is using Oracle Communications technology to enable customers to open new accounts via their mobile devices in minutes.

The banking industry is under heavy scrutiny to validate and protect customer information.  BBVA has chosen a solution with comprehensive security features to enhance efforts to meet EU compliance requirements for confidential documentation and secure management of personal data, as well as standards for authentication, reporting and monitoring. BBVA chose Oracle Communications WebRTC Session Controller and Quobis Sippo WebRTC Application Controller as the foundation for its new platform because the technology is easily configured and integrates directly with the company’s existing internal systems.

“We live in an age of convenience where people can do everything from their mobile phones, whether it is to open a new account or to pay,” said Ignacio Teulon Ramírez, Digital Transformation - Customer Experience Director, BBVA. “We want to provide our customers with services in the way they prefer to consume them, and we want to provide them the best experience possible.”

Today, BBVA can provide a rich, real-time audio and video experience on a mobile phone or tablet. Jointly delivered by Quobis and in partnership with BT, the solution enables BBVA to validate customers’ identity so customers and prospects can quickly open a new account. The sessions can also be recorded for compliance purposes.

“Digital technologies are giving the financial services industry the opportunity to leap forward and provide products and services that match the digital lifestyle of their customers,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “Our project with BBVA shows how large banks can differentiate themselves by creating a new banking experience. They have a clear vision and an understanding of their customers’ needs, as well as the technology that allows them to innovate while integrating seamlessly with their existing systems.”

Quobis and BT are Gold level members of the Oracle PartnerNetwork (OPN).

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Quobis

Quobis is leading the industry of browser-based communication solutions for service providers and enterprises with its award-winning Sippo product family. For more information about Quobis, visit www.quobis.com.

About BT

BT is one of the world’s leading providers of communications services and solutions, serving customers in 180 countries. For more information about BT visit http://www.bt.com.

About BBVA

BBVA is a customer-centric global financial services group founded in 1857. The Group is the largest financial institution in Spain and Mexico, and it has leading franchises in South America and the Sunbelt Region of the United States; it is also the leading shareholder in Garanti, Turkey’s biggest bank by market capitalization. Its diversified business is focused on high-growth markets and it relies on technology as a key sustainable competitive advantage. Corporate responsibility is at the core of its business model. BBVA fosters financial education and inclusion, and supports scientific research and culture. It operates with the highest integrity, a long-term vision and applies the best practices.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Ravello CS with Big Data VM

Big Data Lite Virtual Machine is the most straight-forward way to start learning about Oracle Big Data Stack or even develop demos/proof of concepts on top of it (don't forget however that BD Lite...

Categories: DBA Blogs

OBIEE 12c Time Dimension: Logical Sequence Number

Rittman Mead Consulting - Thu, 2017-02-23 02:59
 Logical Sequence Number

The key component of any successful OBIEE implementation is the metadata model known as the repository (RPD). Getting it right is sometimes considered "black magic", and small mistakes in the RPD can impact all the exposed Subject Areas, resulting in poor performance or, even worse, wrong results.

Working on an RPD requires dedicated knowledge of the tool, and we share it in our RPD modelling training for both OBIEE 11g and OBIEE 12c.

If you have ever worked on RPD modelling, one of the settings you have surely encountered is the Time dimension. This blog post, written back in 2007, explains the process of setting up a time dimension for OBIEE 10g. The process didn't change significantly until recently when, in 12.2.1, Logical Sequence Numbers were introduced. As per Oracle's documentation, this new feature "optimizes time series functions and in some cases improves query time", and in this post we'll see how to configure it and its impact on time-series calculations. The examples shown below are based on Oracle Sampleapp v607, a really good source of modelling and front-end examples.

Usual Time-series Query Behaviour

Time-series functions like Ago, ToDate and, more recently, PeriodRolling allow end users to compare results from different periods just by specifying the level in the time dimension hierarchy and the number of periods to look backwards or forwards. For example, if you needed to compare the current month's sales revenue with the previous month's figure, you would end up writing a formula like

 AGO("F0 Sales Base Measures"."1- Revenue","H0 Time"."Month", 1)

Where:

  • AGO: is the Time-series function being called
  • "F0 Sales Base Measures"."1- Revenue": is the metric
  • "H0 Time"."Month": is the time hierarchy level
  • 1: is the number of periods (months in our case) to look back in history

Once the time-series metric has been created, it can be used in an analysis like the following to compare the revenue of 2015-04 with that of the previous month.

 Logical Sequence Number

The analysis generates the following Logical SQL which basically lists the columns retrieved and the filters applied.

SELECT  
   0 s_0,
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2,
   "A - Sample Sales"."Time Series"."127  Mth Ago Rev  (Fix Time Lvl)" s_3
FROM "A - Sample Sales"  
WHERE  
("Time"."T02 Per Name Month" = '2015 / 04')
ORDER BY 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY  

The translation to source SQL depends heavily on the data model created in the RPD and on the data source type. In our example an Oracle SQL statement is generated containing the following steps:

  • Sequence Generation: a DENSE_RANK function is used to create a dense sequence based on Per_Name_Month, the chronological key defined in the time hierarchy for the month level. Mth_Key is also part of the query since it's used in the join between the dimension and fact tables.
WITH  
OBICOMMON0 AS (select DENSE_RANK() OVER ( ORDER BY T653.Per_Name_Month) as c1,  
     T653.Mth_Key as c2,
     T653.Per_Name_Month as c3
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ),
  • Period Shifting: The sequence generated above is now shifted by the number of periods defined in the column formula (D1.c1 + 1), in our example 1.
SAWITH0 AS (select D1.c1 + 1 as c1,  
     D1.c2 as c2,
     D1.c3 as c3
from  
     OBICOMMON0 D1),
  • Ago Period Query: Using the period shifted query the historical record (or set of records) is retrieved.
SAWITH1 AS (select distinct D1.c1 as c1,  
     D1.c3 as c2
from  
     OBICOMMON0 D1),
SAWITH2 AS (select sum(T418.Revenue) as c1,  
     D3.c2 as c2
from  
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ ,
     SAWITH0 D4,
     SAWITH1 D3
where  ( T418.Bill_Mth_Key = D4.c2 and D3.c1 = D4.c1 and D3.c2 = '2015 / 04' )  
group by D3.c2, D4.c3),  

The usage of the period-shifted query is explained visually by the image below.

 Logical Sequence Number

  • Selected Period Query: the query for the selected period, in our case 2015-04, is executed using the standard time dimension
SAWITH3 AS (select sum(T418.Revenue) as c1,  
     T653.Per_Name_Month as c2
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ,
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ 
where  ( T418.Bill_Mth_Key = T653.Mth_Key and T653.Per_Name_Month = '2015 / 04' )  
group by T653.Per_Name_Month)
  • Result Set Joining: results coming from the Ago Period and Selected Period queries are then joined with a full outer join.
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select D1.c1 as c1,  
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4
from  
     (select 0 as c1,
               coalesce( D1.c2, D2.c2) as c2,
               D2.c1 as c3,
               D1.c1 as c4,
               ROW_NUMBER() OVER (PARTITION BY coalesce( D1.c2, D2.c2) ORDER BY coalesce( D1.c2, D2.c2) ASC) as c5
          from 
               SAWITH2 D1 full outer join SAWITH3 D2 On D1.c2 = D2.c2
     ) D1
where  ( D1.c5 = 1 )  
order by c2 ) D1 where rownum <= 5000001  

As you can see, it's a rather complex set of instructions involving several steps, including analytical functions like the DENSE_RANK() in the sequence generation. In our case we were lucky that the source system provided the DENSE_RANK() function; with other sources the sequence generation needs to be calculated directly by the BI Server (OBIS), with a possible degradation of performance.
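To make the mechanics concrete, here is a minimal pure-Python sketch of what the generated SQL above does (an illustration of the logic only, not Oracle's actual implementation): build a dense sequence over the month names, shift it by the AGO offset, and join back to find the historical period.

```python
def dense_rank(values):
    """Dense rank: a gapless sequence, with equal values sharing a rank."""
    return {v: i + 1 for i, v in enumerate(sorted(set(values)))}

def ago_period(months, selected, offset=1):
    """Return the month `offset` periods before `selected`, mimicking the
    Sequence Generation + Period Shifting + joining steps of the AGO query."""
    seq = dense_rank(months)                           # Sequence Generation
    shifted = {m: s + offset for m, s in seq.items()}  # Period Shifting
    # Joining: the ago period is the month whose shifted sequence number
    # equals the selected month's unshifted sequence number
    for month, s in shifted.items():
        if s == seq[selected]:
            return month
    return None  # no history available that far back

months = ["2015 / 01", "2015 / 02", "2015 / 03", "2015 / 04"]
print(ago_period(months, "2015 / 04", 1))  # -> 2015 / 03
```

The same join-on-shifted-sequence idea is what the OBICOMMON0/SAWITH0 blocks in the generated SQL express.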

What is Logical Sequence Number?

The Sequence Generation step mentioned above can sometimes be the bottleneck of the whole query especially when the time dimension is complex or huge in volume since DENSE_RANK() is a costly analytical function.

All OBIEE's time-series functions need in order to work is a pure sequence, or in Oracle's words:

Sequence numbers are enumerations of time dimensional members at a certain level. The enumeration must be dense (no gaps) and must correspond to a real time order. For example, months in a year can be enumerated from 1 to 12.

So what if we could find a way of pre-calculating them and storing them in the table, or of calculating them on the fly using functions less expensive than DENSE_RANK()?

This is the idea behind the Logical Sequence Number (LSN): a way of sparing the BI Server (OBIS) the DENSE_RANK() execution, by providing either a pre-calculated sequence column in the source table or a cheaper function to calculate it on the fly from existing columns.

The formula behind a Logical Sequence Number must resolve to a number (no varchars or dates are allowed) and can be either:

  • Absolute: when the sequence is consistent and doesn't need any external reference, e.g. calendar years are a self-defined sequence
  • Relative: when the sequence is relative to a parent level, e.g. creating a sequence for months between 1 and 12 would need the calendar year as the parent level to provide correct cross-year references

As the name says, "Sequence Numbers" must be sequential: no gaps can be included. Year and month numbers are good examples of this.

But what if we wanted to provide an absolute sequence number at month level?
One could think of using a month key in the YYYYMM format, but this is not a sequence: 201701 <> 201612 + 1. The best way of generating the sequence number is to add it as a column in the database table.
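A quick way to see why, and how a dense alternative could be derived with cheap arithmetic, is sketched below (the function name is illustrative, not part of SampleApp):

```python
def month_abs_seq(yyyymm):
    """Derive a gapless absolute month sequence from a YYYYMM integer key
    with simple arithmetic instead of a DENSE_RANK()."""
    year, month = divmod(yyyymm, 100)
    return year * 12 + month

# The raw YYYYMM key has gaps at year boundaries...
print(201701 - 201612)                                # 89
# ...while the derived sequence is dense across them
print(month_abs_seq(201701) - month_abs_seq(201612))  # 1
```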

We'll take as an example the SAMP_TIME_MTH_D table that can be found in Sampleapp v607, containing the MTH_KEY column in the YYYYMM format.
 Logical Sequence Number

I'll add to SAMP_TIME_MTH_D a column MONTH_ABS_SEQ containing the absolute sequence number. The formula behind the column is exactly what Oracle was doing under the covers using a DENSE_RANK.

SELECT MTH_KEY, DENSE_RANK() OVER (ORDER BY MTH_KEY) MONTH_ABS_SEQ FROM SAMP_TIME_MTH_D  

And the end result, as expected, is:

 Logical Sequence Number

How are Logical Sequence Number Used?

Previously I described the two types of Logical Sequence Numbers, Absolute and Relative; each has its own use cases:

  • PERIODROLLING: this function uses the absolute LSN to calculate the starting sequence number based on the selected one: e.g. looking at the previous image, a 6-month rolling window starting from 2009-02 will include data from sequences in the range [9-14].
  • TO_DATE: uses the relative LSN, e.g. a Year-To-Date measure shown by month will use the month's relative LSN to calculate the previous months in the same year.
  • AGO: the Ago function uses both absolute and relative LSNs. Absolute numbers are used if the grain of the query and that of the AGO shift are at the same level, e.g. a yearly analysis of actual vs. previous year. However, when the grain of the shift in the AGO is higher than the grain of the query, relative LSNs are used, e.g. a monthly analysis of actual vs. previous year.
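The three behaviours above can be sketched in a few lines of Python (a simplified illustration with made-up data, not the BI Server's actual logic):

```python
# Each month row: (year, month, absolute LSN, relative LSN 1-12 within year).
# Absolute LSNs here start at 1 for 2008-01, matching the image above.
rows = [(y, m, 12 * (y - 2008) + m, m)
        for y in (2008, 2009) for m in range(1, 13)]

def period_rolling(rows, end_abs, width):
    """PERIODROLLING: absolute LSNs in [end_abs - width + 1, end_abs]."""
    return [(y, m) for y, m, abs_lsn, _ in rows
            if end_abs - width + 1 <= abs_lsn <= end_abs]

def year_ago(rows, year, rel):
    """AGO with a year shift at month grain: same relative LSN, previous year."""
    return [(y, m) for y, m, _, rel_lsn in rows
            if y == year - 1 and rel_lsn == rel]

# 6-month rolling ending at 2009-02 (absolute LSN 14) covers LSNs 9..14:
print(period_rolling(rows, 14, 6))  # six months, (2008, 9) through (2009, 2)
print(year_ago(rows, 2009, 2))      # [(2008, 2)]
```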
How to Configure Logical Sequence Numbers

Logical Sequence Numbers are a new feature and as such require additional settings in the time dimension hierarchy in order to work. In our case we'll add two sequences: an absolute one at calendar year level and a relative one at calendar month level.
We'll add the sequences directly as formulas in the repository; however, those formulas should be pushed down as columns in the database table if optimal performance is sought.

In order to set the Logical Sequence Numbers we need to open the RPD (the SampleApp one in this test), and select the Time dimension we want to change.
 Logical Sequence Number

After selecting any level apart from the Grand Total (the top of the hierarchy), a new tab called "Sequence Numbers" should be visible. However if, like in our case, none of the columns at that level are integers or doubles, the sequence number selector is disabled.

 Logical Sequence Number

In order to enable the selector, we first need to create a sequence column in our dimension and bring it to the correct level in the hierarchy. For the Year level there is already an integer column named "T35 Cal Year" which can be used as a sequence. We simply need to drag the column to the Year level in the Time hierarchy and set it as the Absolute Sequence.

 Logical Sequence Number

I can do the same with the Month level in the hierarchy and the "T32 Cal Month" column. Note that the column contains only the month enumeration from 1 to 12, so we need to set the sequence number as Relative to the Year level.

 Logical Sequence Number

Please note that both absolute and relative LSNs can (and should) be entered since, as discussed above, each has different use cases. In addition, relative LSNs should be set for all logical parent levels in the hierarchy, since they will be used only if the grain of the time shift matches that of the parent level. For example, a monthly LSN based on the Year logical level will only be used in AGO functions with a year shift, not in the case of a quarterly shift.

For optimal usage, every level of the time hierarchy should have one absolute LSN and a relative LSN for each of the parent levels in the hierarchy.

Impact on SQL

It's time now to review the SQL generated by our analysis and check the differences with the old-school time-series query.

When creating an analysis at year level like the following
 Logical Sequence Number

As expected, the BI Server (OBIS) uses the CAL_YEAR column as the sequence instead of the DENSE_RANK() function over the PER_NAME_YEAR column.

WITH  
OBICOMMON0 AS (select T795.CAL_YEAR as c1,  
     T795.QTR_KEY as c2,
     T795.PER_NAME_YEAR as c3
from  
     BISAMPLE.SAMP_TIME_QTR_D T795 /* D03 Time Quarter Grain */ )

When using TO_DATE, the relative sequence is used as well, like in the following example where the measure "166 Revenue Year To Date" is defined by the formula:

TODATE("01 - Sample App"."F0 Sales Base Measures"."1- Revenue",  "01 - Sample App"."H0 Time"."Year" )  

 Logical Sequence Number

The following query is generated; note the usage of Cal_Year and Cal_Month in the sequence generation query instead of the DENSE_RANK() function, as per the RPD settings mentioned above.

WITH  
OBICOMMON0 AS (select T653.Cal_Year as c1,  
     T653.Cal_Month as c2,
     T653.Per_Name_Month as c3,
     T653.Per_Name_Year as c4
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ )
...
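The TO_DATE behaviour itself can be sketched as follows (a hand-rolled illustration with invented figures, not the BI Server's code): at month grain, a Year-To-Date measure sums all months of the selected year whose relative LSN is less than or equal to the selected month's.

```python
def year_to_date(revenue, year, month):
    """Sum the measure for all months of `year` up to and including `month`,
    using the month's relative LSN (its 1-12 position within the year)."""
    return sum(v for (y, m), v in revenue.items() if y == year and m <= month)

# Invented monthly revenue figures, keyed by (year, relative month LSN)
revenue = {(2015, 1): 100, (2015, 2): 150, (2015, 3): 200, (2015, 4): 250}
print(year_to_date(revenue, 2015, 3))  # -> 450
```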
Are Logical Sequence Numbers Useful?

Most of the time the bottleneck when using time series is not the sequence generation, since the time dimension's cardinality is rarely big enough to produce a noticeable delay in query time, but rather the poorly managed calculations made at query time on top of massive fact tables.

Don't expect LSNs to solve all your performance problems with time series. However, Logical Sequence Numbers give OBIEE a way of pre-cooking part of the calculation, so in theory they should help performance. The small effort required to set them up centrally in the time hierarchy is outweighed by the benefits at query time, without having to touch any pre-defined time-series calculation.

If you do have performance problems with your OBIEE system, or would like to ensure that a system you’re building will be performant from the outset, please get in touch to find out more about our Performance Analytics service!
We also provide expert OBIEE training, implementations, QA and health checks - to find out more about how we can help you, please contact us!

Categories: BI & Warehousing

Reminder: Upgrade WebCenter Portal 11.1.1.8 to 11.1.1.9

Steven Chan - Thu, 2017-02-23 02:06

Oracle Fusion Middleware products periodically get new Patch Set updates. When a new Patch Set is released, a 12-month Grace Period for the previous Patch Set begins. Once that Grace Period ends, no new patches for the previous Patch Set will be released.

For more details, see:

Oracle WebCenter Portal was formerly called "Oracle WebCenter Suite". WebCenter Portal 11.1.1.8 was released in April 2013. WebCenter Portal 11.1.1.9 was released in May 2015, which means that the Grace Period for WebCenter Portal 11.1.1.8 ended after May 2016.

All E-Business Suite users running WebCenter Portal 11.1.1.8 should upgrade to WebCenter Portal 11.1.1.9 to remain under Error Correction Support. WebCenter Portal 11.1.1.x is covered by Premier Support through December 2018 and by Extended Support through December 2021.

Related Articles

Categories: APPS Blogs
