Feed aggregator

Connection between Mysql and SQL Developer Datamodeler

Tom Kyte - 2 hours 25 min ago
Hi Tom, I'm trying to connect Oracle SQL Developer Data Modeler with MySQL but nothing works. I can connect Oracle SQL Developer with MySQL with a J Connector, but when I do the same steps in SQL Data Modeler (add an extension in the Third Party JDBC) ...
Categories: DBA Blogs

How to copy a DBMS_XMLDOM.domnode between DBMS_XMLDOM.domdocument objects

Tom Kyte - 2 hours 25 min ago
I'm having trouble using the DBMS_XMLDOM package. I can hardly find more online than the API (which isn't that great) and a few "here's how you build an XML document" introductions, with no tutorials on more advanced uses. I am trying to write a file...
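For the specific question in the title, DBMS_XMLDOM does provide an IMPORTNODE function that copies a node into another document's context without touching the source document. A minimal sketch (the sample XML values are made up for illustration):

declare
  src  dbms_xmldom.domdocument;
  dst  dbms_xmldom.domdocument;
  n    dbms_xmldom.domnode;
  cpy  dbms_xmldom.domnode;
  buf  varchar2(4000);
begin
  src := dbms_xmldom.newdomdocument(xmltype('<a><b>1</b></a>'));
  dst := dbms_xmldom.newdomdocument(xmltype('<root/>'));
  -- importNode copies the node (deep => true includes its children)
  -- into the target document's context; the source stays intact
  n   := dbms_xmldom.makenode(dbms_xmldom.getdocumentelement(src));
  cpy := dbms_xmldom.importnode(dst, n, true);
  cpy := dbms_xmldom.appendchild(
           dbms_xmldom.makenode(dbms_xmldom.getdocumentelement(dst)), cpy);
  dbms_xmldom.writetobuffer(dst, buf);
  dbms_output.put_line(buf);
  dbms_xmldom.freedocument(src);
  dbms_xmldom.freedocument(dst);
end;
/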
Categories: DBA Blogs

Database Queries Slow After DB Re-Import

Tom Kyte - 2 hours 25 min ago
After initial creation of user and import of db, queries were fast. Used following commands: " CREATE USER TESTUSR IDENTIFIED BY TESTUSR default tablespace TESTTAB quota unlimited on TESTTAB ACCOUNT UNLOCK; GRANT CREATE SESSION TO TESTUSR; GRAN...
Categories: DBA Blogs

RMAN using dbms_pipe

Tom Kyte - 2 hours 25 min ago
Hello I wrote a PL/SQL package which allows me to control RMAN out of SQL*Plus using the RMAN syntax. To implement it I used the dbms_pipe/dbms_scheduler functionality which starts the external rman process on OS level and call the rman binary ...
Categories: DBA Blogs

Direct path read temp wait event issue

Tom Kyte - 2 hours 25 min ago
Hi Tom, I have a series of jobs configured daily, and every day they complete within the expected time of 7 mins. But for the last 4 days this job has been taking more than 2 hours. It's doing more sort operations & waiting...
Categories: DBA Blogs

select from table type

Tom Kyte - 2 hours 25 min ago
I have created a type; below is the code of the type: create or replace type tt_name as table of varchar2(250); Now I want to perform a select on this type in PL/SQL code: select listagg(column_name,';') within group(order by colum...
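The excerpt is cut off, but for a SQL-level nested table type like this one, a minimal sketch (sample values made up) would query it through the TABLE() operator and the COLUMN_VALUE pseudo-column:

select listagg(column_value, ';') within group (order by column_value) as all_names
from   table(tt_name('Alice', 'Bob', 'Carol'));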
Categories: DBA Blogs

When an index seek operator is not always your friend

Yann Neuhaus - Mon, 2016-09-26 12:40

Did you ever consider an index seek an issue? Well, let's talk about a story with one of my customers where the context is pretty simple: a particular query that is out of the range of the application's performance requirements (roughly 200 ms of execution time on average). The execution plan of the query was similar to what you may see in the picture below:

 

[Figure: query execution plan]

At first glance, there is no obvious way to improve the performance of the above query, right? The query was similar to the following one (it differs in some details from the real one, but that does not matter in our case):

DECLARE @id VARCHAR(10) = 'LPKL';

SELECT 
	[Num_DUT],
	[Etat],
	[Description],
	Etape_Process
FROM 
	[dbo].[evenements]
WHERE 
	actif IS NULL 
	AND [date] >= '20160101'
	AND SUBSTRING(DM_Param, 12, 4) = @id

 

Here is the definition of the dbo.evenements table:

create table dbo.evenements
(
	[date] datetime,
	Actif BIT NULL,
	Etape_Process VARCHAR(50),
	DM_Param VARCHAR(50),
	Num_DUT INT,
	[Etat] VARCHAR(10),
	[Description] VARCHAR(50)
)

 

Let's quickly set the context. This table is constantly filled with information from different sensors. The question that came to my mind at this moment was: why is an index seek used here, given the query predicate? After all, we noticed a parallel execution plan (cost threshold for parallelism is at its default), which raises questions about the index seek's behavior. The index used in this query was as follows:

CREATE NONCLUSTERED INDEX [idx_dbi_Evenements_actif] ON [dbo].[evenements]
(
	[Actif] ASC,
	[date] ASC,
	[Etape_Process] ASC,
	[DM_Param] ASC
)
INCLUDE ( 	[Num_DUT],
	[Etat],
	[Description]) 
WHERE [date] >= '20160101'

Regarding the index definition, if we take a close look at the WHERE clause of the query, we may suspect that using a seek operator in the execution plan is a little bit tricky.

[Figure: seek operator info]

Indeed, predicting the selectivity of the SUBSTRING part of the clause presents a big challenge for SQL Server in this case. The cardinality estimate seems to be wrong even after updating the dbo.evenements statistics.

[Figure: cardinality estimate issue]

So the next question that came to mind concerned the index seek operation itself. Is it really an index seek? I based my doubt on the use of parallelism in the execution plan. Why use parallelism if I just have to get 2,100 rows, which represent only 0.2% of all the data in my case? I remembered an old article written by Paul White in 2011 with the title When is a Seek not a Seek?

So, referring to this article, I moved directly to the output of SET STATISTICS IO:

Table ‘evenements’. Scan count 5, logical reads 10590, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Well, the situation becomes more obvious here if we take a look at the number of pages in the nonclustered index.

[Figure: index pages]

In fact, each seek is a partial scan that searches the leaf level for the values that satisfy the SUBSTRING(DM_Param, 12, 4) predicate. Why 5 scans here? Well, my assumption is that the SQL Engine used 4 threads in parallel to perform the partial scans, as shown below:

[Figure: parallel threads]

Let's do some math here. Each thread fetches approximately 2,600 pages, so, leaving aside the details, we are not far from the total logical reads displayed by SET STATISTICS IO previously (4 x 2,600 = 10,400).

That said, we identified the cause of the high query cost, but we did not answer the main question: can we reduce it? Well, to answer it, let's come back to the query semantics. At this point we were stuck with the current predicate. After some further discussion with the customer about the data semantics of the dbo.evenements table, we identified that the DM_Param column does not meet the first normal form because it does not contain atomic values (the sensor identifier plus some additional parameters), which makes the predicate non-sargable.

Moreover, to introduce another difficulty, the SUBSTRING() function arguments were purely dynamic in this case and had to be adjusted to extract the correct identifier value for each record. Finally, we decided to update the initial schema to meet the normal form rules. Thus, we introduced an additional table to store sensor identifiers, and we used a non-semantic primary key to join it to the dbo.evenements table. Of course, the NUM_DT column values had to be updated accordingly for the new schema.
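The post does not show the new DDL, but a minimal T-SQL sketch of the idea could look like the following (the names are illustrative, except dbo.identifier, which appears in the IO statistics further down):

CREATE TABLE dbo.identifier
(
	identifier_id INT IDENTITY(1,1) PRIMARY KEY,
	sensor_code   VARCHAR(10) NOT NULL UNIQUE
);

-- dbo.evenements references the sensor through the surrogate key
ALTER TABLE dbo.evenements ADD identifier_id INT;

-- the predicate then becomes sargable: no more SUBSTRING() over DM_Param
DECLARE @id VARCHAR(10) = 'LPKL';

SELECT
	e.[Num_DUT],
	e.[Etat],
	e.[Description],
	e.Etape_Process
FROM
	[dbo].[evenements] AS e
	JOIN dbo.identifier AS i ON i.identifier_id = e.identifier_id
WHERE
	e.actif IS NULL
	AND e.[date] >= '20160101'
	AND i.sensor_code = @id;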

Better for performance? Let’s have a look at the new execution plan

[Figure: new execution plan]

The execution cost dropped below the cost threshold for parallelism value, so we were able to use a serial plan.

What about IO statistics?

Table ‘evenements’. Scan count 1, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘identifier’. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The above output is meaningful by itself. In the context of my customer, we successfully reduced the execution time from 200-300 ms to 2-3 ms on average. Of course, this big improvement came at the price of some changes on the application side to the insert / update / delete operations. But once again, after prototyping the new insert / update / delete operations, we concluded that they were not really impacted in terms of performance.

Happy performance troubleshooting!

 

 

 

This article, When an index seek operator is not always your friend, appeared first on Blog dbi services.

SQLLoader DIRECT option and Unique Index

Hemant K Chitale - Mon, 2016-09-26 09:59
The DIRECT parameter for the SQL*Loader command line enables Direct Path Load, which uses the Direct Path API instead of regular INSERT statements to load data into the target table.

However, one needs to know how it handles a Unique Index on the target table.
It actually leaves the Index UNUSABLE if duplicate values are loaded. The Index rebuild fails, but the duplicate values remain in the table.

Here is a quick demo (this is in 12.1.0.2 MultiTenant). I first set up the target table with a Unique Index.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:36:51 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:26:16 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> create table test_sqlldr_direct
2 (id_column number,
3 data_column varchar2(15))
4 /

Table created.

SQL> create unique index test_sqlldr_direct_u1 on test_sqlldr_direct(id_column);

Index created.

SQL> insert into test_sqlldr_direct values (1, 'First Row');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$

Next, I setup the datafile with a duplicate record and the controlfile.

[oracle@ora12102 Desktop]$ ls -l
total 8
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
[oracle@ora12102 Desktop]$ cat load_data.dat
2,'Second Row'
3,'Third Row'
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.ctl
LOAD DATA
INFILE load_data.dat
APPEND INTO TABLE TEST_SQLLDR_DIRECT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
id_column,
data_column)
[oracle@ora12102 Desktop]$

I am now ready to run a Direct Path Load.

[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@pdb1 control=load_control.ctl direct=TRUE

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Direct

Load completed - logical record count 3.

Table TEST_SQLLDR_DIRECT:
3 Rows successfully loaded.

Check the log file:
load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$

What is that? 3 rows loaded successfully? So, the duplicate row also got loaded? Let's check the log file.

[oracle@ora12102 Desktop]$ ls -ltr
total 12
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall 1833 Sep 26 22:47 load_control.log
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Control File: load_control.ctl
Data File: load_data.dat
Bad File: load_data.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN FIRST * , O(") CHARACTER
DATA_COLUMN NEXT * , O(") CHARACTER

The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Table TEST_SQLLDR_DIRECT:
3 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 3
Total logical records rejected: 0
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 1
Total stream buffers loaded by SQL*Loader load thread: 0

Run began on Mon Sep 26 22:47:09 2016
Run ended on Mon Sep 26 22:47:11 2016

Elapsed time was: 00:00:01.88
CPU time was: 00:00:00.01
[oracle@ora12102 Desktop]$

Did you notice the section in the log file that says :
The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Apparently, the Index is left UNUSABLE.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:50:51 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:47:09 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
UNUSABLE

SQL> select * from test_sqlldr_direct order by 1;

ID_COLUMN DATA_COLUMN
---------- ---------------
1 First Row
2 'Second Row'
3 'Third Row'
3 'Oops !'

SQL> alter index test_sqlldr_direct_u1 rebuild;
alter index test_sqlldr_direct_u1 rebuild
*
ERROR at line 1:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found


SQL>

We can see the duplicate row for ID_COLUMN=3 and find that the Index cannot be rebuilt. Oracle has allowed the duplicate rows to load and left the Index UNUSABLE.

So, if you are planning to use DIRECT=TRUE and have a Unique Index, make sure you check the status of the Index and/or check the Log file before you proceed with processing the data.
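For example, a post-load check might look like the following sketch (the duplicate-removal DELETE is just one possible cleanup strategy; validate it against your own data rules first):

select index_name, status
from   user_indexes
where  table_name = 'TEST_SQLLDR_DIRECT'
and    status != 'VALID';

-- one way to clean up: keep a single row per key, then rebuild
delete from test_sqlldr_direct t
where  t.rowid > (select min(t2.rowid)
                  from   test_sqlldr_direct t2
                  where  t2.id_column = t.id_column);

alter index test_sqlldr_direct_u1 rebuild;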


Conversely, here is how the data is handled without DIRECT=TRUE:

SQL> truncate table test_sqlldr_direct;

Table truncated.

SQL> insert into test_sqlldr_direct values (1,'First Row');

1 row created.

SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
VALID

SQL>

[oracle@ora12102 Desktop]$ rm load_control.log
[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@PDB1 control=load_control.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 3

Table TEST_SQLLDR_DIRECT:
2 Rows successfully loaded.

Check the log file:
load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ ls -ltr
total 16
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall 11 Sep 26 22:59 load_data.bad
-rw-r--r-- 1 oracle oinstall 1668 Sep 26 22:59 load_control.log
[oracle@ora12102 Desktop]$ cat load_data.bad
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Control File: load_control.ctl
Data File: load_data.dat
Bad File: load_data.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN FIRST * , O(") CHARACTER
DATA_COLUMN NEXT * , O(") CHARACTER

Record 3: Rejected - Error on table TEST_SQLLDR_DIRECT.
ORA-00001: unique constraint (HEMANT.TEST_SQLLDR_DIRECT_U1) violated


Table TEST_SQLLDR_DIRECT:
2 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 33024 bytes(64 rows)
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 3
Total logical records rejected: 1
Total logical records discarded: 0

Run began on Mon Sep 26 22:59:58 2016
Run ended on Mon Sep 26 22:59:58 2016

Elapsed time was: 00:00:00.07
CPU time was: 00:00:00.00
[oracle@ora12102 Desktop]$

SQL> select * from test_sqlldr_direct
2 order by id_column
3 /

ID_COLUMN DATA_COLUMN
---------- ---------------
1 First Row
2 'Second Row'
3 'Third Row'

SQL>
SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
VALID

SQL>

The duplicate row was rejected and went to the BAD file.

So, the next time you choose to use SQLLDR DIRECT=TRUE for its performance benefits, make sure you know how to validate the status of Unique Indexes and check the log file! Else, you might allow duplicates if the incoming data has not been cleansed before loading into the database.

If you are running 11g, the behaviour is the same.

Categories: DBA Blogs

Version Control for PL/SQL

Gerger Consulting - Mon, 2016-09-26 07:13
Today, we are happy to announce the new version of Gitora, Gitora 2.0, the version control tool for PL/SQL developers.



Gitora helps PL/SQL developers manage their source code in Git easily. It helps them lock database objects to prevent edits by other users regardless of the editor they use; create and switch branches; merge code from other schemas or databases and see the source code in the database change instantly; generate diff scripts automatically; and automate processes for continuous integration.

Gitora is the easiest and fastest way to put your source code under version control and improve your database development practices. It requires no client machine installation (all installation is server side) and is free to use for one database.

Learn more and download at http://www.gitora.com
Categories: Development

Oracle Marketing Cloud Extends Leadership with Next Generation of Cross-Channel Orchestration Canvas

Oracle Press Releases - Mon, 2016-09-26 07:00
Press Release
Oracle Marketing Cloud Extends Leadership with Next Generation of Cross-Channel Orchestration Canvas Marketers are able to more quickly and easily deliver personalized experiences at scale

Redwood Shores, Calif.—Sep 26, 2016

Oracle today announced industry-first innovations to the Oracle Marketing Cloud that will transform the way consumer marketers interact with customers across channels. Oracle was first to market in 2009 with the ‘Program Canvas’, and these new enhancements make it easier for marketers to develop personalized customer experiences across web, mobile web, display, mobile apps and email, and to deliver these experiences based on unified consumer identities from different channels. For the first time, brands will be able to orchestrate a consistent and personalized experience to consumers at scale.

Consumer marketers are under intense pressure to deliver consistent and personalized experiences, but are challenged with working in teams that operate in silos using only channel specific customer data. Without the context of a unified consumer identity, marketers are just scaling batch and blast practices across multiple channels. To address this challenge, Oracle has introduced the next evolution in campaign management with its ‘Orchestration Canvas’, which helps brands align and manage marketing activities across channels and gain a holistic view of the customer. Oracle is making it quicker and easier to deliver the best experiences by offering a single platform for marketers to test, optimize and execute marketing campaigns for any digital touch point across the customer journey.

“Effectively orchestrating cross-channel marketing activities can be extremely complex for brands, but it is an area that marketing must tackle as discrepancies in experiences, and sometimes prices, erode trust and distance customers from a brand,” said Steve Krause, Group Vice President Product Management, Oracle Marketing Cloud. “To help our customers address these challenges, we have broken down the silos that typically separate marketing teams with our new cross-channel marketing solution. This will help improve collaboration and empower marketing to orchestrate relevant real-time experiences.”

The ‘Orchestration Canvas’ is a simple, yet sophisticated user interface of the Oracle Marketing Cloud that enables consumer marketers to efficiently manage experiences at every touch point throughout the entire consumer lifecycle. Additional enhancements include a Mobile App Visual Editor where marketers can test and optimize mobile experiences across their mobile apps and as a part of the cross-channel experience. With the new additions to the Oracle Marketing Cloud, marketers can now benefit from:

  • Adaptive Customer Path Creation and Optimization: The customer purchase path is unpredictable, inconsistent and spread across an increasingly diverse set of digital and offline channels. Orchestration Canvas allows marketers to create event- and behavior-driven adaptive paths that let customers dictate the next step in their interaction with a brand. Customers optimize their own path to purchase, creating less friction in the buying process and better buying experiences.
  • True Cross Channel Orchestration: Messages across channels can be generic, repetitive and irrelevant, which can weaken a customer relationship. Orchestration Canvas helps marketers connect all of a customer's identities across channels to a single, unified profile with a rich understanding of each individual. Marketers can use these enhanced profiles to provide more personalized and relevant customer experiences across channels.
  • Powerful, Marketer-Friendly Mobile Testing: Marketers can use a code-free, drag and drop interface to test and optimize mobile application experiences using everything from simple A/B tests, to complex Multivariate Testing and Segment Personalization. In addition, marketers use rich customer preference and behavior data from mobile campaigns to inform other marketing campaigns being managed on the Oracle Marketing Cloud.
  • Integrated Stack and Ecosystem: Marketers can unify customer data and applications through Oracle Marketing Cloud's integrated stack, helping teams across the customer experience chain – from marketing to sales, commerce and customer service - bring a rich, unified and personalized experience to customers. In addition, marketers can use prebuilt integrations with the Oracle Customer Experience (CX) Cloud to leverage preference and behavioral data from across all interactions to further enhance customer profiles.

An end-to-end solution that includes Oracle Commerce, Oracle Configure, Price, and Quote Cloud (Oracle CPQ Cloud), Oracle Marketing Cloud, Oracle Sales Cloud, Oracle Service Cloud, and Oracle Social Cloud, Oracle CX Cloud is one of the industry’s most complete CX solutions. Part of Oracle Applications Cloud, Oracle CX Cloud empowers organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.

Contact Info
Erik Kingham
Oracle
+1.650.506.8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Erik Kingham

  • +1.650.506.8298

Order by - varchar column storing numeric values

Tom Kyte - Mon, 2016-09-26 06:06
Hi Tom, I have a table 'LOCS' with 2 columns of varchar datatype (col1 is varchar, yet at the moment we have only numeric values stored): col1 col2 / 2272 ABC / 22722 ABCD / 1000 dgdfg / 10001 dfm. Now, when I query the table: select * from locs...
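The excerpt is truncated, but the classic remedy for ordering numeric data stored in a varchar column is to sort on an explicit conversion; a sketch, assuming every col1 value really is numeric (otherwise TO_NUMBER raises ORA-01722):

select *
from   locs
order  by to_number(col1);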
Categories: DBA Blogs

Record count on basis of insert/delete triggers

Tom Kyte - Mon, 2016-09-26 06:06
Hi, We have a pagination requirement across the UI foothold where we need to show the total# of records for significant db facts off the tables in view of numbers. And the query are not utilizing query filter criteria as triggerd by a menu option ...
Categories: DBA Blogs

dbms_job taking a long time

Tom Kyte - Mon, 2016-09-26 06:06
Hi, There is a SQL program whose job is to pick up data from multiple tables based on an application ID and populate an MIS table. There are 25-30 queries written to fetch data from multiple tables for an application ID. Data is pulled into col...
Categories: DBA Blogs

How to set the INITRANS value when Isolation level is Serializable

Tom Kyte - Mon, 2016-09-26 06:06
Hi Tom, We current need to set the serializable isolation level, and will enable the ROWDEPENDENCIES when create the table like as below. My question is what is an optimal value for INITRANS? Or what should we consider when set this parameter? Tha...
Categories: DBA Blogs

PK Histogram

Jonathan Lewis - Mon, 2016-09-26 04:25

One of the little myths of Oracle appeared on the Oracle-L list server a few days ago – the one that says: “you don’t need a histogram on a single column unique/primary key”.

Not only can a histogram be helpful on a column that’s declared to hold unique values, the optimizer may even spot the need automatically. It’s a little unusual (and probably the result of poor programming practice) but it does happen. Here’s an example demonstrating the principle:


rem
rem     Script:         pk_histogram.sql
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem             11.1.0.7
rem             10.2.0.5
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum  id
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4
;

insert into t1 select id + 1e6 from t1;
insert into t1 select id + 1e7 from t1;

alter table t1 add constraint t1_pk primary key(id);

select
        /*+ dynamic_sampling(0) */
        *
from    t1
where
        id between 12000 and 13000
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1'
        );
end;
/


select
        column_name, sample_size,
        num_distinct, num_nulls, density,
        histogram, num_buckets
from   
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
;

I’ve created a small data set with some large gaps in the ranges of values used, then queried the data with a range-based predicate that will return no rows. After that I’ve let Oracle do its own thing with gathering stats (I think all the preferences are at their defaults). This is the result I got:

COLUMN_NAME          SAMPLE_SIZE NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM       NUM_BUCKETS
-------------------- ----------- ------------ ---------- ---------- --------------- -----------
ID                          5555        40000          0 .000024752 HEIGHT BALANCED         254

Oracle will gather a histogram automatically if it can see that the distribution of the data you are querying is skewed; but people tend to think of “skewed” as meaning there is wide variation in the number of rows for a given value, and forget that it can also mean a variation in the number of rows within a given size of range. All I’ve done with my example is cue Oracle to the fact that I have big gaps in the range of known values – so it has produced a histogram that lets it know where the gaps are.
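If you want to see the effect for yourself, comparing the optimizer’s row estimate for the empty range before and after gathering stats should show it; a quick check along these lines, re-using the query from the script:

explain plan for
select /*+ dynamic_sampling(0) */ * from t1 where id between 12000 and 13000;

select * from table(dbms_xplan.display);

-- with the histogram in place, the Rows estimate for this empty range
-- should be far lower than a simple uniform-distribution estimate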

Note: The results above are from 11.2.0.4; on 12.1.0.2 I got a HYBRID histogram, and on 10.2.0.5 the sample size was 40,000.


Making Hadoop easier

Pat Shuff - Mon, 2016-09-26 02:07
Last week we looked at provisioning a Hadoop server and realized that the setup was a little complex and somewhat difficult. This is what people typically do the first time they want to provision a service. They download the binaries (or source if you are really crazy) and install everything from scratch. Our recommendation is to do everything this way the first time. It does help you get a better understanding of how the setup works and of the dependencies. For example, Hadoop 2.7.3 requires Java 1.8 or greater. If we go with Hadoop 2.7.2 we can get by with Java 1.7.

Rather than going through all of the relationships, requirements, and libraries needed to get something working, we are going to do what we would typically do to spin up a server if we suddenly need one up and running. We go to a service that provides pre-compiled and pre-configured public domain code sandboxes and get everything running that way. The service of choice for the Oracle Compute Cloud is Bitnami. We can search for a Hadoop configuration and provision it into our IaaS foundation. Note that we could do the same using Amazon EMR and get the same results. The key differences between the two are configurations, number of servers, and cost. We are going to go through the Bitnami deployment on the Oracle Cloud in this blog.

Step 1 Search for Hadoop on http://oracle.bitnami.com and launch the instance into your region of choice.

Step 2 Configure and launch the instance. We give the instance a name, increase the default disk size from 10 GB to 60 GB to have room for data, go with the Hadoop 2.7.2-1 version, select Oracle Linux 6.7 as the OS (Ubuntu is an alternative), and go with a small OC3 footprint for the compute size. Don't change the security rules; a new one will be generated for you, as will the ssh keys, when you provision through this service.

Step 3 Log into your instance. To do this you will need ssh and use the keys that bitnami generates for you. The instance creation takes 10-15 minutes and should show you a screen with the ip address and have links for you to download the keys.

Step 4 Once you have access to the master system you can execute the commands that we ran last week. The only key difference with this implementation is that you will need to install Java 1.8 with a yum install, because by default the development kit is not installed and we need the jar tool as part of the configuration. The steps needed to repeat our tests from the previous blog entry:

 --- setup hdfs file system 
   hdfs namenode -format
   hdfs getconf -namenodes
   hdfs dfs -mkdir input
   cp /opt/bitnami/hadoop/etc/hadoop/*.xml input
   hdfs dfs -put input/*.xml input
 --- setup simple test with wordcount
   hdfs dfs -mkdir wordcount
   hdfs dfs -mkdir wordcount/input
   mkdir ~/wordcount
   mkdir ~/wordcount/input
   vi file01
   mv file01 ~/wordcount/input
   vi ~/wordcount/input/file02
   hdfs dfs -put ~/wordcount/input/* wordcount/input
   vi WordCount.java
 --- install java-1.8 to get all of the libraries
   sudo yum install java-1.8\*
 --- create wc.jar file
   export HADOOP_CLASSPATH=/opt/bitnami/java/lib/tools.jar
   hadoop com.sun.tools.javac.Main WordCount.java
   jar cf wc.jar WordCount*.class
   hadoop jar wc.jar WordCount wordcount/input wordcount/output
   hadoop fs -cat wordcount/output/part-r-00000
 --- download data and test pig
   mkdir data
   cd data
   wget http://stat-computing.org/dataexpo/2009/1987.csv.bz2
   wget http://stat-computing.org/dataexpo/2009/1988.csv.bz2
   bzip2 -d 1987.csv.bz2
   bzip2 -d 1988.csv.bz2
   hdfs dfs -mkdir airline
   hdfs dfs -copyFromLocal 19*.csv airline
   vi totalmiles.pig
   pig totalmiles.pig
   hdfs dfs -cat data/totalmiles/part-r-00000

Note that we can do the exact same thing using Amazon AWS. They have a MapReduce product called EMR. If you go to the main console and click on EMR at the bottom of the screen, you can create a Hadoop cluster. Once you get everything created and can ssh into the master, you can repeat the steps above.

I had a little trouble with the WordCount.java program in that the library version was a little different. The JVM 1.7 libraries had a problem linking, and adding the JVM 1.8 binaries did not work properly with the Hadoop binaries. You also need to change HADOOP_CLASSPATH to point to the proper tools.jar file, since it is in a different location in the Bitnami install. I think with a little tweaking it would all work. The pig sample code works with no problem, so we were able to test that without changing anything.

In summary, provisioning a Hadoop server or cluster in the cloud is very easy if someone else has done the heavy lifting and pre-configured a server or group of servers for you. I was able to provision two clusters before lunch, run through the exercises, and still have time to go through it again to verify. Using a service like private Marketplaces, Bitnami, or the AWS Marketplace makes it much simpler to deploy sandbox images.

Oracle 12cR2: IS_ROLLING_INVALID in V$SQL

Yann Neuhaus - Sun, 2016-09-25 11:51

In a previous post I published a test case to show when a cursor is no longer shared after a rolling invalidation. Basically, dbms_stats marks the cursor as ‘rolling invalid’ and the next execution marks it as ‘rolling invalid executed’. Looking at 12cR2, there is a little enhancement in V$SQL: an additional column displays those states.

Note that 12cR2 full documentation is not yet available, but you can test this on the Exadata Express Cloud Service.

I set the invalidation period to 5 seconds instead of 5 hours to show the behavior without waiting

17:43:52 SQL> alter system set "_optimizer_invalidation_period"=5;
System altered.

I’ll run a statement with dbms_sql in order to separate parse and execute phases

17:43:53 SQL> variable c number
17:43:53 SQL> exec :c := dbms_sql.open_cursor;
PL/SQL procedure successfully completed.
17:43:53 SQL> exec dbms_sql.parse(:c, 'select (cast(sys_extract_utc(current_timestamp) as date)-date''1970-01-01'')*24*3600 from DEMO' , dbms_sql.native );
PL/SQL procedure successfully completed.

Here is the cursor from V$SQL including the new IS_ROLLING_INVALID column:

17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 0 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 N

Statement is parsed (one parse call + load) but IS_ROLLING_INVALID is N

Now I execute it:

17:43:53 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 1 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 N

Statement has one execution.

I’m now gathering statistics with default rolling invalidation:

17:43:53 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 1 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 Y

The cursor is now marked as rolling invalid (IS_ROLLING_INVALID=”Y”) but wait, this is not a “Y”/”N” boolean, there’s another possible value.

I execute the statement again (no parse call, only execution):

17:43:53 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X

Cursor is now marked as rolling invalid executed (“X”) and this is where the rolling window starts (which I’ve set to 5 seconds instead of 5 hours)

I wait 5 seconds and the cursor has not changed:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
 

I execute it again (no parse call, only re-execute the cursor):

17:43:58 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.

For this execution, a new child has been created:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
0 1 0 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 17:43:57 N

So rolling invalidation does not require a parse call. An execution can start the rolling window and set the invalidation timestamp, and the first execution after this timestamp creates a new child cursor.

I’ll now test what happens with parse calls only.

I set a longer rolling window (2 minutes) here:

17:43:58 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
 
17:43:58 SQL> alter system set "_optimizer_invalidation_period"=120;
System altered.

The last child has been marked as rolling invalid but not yet executed in this state:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
0 1 0 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 17:43:57 Y

From a new session I open another cursor:

17:43:58 SQL> connect &_user./demo@&_connect_identifier
Connected.
17:43:58 SQL> exec :c := dbms_sql.open_cursor;
PL/SQL procedure successfully completed.

And run several parse calls without execute, one every 10 seconds:

17:43:58 SQL> exec for i in 1..12 loop dbms_sql.parse(:c, 'select (cast(sys_extract_utc(current_timestamp) as date)-date''1970-01-01'')*24*3600 from DEMO' , dbms_sql.native ); dbms_lock.sleep(10); end loop;
PL/SQL procedure successfully completed.

So two minutes later I see that I have a new child created during the rolling window:

17:45:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTI IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- --------- ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 X
0 1 3 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 Y
0 1 9 0 2016-09-25/17:43:53 2016-09-25/17:44:27 25-SEP-16 N

Here, at the third parse call (17:44:27) during the invalidation window, a new child cursor has been created. The old one is still marked as rolling invalid (“Y”), but not ‘rolling invalid executed’ (“X”) because it has not been executed.

So it seems that both parse and execute calls trigger the rolling invalidation, and IS_ROLLING_INVALID displays which one occurred.

An execute will now execute the new cursor:

17:45:58 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
 
PL/SQL procedure successfully completed.
 
17:45:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTI IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- --------- ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 X
0 1 3 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 Y
0 1 9 1 2016-09-25/17:43:53 2016-09-25/17:44:27 25-SEP-16 N

Of course, when new cursors have been created we can see the reason in V$SQL_SHARED_CURSOR:

17:45:58 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472658232<
/invalidation_window><ksugctm>1472658237</ksugctm></ChildNode>
 
1 <ChildNode><ChildNumber>1</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>1</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472658266<
/invalidation_window><ksugctm>1472658268</ksugctm></ChildNode>
 
2

The last child cursor was created at 5:44:28 (ksugctm=1472658268) because the invalidation timestamp (invalidation_window=1472658266) had been passed.

So what?

We love Oracle because it’s not a black box. And it’s good to see that they continue in this way by exposing in V$ views information that can be helpful for troubleshooting.

Rolling invalidation has been introduced for dbms_stats because we have to gather statistics and we don’t want hard parse storms after that.
But remember that invalidation can also occur with DDL such as create, alter, drop, comment, grant, revoke.
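As a reminder, the rolling behaviour of dbms_stats is controlled by the no_invalidate parameter, which defaults to DBMS_STATS.AUTO_INVALIDATE; you can override it when immediate invalidation is really what you want:

-- default: rolling invalidation over the invalidation window
exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>dbms_stats.auto_invalidate)

-- force immediate invalidation of dependent cursors instead
exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false)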

You should avoid running DDL while the application is running. However, we may have to do some of those operations online. It would be nice to have the same rolling invalidation mechanism for them, and it seems that this will be possible:


SQL> show parameter invalid
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_invalidation string IMMEDIATE
 
SQL> alter session set cursor_invalidation=XXX;
ERROR:
ORA-00096: invalid value XXX for parameter cursor_invalidation, must be from among IMMEDIATE, DEFERRED

That’s interesting. I’ll explain which DDL can use that in a future blog post.

 

This article, Oracle 12cR2: IS_ROLLING_INVALID in V$SQL, appeared first on Blog dbi services.

Replacement of environment variables or properties in Bash

Darwin IT - Sun, 2016-09-25 10:47
Earlier I wrote about the automatic installation of Fusion Middleware components using response files. One thing lacking in my scripts was that, although I had a FMW_HOME variable set in my environment shell script, the response files had the location hard-coded in them. At the time I hadn't had the chance to figure out how to do property/variable replacement in shell. I do know how to do it with ANT, but I figured that installing ANT for only this was a bit too much, since with the installation of FMW you already get ANT as a module.

For an upgrade of my scripts to FMW 12.2.1.1, I did a Google search on the subject and found http://stackoverflow.com/questions/415677/how-to-replace-placeholders-in-a-text-file. The top 2 suggestions were:

  1. sed -e "s/\${i}/1/" -e "s/\${word}/dog/" template.txt
  2. i=32 word=foo envsubst < template.txt
Although the first option was favoured by many and considered the answer to the question, I personally favour the second. It turns out that sed itself does not expand references to environment variables in the replacement, which makes the replacements hard-coded again. envsubst does accept environment variable references. Actually, if the variable referenced in the template file is already present in the environment, no replacement assignment has to be provided at all.

So let's say my response file template looks like:

[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=${FMW_HOME}

#Set this variable value to the Installation Type selected. e.g. Fusion Middleware Infrastructure, Fusion Middleware Infrastructure With Examples.
INSTALL_TYPE=Fusion Middleware Infrastructure

#Provide the My Oracle Support Username. If you wish to ignore Oracle Configuration Manager configuration provide empty string for user name.
MYORACLESUPPORT_USERNAME=

#Provide the My Oracle Support Password
MYORACLESUPPORT_PASSWORD=<SECURE VALUE>

#Set this to true if you wish to decline the security updates. Setting this to true and providing empty string for My Oracle Support username will ignore the Oracle Configuration Manager configuration
DECLINE_SECURITY_UPDATES=true

#Set this to true if My Oracle Support Password is specified
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false

#Provide the Proxy Host
PROXY_HOST=

#Provide the Proxy Port
PROXY_PORT=

#Provide the Proxy Username
PROXY_USER=

#Provide the Proxy Password
PROXY_PWD=<SECURE VALUE>

#Type String (URL format) Indicates the OCM Repeater URL which should be of the format [scheme[Http/Https]]://[repeater host]:[repeater port]
COLLECTOR_SUPPORTHUB_URL=



Saved as 'fmw_12.2.1.1.0_infrastructure.rsp.tpl'; note the reference ORACLE_HOME=${FMW_HOME}. And I have set FMW_HOME with an fmw12c_env.sh script, as described in former posts. Then I only have to do:
envsubst < fmw_12.2.1.1.0_infrastructure.rsp.tpl > fmw_12.2.1.1.0_infrastructure.rsp
To have the file copied to fmw_12.2.1.1.0_infrastructure.rsp with a replaced FMW_HOME variable:

...
#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12211

#Set this variable value to the Installation Type selected. e.g. Fusion Middleware Infrastructure, Fusion Middleware Infrastructure With Examples.
INSTALL_TYPE=Fusion Middleware Infrastructure
...

Couldn't be simpler, I'd say. A nice side effect is that this lets me use more substitution directives in the same template. So, I learned something again, from a question dated 7.5 years ago...

Thoughts/Info on Essbase/EssCS after Oracle Open World 2016

Tim Tow - Sat, 2016-09-24 21:59
I was at Oracle Open World last week and have some notes to share on the upcoming EssCS (Essbase Cloud Service) product and on Essbase in general.

EssCS will be an Infrastructure as a Service (IaaS) product and will be priced on the number of cores and memory. I believe there will be metered and unmetered pricing as well (metered meaning pay per cycle or some other usage measure). According to presentations at Open World, which did have safe harbor statements meaning "do not make decisions based on this info as it could change", there will be options for 1, 2, 4, or 16 physical cores ("OCPUs") and 7.5 to 15 GB of RAM. In addition, it will be an updated version of Essbase that is not the current on-prem version. It will feature, among other things:


  • The new Java Agent running in Weblogic that moves security from the essbase.sec file to a relational database.
  • Simplified security
  • ASO / BSO / Hybrid
  • Sandboxing and scenario management - what if's without copying data with workflow for approval to merge into given scenarios
  • Cloud-based UI
  • Ability to specify/create an Essbase cube from Excel 
  • A cube template library ("app store" type of thing)
  • A web-based outline editor (though most editing should be done in Excel)
  • EssCLI (Essbase Command Line Interface) - a sort of EPMAutomate for Essbase
  • The Essbase Java API and a new REST API (which is currently being engineered)

I do not remember hearing any dollar amount for EssCS at Open World.  I expect availability in the next 3 to 6 months though it wouldn't surprise me if it were to slip further.

As far as on-prem Essbase updates go, I would expect that the updates we see in EssCS will go on-prem as part of the EPM 2017 on-prem release, which Oracle currently believes will be delivered late in 2017 (also subject to safe harbor, etc.).

As far as how Oracle is selling Essbase, Essbase is now firmly in the BI organization and is being sold by the BI reps; EPM reps do not generally sell Essbase.  To the Essbase team, EPM is important as they are an internal customer, but EPM is not their only customer.  As such, I saw at least one presentation that promoted the idea of customers writing custom planning applications using Essbase.  While some people I talked with thought that approach muddled the EPM message for customers, I see it as a situation where if they don't compete in the custom market, then someone else will.  As someone who frequently is involved in complex applications where the EPM Planning product may not be a fit, I am thrilled to see that message from Oracle.




Categories: BI & Warehousing

Video Tutorial: XPLAN_ASH Active Session History - Part 12

Randolf Geist - Sat, 2016-09-24 17:27
The final part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.


This is the final part of this tutorial, but there are more tutorials coming - about configuring the script, script internals and also the Rowsource Statistics mode of the script.
