Feed aggregator

Give Employees More Reasons to Stick Around

Linda Fishman Hoyle - Wed, 2016-06-01 18:40

A Guest Post by Andy Campbell, HCM strategic director

Winning over and retaining the best talent has never been easy, but employers today are finding it harder than ever to find people with the right skills to fill key vacancies. If businesses are to keep growing and evolving they need new ways to attract and engage the talented employees that will take them on that journey.

Interestingly, recent research from Mercer suggests that some businesses may be overestimating their ability to fill critical roles. Nine out of ten organisations anticipate the competition for talent will increase in 2016, and 70% are confident that they can fill critical roles with internal candidates. The kicker is that more than one-quarter (28%) of employees who responded to the survey plan to leave their current job in the next 12 months.

Unless businesses take action, the people they have pegged to ensure their future success may not be around when the time comes.

Many Companies Aren’t in a Position to Compete

Part of the issue is that many companies aren’t currently in a position to compete for talent successfully. Mercer’s research also reveals that 85% of businesses admit their talent management programs and policies need an overhaul, but instigating and managing these changes requires support from leadership. In addition, delivering modern HR requires modern HR systems that are more in keeping with the demands of a technology-savvy workforce.

Mercer is clear in its findings: Companies looking to retain the best workers should update and expand their employee development programs and other processes. And yet only 4% of HR professionals said the HR function is viewed as a strategic partner within their organisation.

Oracle Research: What Employees Really Want

It’s a conclusion that we at Oracle agree with completely. Our own Simply Talent: A Western European Perspective study uncovered similar trends in Western Europe. We found that most employees want more proactive, regular interactions with their managers, underpinned by better employee engagement initiatives. Once again, however, very few workers (3%) see the HR department as having an important role in driving employee engagement.

Clearly there remains a gap between the engagement programs being run by the business and HR, and what employees now expect from their employers.

Turn Talk Into Action

So how can companies create a culture that attracts the best people and keeps them on board? The first step is to invest in understanding what motivates and engages their employees. With HR at the helm, managers should be mapping out the employee journey from on-boarding to retirement to see where points of friction may arise so they can do whatever is possible to mitigate them. This approach is similar to the way retailers and customer service teams map out the customer journey to make it more satisfying and convenient.

Regular dialogue with employees is also essential. The days of annual reviews are long gone; today’s employees need to feel both that they are appreciated and that their needs are understood, and this can only happen if line managers and senior team members have more frequent, focused discussions with employees about their career path and progress.

Most importantly, these discussions must be followed by action. If employees feel nothing comes from their talks with managers they will be more inclined to seek a company that delivers on its promises. Our research found that less than one-third of employees believe their company is proactive about engaging with them. The time has come for a sea-change in how businesses communicate with their people.

Employees are every company’s most precious resource—in deed, as well as in word. HR must take the lead in fostering an employee-focused work culture where development, training, education, and support are treated as a priority and a true dialogue exists between employees and their managers. Those companies that adapt to this new reality will be in the best possible position to ensure they have the right talent to achieve their future ambitions.

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Rittman Mead Consulting - Wed, 2016-06-01 18:17

One of the big changes in OBIEE 12c for end users is the ability to upload their own data sets and start analysing them directly, without needing to go through the traditional data provisioning and modelling process and its associated lead times. The implementation of this is one of the big architectural changes of OBIEE 12c, introducing the concept of the Extended Subject Areas (XSA) and the Data Set Service (DSS).

In this article we’ll look at how XSA and DSS work behind the scenes, providing important insight for troubleshooting and performance analysis of this functionality.

What is an XSA?

An Extended Subject Area (XSA) is made up of a dataset and an associated XML data model. It can be used standalone, or “mashed up” in conjunction with a “traditional” subject area on a common field.

How is an XSA Created?

At the moment the following methods are available:

  1. “Add XSA” in Visual Analyzer, to upload an Excel (XLSX) document

  2. CREATE DATASET logical SQL statement, which can be run through any interface to the BI Server, including ‘Issue Raw SQL’, nqcmd, JDBC calls, and so on (see the nqcmd sketch after this list)

  3. Add Data Source in Answers. Whilst this option shouldn’t actually be present according to this doc, it will be for any users of 12.2.1 who have uploaded the SampleAppLite BAR file, so I’m including it here for completeness.
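
For illustration, here’s a minimal sketch of submitting logical SQL to the BI Server from the command line with nqcmd. The DSN name, credentials, and file paths are placeholders for this environment, and the query uses the XSA SELECT syntax shown later in this article (the CREATE DATASET grammar itself is undocumented, so I won’t reproduce it here):

# Placeholders: DSN, credentials, and file paths are assumptions for this environment
cat > /tmp/xsa_query.lsql <<'EOF'
SELECT
   0 s_0,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."first_name" s_1
FROM XSA('prodney'.'MOCK_DATA_bigger_55Mb')
FETCH FIRST 10 ROWS ONLY
EOF

/app/oracle/biee/user_projects/domains/bi/bitools/bin/nqcmd.sh \
    -d AnalyticsWeb -u prodney -p Admin123 -s /tmp/xsa_query.lsql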

Under the covers, these all use the same REST API calls directly into datasetsvc. Note that these are entirely undocumented, and only for internal OBIEE component use. They are not intended nor supported for direct use.

How does an XSA work?

External Subject Areas (XSA) are managed by the Data Set Service (DSS). This is a java deployment (datasetsvc) running in the Managed Server (bi_server1), providing a RESTful API for the other OBIEE components that use it.

The consumer of the data, whether it’s Visual Analyzer or the BI Server, sends REST web service calls to DSS to store and query datasets within it.

Where is the XSA Stored?

By default, the data for XSA is stored on disk in SINGLETON_DATA_DIRECTORY/components/DSS/storage/ssi, e.g. /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi

[oracle@demo ssi]$ ls -lrt /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi|tail -n5
-rw-r----- 1 oracle oinstall    8495 2015-12-02 18:01 7e43a80f-dcf6-4b31-b898-68616a68e7c4.dss
-rw-r----- 1 oracle oinstall  593662 2016-05-27 11:00 1beb5e40-a794-4aa9-8c1d-5a1c59888cb4.dss
-rw-r----- 1 oracle oinstall  131262 2016-05-27 11:12 53f59d34-2037-40f0-af21-45ac611f01d3.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:04 a4fc922d-ce0e-479f-97e4-1ddba074f5ac.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:06 c93aa2bd-857c-4651-bba2-a4f239115189.dss

They’re stored in the format in which they were created, which is XLSX (via VA) or CSV (via CREATE DATASET).

[oracle@demo ssi]$ head 53f59d34-2037-40f0-af21-45ac611f01d3.dss
"7 Megapixel Digital Camera","2010 Week 27",44761.88
"MicroPod 60Gb","2010 Week 27",36460.0
"MP3 Speakers System","2010 Week 27",36988.86
"MPEG4 Camcorder","2010 Week 28",32409.78
"CompCell RX3","2010 Week 28",33005.91

There’s a set of DSS-related tables installed in the RCU schema BIPLATFORM, which hold information including the XML data model for the XSA, along with metadata such as the user that uploaded the file, when they uploaded it, and the name of the file on disk.

How Can the Data Set Service be Configured?

The configuration file, with plenty of inline comments, is at ORACLE_HOME/bi/endpointmanager/jeemap/dss/DSS_REST_SERVICE.properties. From here you can update settings for the data set service including upload limits as detailed here.

XSA Performance

Since XSA are based on flat files stored on disk, we need to be very careful in their use. Whilst a database holding billions of rows in a table may, with appropriate indexing and partitioning, be able to provide sub-second responses, a flat file can quickly become a serious performance bottleneck. Bear in mind that a flat file is just a bunch of data plopped on disk – there is no concept of indexes, blocks, or partitions — all the good stuff that makes databases able to do responsive ad-hoc querying on selections of data.

If you’ve got a 100MB Excel file with thousands of cells, and want to report on just a few of them, you might find it laggy – because whether you want to report on them or not, at some point OBIEE is going to have to read all of them regardless. We can see how OBIEE is handling XSA under the covers by examining the query log. This used to be called nqquery.log in OBIEE 11g (and before), and in OBIEE 12c has been renamed obis1-query.log.
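
As a quick sketch (the log path here is an assumption based on the standard 12c domain layout), you can pull row/byte counts and physical response times out of the query log to spot XSA requests that are dragging large volumes back to the BI Server:

# Log location assumed from the standard 12c domain layout
grep -E "retrieved from database|Physical query response time" \
    /app/oracle/biee/user_projects/domains/bi/servers/obis1/logs/obis1-query.log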

In this example here I’m using an Excel worksheet with 140,000 rows and 78 columns. Total filesize of the source XLSX on disk is ~55Mb.

First up, I’ll build a query in Answers with a couple of the columns:

The logical query uses the new XSA syntax:

SELECT
   0 s_0,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."first_name" s_1,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."foo" s_2
FROM XSA('prodney'.'MOCK_DATA_bigger_55Mb')
ORDER BY 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

The query log shows:

Rows 144000, bytes 13824000 retrieved from database query
Rows returned to Client 200

So of the 55MB of data, we’re pulling all the rows (144,000) back to the BI Server for it to then perform the aggregation, resulting in the 200 rows returned to the client (Presentation Services). Note though that the byte count (13Mb) is lower than the total size of the file (55Mb): 13,824,000 bytes across 144,000 rows works out at 96 bytes per row, consistent with only the requested columns being transferred rather than the whole sheet.

As well as aggregation, filtering on XSA data also gets done by the BI Server. Consider this example here, where we add a predicate:

In the query log we can see that all the data has to come back from DSS to the BI Server, in order for it to filter it:

Rows 144000, bytes 23040000 retrieved from database
Physical query response time 24.195 (seconds),
Rows returned to Client 0

Note the time taken by DSS — nearly 25 seconds. Compare this later on to when we see the XSA data served from a database, via the XSA Cache.

In terms of BI Server (not XSA) caching, the query log shows that a cache entry was written for the above request:

Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736113_56359_0.TBL'

If I refresh the query in Answers, the data is fetched anew (per this changed behaviour in OBIEE 12c), and the cache repopulated. If I clear the Presentation Services cache and re-open the analysis, I get the results from the BI Server cache, and it doesn’t have to refetch the data from the Data Set Service.
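
If you need to force fresh data across the board rather than per-refresh, the BI Server cache can also be purged explicitly with the documented ODBC procedures, issued through any BI Server client (‘Issue SQL’, nqcmd, and so on). The arguments to the targeted variant below are the physical database, catalog, schema, and table names:

-- Purge the entire BI Server cache
call saPurgeAllCache();

-- Or purge just the entries derived from one physical table
call SAPurgeCacheByTable( 'DBName', 'CatalogName', 'SchemaName', 'TableName' );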

Since the cache entry contains two columns, an attribute and a measure, I wondered if running a query with just the fact rolled up might hit the cache (since the entry has all the data that it needs).

Unfortunately it didn’t, and to return a single row of data required the BI Server to fetch all the rows again – although looking at the byte count it appears the columns are pruned, since just over 2Mb of data is returned this time:

Rows 144000, bytes 2304000 retrieved from database
Rows returned to Client 1

Interestingly, if I build an analysis with several more of the columns from the file (in this example, ten of a total of 78), the data returned from the DSS to the BI Server (167Mb) is greater than the size of the original file (55Mb):

Rows 144000, bytes 175104000
Rows returned to Client 1000

And this data coming back from the DSS to the BI Server has to go somewhere – and if it’s big enough it’ll overflow to disk, as we can see when I run the above:

$ ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp
[...]
-rwxrwx--- 1 oracle oinstall 2910404 2016-06-01 14:08 nQS_AG_22345_7503_7c9c000a_50906091.TMP
-rwxrwx--- 1 oracle oinstall   43476 2016-06-01 14:08 nQS_AG_22345_7504_7c9c000a_50906091.TMP
-rw------- 1 oracle oinstall 6912000 2016-06-01 14:08 nQS_AG_22345_7508_7c9c000a_50921949.TMP
-rw------- 1 oracle oinstall  631375 2016-06-01 14:08 nQS_EX_22345_7506_7c9c000a_50921652.TMP
-rw------- 1 oracle oinstall 3670016 2016-06-01 14:08 nQS_EX_22345_7507_7c9c000a_50921673.TMP
[...]

You can read more about BI Server’s use of temporary files and the impact that it can have on system performance and particularly I/O bandwidth in this OTN article here.
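
If you suspect this spill-to-disk is happening on a busy system, watching the temp directory while queries run makes it obvious (using the path listed above):

# Poll BI Server temp file usage every five seconds
watch -n 5 'du -sh /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp'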

So – as the expression goes – “buyer beware”. XSA is an excellent feature, but used in its default configuration with files stored on disk it has the potential to wreak havoc if abused.

XSA Caching

If you’re planning to use XSA seriously, you should set up the database-based XSA Cache. This is described in detail in the PDF document attached to My Oracle Support note OBIEE 12c: How To Configure The External Subject Area (XSA) Cache For Data Blending | Mashup And Performance (Doc ID 2087801.1).

In a proper implementation you would follow the document in full, including provisioning a dedicated schema and tablespace for holding the data (to make it easier to manage and segregate from other data), but here I’m just going to use the existing RCU schema (BIPLATFORM), along with the Physical mapping already in the RPD (10 - System DB (ORCL)):

In NQSConfig.INI, under the XSA_CACHE section, I set:

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "10 - System DB (ORCL)"."Catalog"."dbo";
CONNECTION_POOL = "10 - System DB (ORCL)"."UT Connection Pool";

And restart the BI Server:

/app/oracle/biee/user_projects/domains/bi/bitools/bin/stop.sh -i obis1 && /app/oracle/biee/user_projects/domains/bi/bitools/bin/start.sh -i obis1

Per the document, note that in the BI Server log there’s an entry indicating that the cache has been successfully started:

[101001] External Subject Area cache is started successfully using configuration from the repository with the logical name ssi.
[101017] External Subject Area cache has been initialized. Total number of entries: 0 Used space: 0 bytes Maximum space: 107374182400 bytes Remaining space: 107374182400 bytes. Cache table name prefix is XC2875559987.

Now when I re-run the test XSA analysis from above, returning three columns, the BI Server goes off and populates the XSA cache table:

-- Sending query to database named 10 - System DB (ORCL) (id: <<79879>> XSACache Create table Gateway), connection pool named UT Connection Pool, logical request hash b4de812e, physical request hash 5847f2ef:
CREATE TABLE dbo.XC2875559987_ZPRODNE1926129021 ( id3209243024 DOUBLE PRECISION, first_n[..]

Or rather, it doesn’t, because PHYSICAL_SCHEMA seems to want the literal physical schema, rather than the logical-physical one (?!) that the USAGE_TRACKING configuration stanza is happy with when referencing its table.

Properties: description=<<79879>> XSACache Create table Exchange; producerID=0x1561aff8; requestID=0xfffe0034; sessionID=0xfffe0000; userName=prodney;
[nQSError: 17001] Oracle Error code: 1918, message: ORA-01918: user 'DBO' does not exist

I’m trying to piggyback on SA511’s existing configuration, which uses catalog.schema notation:

Instead of the more conventional approach of using the actual physical schema (often used in conjunction with ‘Require fully qualified table names’ in the connection pool):

So now I’ll do it properly, and create a database and schema for the XSA cache – I’m still going to use the BIPLATFORM schema though…

Updated NQSConfig.INI:

[ XSA_CACHE ]

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "XSA Cache"."BIEE_BIPLATFORM";
CONNECTION_POOL = "XSA Cache"."XSA CP";

After refreshing the analysis again, there’s a successful creation of the XSA cache table:

-- Sending query to database named XSA Cache (id: <<65685>> XSACache Create table Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash ccc0a410: [[
CREATE TABLE BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 ( id3209243024 DOUBLE PRECISION, first_name2360035083 VARCHAR2(17 CHAR), [...]

as well as a stats gather:

-- Sending query to database named XSA Cache (id: <<65685>> XSACache Collect statistics Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d73151bb:
BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BIEE_BIPLATFORM', tabname => 'XC2875559987_ZPRODNE1645894381' , estimate_percent => 5 , method_opt => 'FOR ALL COLUMNS SIZE AUTO' ); END;

Although I do note that it uses a fixed estimate_percent instead of the recommended AUTO_SAMPLE_SIZE. The table itself is created with a fixed prefix (as specified in the obis1-diagnostic.log at initialisation), and holds a full copy of the XSA (not just the columns in the query that triggered the cache creation).
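
As an aside, here’s a sketch of the same gather using the recommended automatic sampling. You wouldn’t normally run this by hand (OBIEE manages its cache tables itself), and the generated table name below is specific to this environment:

-- Equivalent stats gather using the recommended automatic sample size
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'BIEE_BIPLATFORM',
    tabname          => 'XC2875559987_ZPRODNE1645894381',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO' );
END;
/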

With the dataset cached, the query is then run and the query log shows an XSA cache hit:

External Subject Area cache hit for 'prodney'.'MOCK_DATA_bigger_55Mb'/Columns :
Cache entry shared_cache_key = 'prodney'.'MOCK_DATA_bigger_55Mb',
table name = BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357,
row count = 144000,
entry size = 201326592 bytes,
creation time = 2016-06-01 20:14:26.829,
creation elapsed time = 49779 ms,
descriptor ID = /app/oracle/biee/user_projects/domains/bi/servers/obis1/xsacache/NQSXSA_BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357_2.CACHE

with the resulting physical query fired at the XSA cache table (replacing what would have gone against the DSS web service):

-- Sending query to database named XSA Cache (id: <<65357>>), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d3ed281d: [[
WITH
SAWITH0 AS (select T1000001.first_name2360035083 as c1,
     T1000001.last_name3826278858 as c2,
     sum(T1000001.foo2363149668) as c3
from
     BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 T1000001
group by T1000001.first_name2360035083, T1000001.last_name3826278858)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select 0 as c1,
     D102.c1 as c2,
     D102.c2 as c3,
     D102.c3 as c4
from
     SAWITH0 D102
order by c2, c3 ) D1 where rownum <= 5000001

It’s important to point out the difference in what’s happening here: the aggregation has been pushed down to the database, meaning that the BI Server doesn’t have to do it. In performance terms, this is usually a Very Good Thing.

Rows 988, bytes 165984 retrieved from database query
Rows returned to Client 988

Whilst it doesn’t seem to be recorded in the query log from what I can see, the data returned from the XSA Cache also gets inserted into the BI Server cache, and if you open an XSA-based analysis that’s not in the Presentation Services cache (a third cache to factor in!) you will get a cache hit on the BI Server cache. As discussed earlier in this article though, if an analysis is built against an XSA for which a BI Server cache entry exists that could, with manipulation (e.g. pruning columns or rolling up), service it, the BI Server doesn’t appear to take advantage of it – but since it’s hitting the XSA cache this time, it’s less of a concern.

If you change the underlying data in the XSA, the BI Server does pick this up and repopulates the XSA Cache.

The XSA cache entry itself is 192Mb in size – generated from a 55Mb upload file. The difference will be down to data types, storage methods, and so on. However, the fact that it is larger in the XSA Cache (database) than held natively (flat file) doesn’t really matter, particularly if the data is being aggregated and/or filtered, since the performance benefit of pushing this work to the database will outweigh the overhead of storage space. Consider this example, where I run an analysis pulling back 44 columns (of the 78 in the spreadsheet) and hit the XSA cache: it runs in just over a second, and transfers a total of 5.3Mb from the database (the data is repeated, so it rolls up):

Rows 1000, bytes 5576000 retrieved from database
Rows returned to Client 1000

If I disable the XSA cache and run the same query, we see this:

Rows 144000, bytes 801792000 Retrieved from database
Physical query response time 22.086 (seconds)
Rows returned to Client 1000

That’s 764Mb being sent back for the BI Server to process, which it does by dumping a whole load to disk in temporary work files:

$  ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp
[...]]
-rwxrwx--- 1 oracle oinstall 10726190 2016-06-01 21:04 nQS_AG_29733_261_ebd70002_75835908.TMP
-rwxrwx--- 1 oracle oinstall   153388 2016-06-01 21:04 nQS_AG_29733_262_ebd70002_75835908.TMP
-rw------- 1 oracle oinstall 24192000 2016-06-01 21:04 nQS_AG_29733_266_ebd70002_75862509.TMP
-rw------- 1 oracle oinstall  4195609 2016-06-01 21:04 nQS_EX_29733_264_ebd70002_75861716.TMP
-rw------- 1 oracle oinstall 21430272 2016-06-01 21:04 nQS_EX_29733_265_ebd70002_75861739.TMP

As a reminder – this isn’t “Bad”, it’s just not optimal (response time of 50 seconds vs 1 second), and if you scale that kind of behaviour by many users with many datasets, things could definitely get hairy for all users of the system. Hence – use the XSA Cache.

As a final point, with the XSA Cache being in the database, the standard range of performance optimisations is open to us – indexing being the obvious one. No indexes are built against the XSA Cache table by default, which is fair enough since OBIEE has no idea what the key columns in the data are, and the point of mashups is less to model and optimise the data than to just get it in front of the user. So you could index the table if you knew the key columns that were going to be filtered against, or you could even put it into memory (assuming you’ve licensed the option).
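
For example, here’s a sketch of indexing one commonly-filtered column. The index name is hypothetical, and the generated table and column names are the ones from this environment (they will change whenever the cache entry is rebuilt), so this is strictly a manual, per-case optimisation:

-- Hypothetical index on a commonly-filtered column of the XSA cache table
CREATE INDEX BIEE_BIPLATFORM.XSA_FIRSTNAME_IX
  ON BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 ( first_name2360035083 );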

The MoS document referenced above also includes further performance recommendations for XSA, including the use of RAM disk for the XSA cache metadata files and the managed server temp folder.

Summary

External Subject Areas are great functionality, but be aware of the performance implications of not being able to push down common operations such as filtering and aggregation. Set up XSA Caching if you are going to be using XSA properly.

If you’re interested in the direction of XSA and the associated Data Set Service, this slide deck from Oracle’s Socs Cappas provides some interesting reading. Uploading Excel files into OBIEE looks like just the beginning of what the Data Set Service is going to enable!

The post OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

ORA-00918 column ambiguously defined

VitalSoftTech - Wed, 2016-06-01 17:49
What is the cause of the "ORA-00918 column ambiguously defined" error? How do I resolve this?
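
In brief, the error is raised when a column reference could belong to more than one table in the FROM clause; qualifying the column with a table alias resolves it. A minimal illustration with hypothetical EMPLOYEES and DEPARTMENTS tables:

-- Raises ORA-00918: DEPT_ID exists in both tables
SELECT dept_id
FROM   employees e
       JOIN departments d ON e.dept_id = d.dept_id;

-- Fixed: qualify the ambiguous column
SELECT e.dept_id
FROM   employees e
       JOIN departments d ON e.dept_id = d.dept_id;
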
Categories: DBA Blogs

New Blog: E-Business Suite and Oracle Cloud

Steven Chan - Wed, 2016-06-01 17:06

Our team has just launched a new blog that will cover all aspects of running E-Business Suite on Oracle Cloud, including what you can do right now, and what you can plan for as our offerings evolve:

This new blog starts with an article from Cliff Godwin, Senior Vice President, Oracle E-Business Suite Development, that describes how it will help E-Business Suite customers find a practical path to the cloud.

The blog also provides an excellent way of sharing your requirements and feedback with our teams focusing on this rapidly evolving area.  I encourage everyone to monitor it for the latest updates on running E-Business Suite on the Oracle Cloud.  You can subscribe by email, too!

Categories: APPS Blogs

BOF: A Sample Application For Testing Oracle Security

Pete Finnigan - Wed, 2016-06-01 17:05

In my Oracle security training classes I use a couple of sample applications for various demonstrations. I teach people how to perform security audits of Oracle databases, secure coding in PL/SQL, designing audit trail solutions and locking down Oracle. We....[Read More]

Posted by Pete On 10/03/16 At 11:07 AM

Categories: Security Blogs

What Are Oracle SQL Aggregate Functions?

Complete IT Professional - Wed, 2016-06-01 06:00
Oracle SQL aggregate functions are very useful, and are some of the most commonly used functions. Learn what Oracle SQL aggregate functions are and what they do in this article. What Are Oracle SQL Aggregate Functions? Aggregate functions are functions that allow you to view a single piece of data from multiple pieces of data. […]
Categories: Development

Oracle Database 12c SQL by Jason Price

Pat Shuff - Wed, 2016-06-01 02:07
Given that we have a database in Amazon RDS and Oracle PaaS, we can work through some books from Oracle Press and see if anything breaks. Let's start with something simple: Oracle Database 12c SQL by Jason Price, published by Oracle Press. This is an introductory book that covers the basic data types and SQL commands, with an introduction to XML at the end. The material should be relatively straightforward and should not present any problems executing the sample code. The sample code can be downloaded from Oracle Press Books by searching for the book title and downloading the Chapter 1 sample code. This gives us a way to load a table with data and execute code against it. We will use SQL Developer to execute the code from d:\workshops\sql books\SQL and see what works and what does not.

To get started, we need to use our Amazon RDS instance and the SQL Developer that we installed yesterday. We connect as the user oracle on port 1521 after opening up the port to anyone. From this connection we can execute SQL code in the main part of the SQL Developer window and load the sample code to execute. We can test the connection with the following command:

select sysdate from dual;

We can follow along with the book: create the user store, load the schema into the database, and work through the examples, along the lines of the sketch below.
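
As a rough sketch of that first step (the real statements ship in the book's Chapter 1 sample code; the password and grants here are illustrative):

-- Illustrative only: create the book's store user and grant basic privileges
CREATE USER store IDENTIFIED BY store_password;
GRANT CONNECT, RESOURCE TO store;
ALTER USER store QUOTA UNLIMITED ON users;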

Everything worked on Amazon RDS. We were able to create users, grant them audit functionality, execute XML code, and generally do everything listed in the book. The audit did not report back as expected, but this could have been user error. According to the Amazon RDS documentation, auditing should work; we might not have had something set properly to report back the right information.

In summary, Amazon RDS is a good platform for learning 12c SQL and the various user-level commands. If you go through a book like Oracle Database 12c SQL, everything should work. This, or the Oracle PaaS equivalent, makes an excellent sandbox that you can use for a day or two and then turn off, minimizing the cost of experimenting.

Paris Province Oracle Meetup

Tim Hall - Tue, 2016-05-31 23:50

The reason for me being in Paris was to speak at the Paris Province Oracle Meetup. Breaking my journey to the Netherlands with a quick trip to Paris was a really easy way to connect with more people.

The Paris meetup is very similar to those found in other cities around the world, including Oracle Midlands in my home town. We all gathered at about 19:00 in the AVNET office in Paris and I did two talks with a short break between them. The first talk was about pluggable databases and the second one was about running Oracle databases in the cloud.

I like these local meetups. They feel a lot less formal and more personal than some (but not all) conferences. It just feels more natural to me. I really enjoyed doing the talks and the crowd seemed to respond well to them, which was nice. I’ll definitely be back again, if they will have me. Maybe next time I will get to do some sightseeing in Paris too.

Birmingham to Paris

Tim Hall - Tue, 2016-05-31 23:15

The day started at a rather civilised time of 06:00, which makes a change. I usually book early flights, forgetting I have to get to the airport a couple of hours early, then regret the decision later.

Storing Date Values As Numbers (The Numbers)

Richard Foote - Tue, 2016-05-31 19:45
In my last couple of posts, I’ve been discussing how storing date data in a character based column is a really really bad idea. In a follow-up question, I was asked if storing dates in NUMBER format was a better option. The answer is that it’s probably an improvement over storing dates as strings but it’s […]
Categories: DBA Blogs

Announcing the Dodeca Spreadsheet Management System, Version 7 and the Dodeca Excel Add-In for Essbase

Tim Tow - Tue, 2016-05-31 14:54
After 18 months of hard work, Applied OLAP is proud to announce the general availability of the Dodeca Spreadsheet Management System, version 7, and the all-new Dodeca Excel Add-In for Essbase.

The Dodeca Spreadsheet Management System provides customers the ability to automate spreadsheet functionality, reducing the risk of spreadsheet errors while increasing productivity.  It offers unprecedented ease-of-use for business users using spreadsheets for planning, budgeting, forecasting, reporting, and analysis tasks.  It also provides a robust, programmable development environment enabling companies to create spreadsheet applications tailored to their specific needs.

The new Dodeca Excel Add-In for Essbase was created as a drop-in replacement for the classic Essbase add-in and is the world’s only Excel add-in focused exclusively on Essbase.  The new add-in supports the most common actions used by Essbase traditionalists, supports the corresponding VBA functions, and includes a built-in Excel ribbon.  Early adopters have also been impressed by the speed of retrieving data, commenting that they found the Dodeca Excel Add-In as fast as, or even faster than, the classic Excel Add-In for Essbase.  It is supported for Excel 2010, 2013, and 2016 and for Essbase 9.3.1 and higher.

Dodeca 7 includes new features and functionality for security, selectors, logging, and enhanced workbook scripting.

The enhanced security features add the ability to more easily manage users, roles, and permissions to access a Dodeca application.  The new security features include:
  • User Management – You can now track, monitor, and control user access to a Dodeca application and more easily monitor your Dodeca user base.  This feature enables customers to control individual user access to a specific application, enable users for admin access, and manage user mapping to roles, while also tracking metrics such as first login, last login, and login count.  It also logs metrics about the user’s system to enable Dodeca administrators to more easily support their users.

    Here is an example of the metrics stored for each user record.

  • User Roles – In addition to provisioning roles via Essbase, Microsoft Active Directory, or LDAP groups using Dodeca authentication services, you can now create your own groups directly in Dodeca and map them to users.  In addition, these new roles can be configured to be additive to the roles provided by other authentication services.  
  • Application Session Timeout – You can now prevent users from keeping a Dodeca session open indefinitely by specifying an inactivity timeout or by specifying a designated shutdown time.

The new logging features provide the ability for customers to both easily track what their users are running in the system and assist our support team if and when support is needed.  The new logging features include:
  • View Usage Logging – View usage is now automatically logged in the server.  The logs include not only identifying information for the view and the user, but also include performance information, error information, and tokens used in the view generation.

  • Client-side Request and Response XML Logging – The XML traffic traveling between the client and the server may now be logged to a directory on the client machine.  This logging expands on the Server-side Request and Response XML Logging to make it easier to gather the XML traffic for a single user.  In turn, the XML captured can be used by our support team to easily replicate transactions for our developers if and when necessary.

The selector enhancements include improvements to both view selectors and member selectors.  The improvements include:
  • View Selector Relational Hierarchy Generation – You may now populate the entire View Hierarchy, or alternatively, populate one or more branches, based on the results of a relational query.

  • View Selector Hierarchy Item Access Filters – Previously, you could control the View Hierarchies available to a given user.  This new filter provides the ability to control which view hierarchy items are presented in the view selector based on the current user’s roles.

  • Relational Treeview Point-of-View Selector List – You may now configure a hierarchy of point-of-view items based on the results of a relational query.

  • Enhanced Essbase Treeview Point-of-View Selector List Expansion and Selection Filtering – You can now filter Essbase member content and selectable status based on one or more specified filters including generation number, level number, member name, alias, or shared status.
View editing enhancements make it even easier to create and modify Views in Dodeca:
  • Enhanced SQLPassthroughDataSet Query Editing – You can now define token values to use for testing tokenized SQL queries in the Test Data Set utility.
  • Improved Token Editor – You can now view and edit tokens and their values in an improved grid layout.

Workbook scripting functionality adds more flexibility to Dodeca via 108 events that provide the opportunity for customers to extend Dodeca, 116 configurable methods (actions that can be taken in response to the events), and 138 functions that provide extended information to the workbook script.

The new workbook script functionality includes:
  • New Events
    • BeforeBuildExecute - Allows a workbook script to cancel the build before the view is covered.
    • BeforeRefreshExecute - Allows a workbook script to cancel the refresh before the view is covered.
    • Shown - Raised when the view is initially shown.  The event is raised before the framework applies the view’s AutoBuildOnOpen property, which allows a workbook script to set the property dynamically.
  • New Methods
    • ExportToExcel – Provides the ability to export view and point-of-view information in an exported Excel file.  This information is used to regenerate the view upon Excel file import which enables off-line data entry in Excel with subsequent database updates in Dodeca.
    • SetSelectorConfiguration - Provides the ability to add or remove a point-of-view selector to/from a view dynamically.
  • Enhanced Methods
    • CallWebService – Added a new RESTRequest overload, which allows web service calls to be made to RESTful web services.
    • SendEmail - Added a new ServletSMTP overload, which sends email from the Dodeca server rather than from the client.  This is useful in circumstances in which SMTP mail must come from an approved IP address.
    • SetEntry – Added a new FormulaArray overload, which allows Excel array formulas to be entered programmatically.
    • SetFill - Added a new Clear overload, which clears the fill color and pattern from the specified range.
  • New Functions
    • ColumnWidth - Returns the column width based on the active cell or a given cell.
    • RowHeight - Returns the row height based on the active cell or a given cell.
    • DataPointHasCellNote - Returns a boolean indicating if the active or specified data cell has Essbase LRO cell notes associated with it.
    • IsInCharacterRange - Returns a boolean indicating whether the specified string is limited to the specified character range.  This function can be used to detect multi-byte characters in a string.
  • Enhanced Functions
    • SheetCount - Added an optional IncludeHiddenSheets argument, which controls whether the returned count includes both visible and hidden sheets or only visible sheets.
For more information, we recommend you download and read the latest Dodeca release notes from the registered section of our website.  As always, you may also reach out to us via email or call us at 256.885.4371.

Categories: BI & Warehousing

Changes in BI Server Cache Behaviour in OBIEE 12c : OBIS_REFRESH_CACHE

Rittman Mead Consulting - Tue, 2016-05-31 13:59
The OBIEE BI Server cache can be a great way of providing a performance boost to response times for end users - so long as it's implemented carefully. Done wrong, and you're papering over the cracks and heading for doom; done right, and it's the 'icing on the cake'. You can read more about how to use it properly here, and watch a video I did about it here. In this article we'll see how the BI Server cache has changed in OBIEE 12c in a way that could prove somewhat perplexing to developers used to OBIEE 11g.

The BI Server cache works by inspecting queries as they are sent to the BI Server, and deciding if an existing cache entry can be used to provide the data. This can include direct hits (i.e. the same query being run again), or more advanced cases, where a subset or aggregation of an existing cache entry could be used. If a cache entry is used then a trip to the database is avoided and response times will typically be better - particularly if more than one database query would have been involved, or lots of additional post-processing on the BI Server.

When an analysis or dashboard is run, Presentation Services generates the necessary Logical SQL to return the data needed, and sends this to the BI Server. It's at this point that the cache will, or won't, kick in. The BI Server will accept Logical SQL from other sources than Presentation Services - in fact, any JDBC or ODBC client. This is useful as it enables us to validate behaviour that we're observing and see how it can apply elsewhere.

When you build an Analysis in OBIEE 11g (and before), the cache will be used if applicable. Each time you add a column, or hit refresh, you'll get an entry back from the cache if one exists. This has benefits - speed - but disadvantages too. When the data in the database changes, you will still get a cache hit, regardless. The only way to force OBIEE to show you the latest version of the data is to purge the cache first. You can target cache purges based on databases, tables, or even specific queries - but you do need to purge it.

What's changed in OBIEE 12c is that when you click "Refresh" on an Analysis or Dashboard, the query is re-run against the source and the cache re-populated. Even if you have an existing cache entry, and even if the underlying data has not changed, if you hit Refresh, the cache will not be used. Which kind of makes sense, since "refresh" probably should indeed mean that.

Digging into OBIEE Cache Behaviour

Let's prove this out. I've got SampleApp v506 (OBIEE 11.1.1.9), and SampleApp v511 (OBIEE 12.2.1). First off, I'll clear the cache on each, using call saPurgeAllCache();, run via Issue SQL:

 OBIS_REFRESH_CACHE

 OBIS_REFRESH_CACHE

Then I can use another BI Server procedure call to view the current cache contents (new in 11.1.1.9), call NQS_GetAllCacheEntries(). For this one particularly make sure you've un-ticked "Use Oracle BI Presentation Services Cache". This is different from the BI Server cache which is the subject of this article, and as the name implies is a cache that Presentation Services keeps.

 OBIS_REFRESH_CACHE

I've confirmed that the BI Server cache is enabled on both servers, in NQSConfig.INI

###############################################################################
#
#  Query Result Cache Section
#
###############################################################################


[CACHE]

ENABLE = YES;  # This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control
Now I create a very simple analysis in both 11g and 12c, showing a list of Airline Carriers and their Codes:  OBIS_REFRESH_CACHE After clicking Results, a cache entry is inserted on each respective system:  OBIS_REFRESH_CACHE  OBIS_REFRESH_CACHE Of particular interest is the create time, last used time, and number of times used:  OBIS_REFRESH_CACHE If I now click Refresh in the Analysis window:  OBIS_REFRESH_CACHE We see this happen to the caches:  OBIS_REFRESH_CACHE In OBIEE 11g the cache entry is used - but in OBIEE 12c it's not. The CreatedTime is evidently not populated correctly, so instead let's dive over to the query log (nqquery/obis1-query in 11g/12c respectively). In OBIEE 11g we've got:
-- SQL Request, logical request hash:  
7c365697  
SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT  
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"  
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY

-- Cache Hit on query: [[
Matching Query: SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT  
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"  
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY
Whereas 12c is:
-- SQL Request, logical request hash:  
d53f813c  
SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT  
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"  
ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY

-- Sending query to database named X0 - Airlines Demo Dbs (ORCL) (id: <<320369>>), connection pool named Aggr Connection, logical request hash d53f813c, physical request hash a46c069c: [[
WITH  
SAWITH0 AS (select T243.CODE as c1,  
     T243.DESCRIPTION as c2
from  
     BI_AIRLINES.UNIQUE_CARRIERS T243 /* 30 UNIQUE_CARRIERS */ )
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3 from ( select 0 as c1,  
     D1.c1 as c2,
     D1.c2 as c3
from  
     SAWITH0 D1
order by c3, c2 ) D1 where rownum <= 5000001

-- Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_52416_27.TBL'.
Looking closely at the 12c output shows three things:
  1. OBIEE has run a database query for this request, and not hit the cache
  2. A cache entry has clearly been created again as a result of this query
  3. The Logical SQL has a request variable set: OBIS_REFRESH_CACHE=1 This is evidently added it by Presentation Services at runtime, since the Advanced tab of the analysis shows no such variable being set:  OBIS_REFRESH_CACHE
Let's save the analysis, and experiment further. Evidently, the cache is being deliberately bypassed when the Refresh button is clicked when building an analysis - but what about when it is opened from the Catalog? We should see a cache hit here too:  OBIS_REFRESH_CACHE Nope, no hit.  OBIS_REFRESH_CACHE But, in the BI Server query log, no entry either - and the same on 11g. The reason being .... Presentation Service's cache. D'oh! From Administration > Manage Sessions I select Close All Cursors which forces a purge of the Presentation Services cache. When I reopen the analysis from the Catalog view, now I get a cache hit, in both 11g and 12c:  OBIS_REFRESH_CACHE The same happens (successful cache hit) for the analysis used in a Dashboard being opened, having purged the Presentation Services cache first.  OBIS_REFRESH_CACHE So at this point, we can say that OBIEE 11g and 12c both behave the same with the cache when opening analyses/dashboards, but differ when refreshing the analysis. In OBIEE 12c when an analysis is refreshed the cache is deliberately bypassed. Let's check on refreshing a dashboard:  OBIS_REFRESH_CACHE Same behaviour as with analyses - in 11g the cache is hit, in 12c the cache is bypassed and repopulated  OBIS_REFRESH_CACHE To round this off, let's doublecheck the behaviour of the new request variable that we've found, OBIS_REFRESH_CACHE. Since it appears that Presentation Services is adding it in at runtime, let's step over to a more basic way of interfacing with the BI Server - nqcmd. Whilst we could probably use Issue SQL (as we did above for querying the cache) I want to avoid any more behind-the-scenes funny business from Presentation Services. In OBIEE 12c, I run nqcmd:
/app/oracle/biee/user_projects/domains/bi/bitools/bin/nqcmd.sh -d AnalyticsWeb -u prodney -p Admin123
Enter Q to enter a query, as follows:
SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY
In `obis1-query.log' there's the cache bypass and populate:
Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_53779_29.TBL'.
If I run it again without the OBIS_REFRESH_CACHE variable:
SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY
We get the cache hit as expected:
-------------------- Cache Hit on query: [[
Matching Query: SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY
Created by:     prodney
Out of interest I ran the same two tests on 11g -- both resulted in a cache hit, since it presumably ignores the unrecognised variable. Summary In OBIEE 12c, if you click "Refresh" on an analysis or dashboard, OBIEE Presentation Services forces a cache-bypass and cache-reseed, ensuring that you really do see the latest version of the data from source. It does this using the request variable, new in OBIEE 12c, OBIS_REFRESH_CACHE. Footnote Courtesy of Steve Fitzgerald: As per the presentation server xsd, you can also revert the behavior (not sure why one would, but you can) in the instanceconfig.xml
<Cache>  
<Query>
<RefreshIncludeBIServerCache>false</RefreshIncludeBIServerCache>
</Query>
</Cache>

Categories: BI & Warehousing

Comment on SOLUTION: A Strange Condition in the Where Clause by lotharflatz

Oracle Riddle Blog - Tue, 2016-05-31 13:18

Hi Martin,

Thanks. Good point. It's just hard to find the right name for the column. I will use Mohamed’s “DDL optimized” column instead.

Lothar


Categories: DBA Blogs

Comment on SOLUTION: A Strange Condition in the Where Clause by Martin Preiss

Oracle Riddle Blog - Tue, 2016-05-31 11:14

Lothar,
a nice example and certainly a feature with interesting (and sometimes strange) effects. But I am not sure if I would use the term “virtual” in this context since – if my memory serves me well – the column is technically not a virtual column. Though I would certainly agree that storing the defaults only in the dictionary for rows that have been there before the addition of the column makes these values somehow virtual. Mohamed Houri has written an instructive OTN article containing some additional details: http://www.oracle.com/technetwork/articles/database/ddl-optimizaton-in-odb12c-2331068.html.

Regards
Martin


Categories: DBA Blogs

Changes in BI Server Cache Behaviour in OBIEE 12c : OBIS_REFRESH_CACHE

Rittman Mead Consulting - Tue, 2016-05-31 10:36

The OBIEE BI Server cache can be a great way of providing a performance boost to response times for end users – so long as it’s implemented carefully. Done wrong, and you’re papering over the cracks and heading for doom; done right, and it’s the ‘icing on the cake’. You can read more about how to use it properly here, and watch a video I did about it here. In this article we’ll see how the BI Server cache has changed in OBIEE 12c in a way that could prove somewhat perplexing to developers used to OBIEE 11g.

The BI Server cache works by inspecting queries as they are sent to the BI Server, and deciding if an existing cache entry can be used to provide the data. This can include direct hits (i.e. the same query being run again), or more advanced cases, where a subset or aggregation of an existing cache entry could be used. If a cache entry is used then a trip to the database is avoided and response times will typically be better – particularly if more than one database query would have been involved, or lots of additional post-processing on the BI Server.

When an analysis or dashboard is run, Presentation Services generates the necessary Logical SQL to return the data needed, and sends this to the BI Server. It’s at this point that the cache will, or won’t, kick in. The BI Server will accept Logical SQL from other sources than Presentation Services – in fact, any JDBC or ODBC client. This is useful as it enables us to validate behaviour that we’re observing and see how it can apply elsewhere.

When you build an Analysis in OBIEE 11g (and before), the cache will be used if applicable. Each time you add a column, or hit refresh, you’ll get an entry back from the cache if one exists. This has benefits – speed – but disadvantages too. When the data in the database changes, you will still get a cache hit, regardless. The only way to force OBIEE to show you the latest version of the data is to purge the cache first. You can target cache purges based on databases, tables, or even specific queries – but you do need to purge it.
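
For reference, here's a quick sketch of what those targeted purges look like when run via Issue SQL or nqcmd. The database and table names below are borrowed from the example later in this article, and the exact arguments can vary by version, so treat this as illustrative:

-- purge the entire cache
call SAPurgeAllCache();
-- purge all cache entries that reference one physical database
call SAPurgeCacheByDatabase('X0 - Airlines Demo Dbs (ORCL)');
-- purge cache entries that reference a specific physical table
call SAPurgeCacheByTable('X0 - Airlines Demo Dbs (ORCL)', NULL, 'BI_AIRLINES', 'UNIQUE_CARRIERS');
-- purge cache entries matching a given Logical SQL statement
call SAPurgeCacheByQuery('SELECT 0 s_0, "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay"');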

What’s changed in OBIEE 12c is that when you click “Refresh” on an Analysis or Dashboard, the query is re-run against the source and the cache re-populated. Even if you have an existing cache entry, and even if the underlying data has not changed, if you hit Refresh, the cache will not be used. Which kind of makes sense, since “refresh” probably should indeed mean that.

Digging into OBIEE Cache Behaviour

Let’s prove this out. I’ve got SampleApp v506 (OBIEE 11.1.1.9), and SampleApp v511 (OBIEE 12.2.1). First off, I’ll clear the cache on each, using call saPurgeAllCache();, run via Issue SQL:

Then I can use another BI Server procedure call to view the current cache contents (new in 11.1.1.9), call NQS_GetAllCacheEntries(). For this one particularly make sure you’ve un-ticked “Use Oracle BI Presentation Services Cache”. This is different from the BI Server cache which is the subject of this article, and as the name implies is a cache that Presentation Services keeps.

I’ve confirmed that the BI Server cache is enabled on both servers, in NQSConfig.INI

###############################################################################
#
#  Query Result Cache Section
#
###############################################################################


[CACHE]

ENABLE = YES;  # This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control

Now I create a very simple analysis in both 11g and 12c, showing a list of Airline Carriers and their Codes:

After clicking Results, a cache entry is inserted on each respective system:

Of particular interest is the create time, last used time, and number of times used:

If I now click Refresh in the Analysis window:

We see this happen to the caches:

In OBIEE 11g the cache entry is used – but in OBIEE 12c it’s not. The CreatedTime is evidently not populated correctly, so instead let’s dive over to the query log (nqquery/obis1-query in 11g/12c respectively). In OBIEE 11g we’ve got:

-- SQL Request, logical request hash:
7c365697
SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

-- Cache Hit on query: [[
Matching Query: SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

Whereas 12c is:

-- SQL Request, logical request hash:
d53f813c
SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

-- Sending query to database named X0 - Airlines Demo Dbs (ORCL) (id: <<320369>>), connection pool named Aggr Connection, logical request hash d53f813c, physical request hash a46c069c: [[
WITH
SAWITH0 AS (select T243.CODE as c1,
     T243.DESCRIPTION as c2
from
     BI_AIRLINES.UNIQUE_CARRIERS T243 /* 30 UNIQUE_CARRIERS */ )
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3 from ( select 0 as c1,
     D1.c1 as c2,
     D1.c2 as c3
from
     SAWITH0 D1
order by c3, c2 ) D1 where rownum <= 5000001

-- Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_52416_27.TBL'.

Looking closely at the 12c output shows three things:

  1. OBIEE has run a database query for this request, and not hit the cache
  2. A cache entry has clearly been created again as a result of this query
  3. The Logical SQL has a request variable set: OBIS_REFRESH_CACHE=1

    This is evidently added by Presentation Services at runtime, since the Advanced tab of the analysis shows no such variable being set:

Let’s save the analysis, and experiment further. Evidently, the cache is being deliberately bypassed when the Refresh button is clicked when building an analysis – but what about when it is opened from the Catalog? We should see a cache hit here too:

Nope, no hit.


But, in the BI Server query log, there’s no entry either – and the same on 11g. The reason being … the Presentation Services cache. D’oh!

From Administration > Manage Sessions I select Close All Cursors which forces a purge of the Presentation Services cache. When I reopen the analysis from the Catalog view, now I get a cache hit, in both 11g and 12c:

The same happens (successful cache hit) for the analysis used in a Dashboard being opened, having purged the Presentation Services cache first.

So at this point, we can say that OBIEE 11g and 12c both behave the same with the cache when opening analyses/dashboards, but differ when refreshing the analysis. In OBIEE 12c when an analysis is refreshed the cache is deliberately bypassed. Let’s check on refreshing a dashboard:

Same behaviour as with analyses – in 11g the cache is hit, in 12c the cache is bypassed and repopulated.

To round this off, let’s double-check the behaviour of the new request variable that we’ve found, OBIS_REFRESH_CACHE. Since it appears that Presentation Services is adding it in at runtime, let’s step over to a more basic way of interfacing with the BI Server – nqcmd. Whilst we could probably use Issue SQL (as we did above for querying the cache), I want to avoid any more behind-the-scenes funny business from Presentation Services.

In OBIEE 12c, I run nqcmd:

/app/oracle/biee/user_projects/domains/bi/bitools/bin/nqcmd.sh -d AnalyticsWeb -u prodney -p Admin123

Enter Q to enter a query, as follows:

SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY

In obis1-query.log there’s the cache bypass and populate:

Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_53779_29.TBL'.

If I run it again without the OBIS_REFRESH_CACHE variable:

SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY

We get the cache hit as expected:

-------------------- Cache Hit on query: [[
Matching Query: SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY
Created by:     prodney

Out of interest I ran the same two tests on 11g — both resulted in a cache hit, since it presumably ignores the unrecognised variable.

Summary

In OBIEE 12c, if you click “Refresh” on an analysis or dashboard, OBIEE Presentation Services forces a cache-bypass and cache-reseed, ensuring that you really do see the latest version of the data from source. It does this using the request variable, new in OBIEE 12c, OBIS_REFRESH_CACHE.

Footnote

Courtesy of Steve Fitzgerald:

As per the presentation server xsd, you can also revert the behavior (not sure why one would, but you can) in the instanceconfig.xml

<Cache>
<Query>
<RefreshIncludeBIServerCache>false</RefreshIncludeBIServerCache>
</Query>
</Cache>
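
For orientation (a sketch assuming the standard instanceconfig.xml layout, so verify against your own file): the element nests under the ServerInstance root, and Presentation Services needs a restart to pick up the change:

<ServerInstance>
   <!-- ... other settings ... -->
   <Cache>
      <Query>
         <RefreshIncludeBIServerCache>false</RefreshIncludeBIServerCache>
      </Query>
   </Cache>
</ServerInstance>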


Categories: BI & Warehousing

resizing Amazon RDS Oracle EE

Pat Shuff - Tue, 2016-05-31 09:38
Yesterday we looked at connecting to DBaaS with the Oracle platform as a service option. We wanted to extend the database storage because we expect the tablespace to grow beyond current capacity. We are going to do the same thing today for Amazon RDS running an Oracle Enterprise Edition 12c instance. To start our journey, we need to create a database instance in Amazon. This is done by going to the Amazon AWS Console and clicking on RDS to launch a database instance. We then click on the Launch DB Instance button and click on the Oracle tab. We select Enterprise Edition, then Dev/Test for our example. We accept the defaults, define our database with ORACLE_SID=pri, a username of oracle, and no multi-AZ replication, and select the processor and memory size.

While the database is creating we need to change the default network configuration. By default port 1521 is open, but only to an ip address range. We need to open this up so that we can connect to the database from our desktop instance. We are using a Windows 2012 instance in the Oracle IaaS cloud, so the default mapping back to the desktop we used to create the database does not work. Note that since we do not have permission to connect via ssh, connecting with a tunnel is not an option. The only security options we have are ip white listing, a vpn, or opening up port 1521 to the world. This is done by going into the security groups definition on the detail page of the instance and changing the default inbound rule from an ip address to anywhere.

We could have alternatively defined a security group that links the ip address of our IaaS instance as well as our desktop prior to creation of this database to keep security a little tighter. Once the database is finished creating we can connect to it. We get the connection string (DNS address) and open up SQL Developer. We create a new database connection using the sid of pri, username of oracle, and port 1521. Once we connect we can define the DBA view to allow us to manage parts of the database since we do not have access using Enterprise Manager.

It looks like the tablespace will autoextend into the available space; all we should have to do is extend the /rdsdata partition. This is done by modifying the RDS instance from the console. We change the storage from the 20 GB that we created to 40 GB, turn on advanced monitoring (not necessary for this exercise), and check the apply immediately button. This reconfigures the database and extends the storage. Note the resize command that happens for us. This is a sysdba level command that is executed on our behalf since we do not have sys rights outside the console.
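
If you'd rather script the change than click through the console, a minimal sketch with the AWS CLI (assuming the CLI is configured and the DB instance identifier is pri) would be:

# grow allocated storage from 20 GB to 40 GB, applied immediately
aws rds modify-db-instance \
    --db-instance-identifier pri \
    --allocated-storage 40 \
    --apply-immediately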

We can look at the new instance and see that the size has grown and we have more space to expand into.

We can see the changes from the monitoring console

In summary, we are able to easily scale storage in an Amazon RDS instance even though we do not have sys access to the system. We do need to use the AWS Console to make this happen, and can not do this through Enterprise Manager because we can't add the agent to the instance. It is important to note that some options are available from the console and some through altered command line calls that give you elevated admin privileges without giving you system access. Look at the new command structures and decide if forking your admin tools just to run on RDS is worth doing or too much effort. These changes effectively lock you into running your Oracle database on Amazon RDS. For example, to change the default tablespace in Oracle you would typically type

alter database default tablespace users2;

but with Amazon RDS you would need to type

exec rdsadmin.rdsadmin_util.alter_default_tablespace('users2');
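
The same wrapper pattern shows up for other sysdba style tasks. For illustration (the sid and serial values here are placeholders):

-- conventional: alter system flush buffer_cache;
exec rdsadmin.rdsadmin_util.flush_buffer_cache;
-- conventional: alter system kill session '123,456';
exec rdsadmin.rdsadmin_util.kill(sid => 123, serial => 456);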

Is this a show stopper? It could be or it might be trivial. It could stop some pre-packaged applications from working in Amazon RDS and you are forced to go with EC2 and S3. The upgrade storage example that we just went through would be a manual process with the involvement of an operating system administrator thus incurring additional cost and time. Again, this blog is not about A is better than B. This blog is about showing things that are hidden and helping you decide which is better for you. Our recommendation is to play with Amazon RDS and Oracle PaaS and see which fits your needs best.

TEAM's Informatics Webcast: Don't Regret - Redact & Protect Sensitive Information

WebCenter Team - Tue, 2016-05-31 08:21


Greetings!

Protecting sensitive data is extremely important for your organization. Release of public records, data breaches, and simple sharing between parties can cause unnecessary distress and costs when names, addresses, credit card numbers, etc. become public. Personally Identifiable Information, or PII, is any data that could potentially enable identification of a specific individual. This data being made public can dramatically increase the potential for identity theft and removes the level of anonymity necessary in public disclosures.
With a single disclosure breach reaching an estimated cost as high as seven million dollars ($7m) and affecting all manner of agencies, corporations, and hospitals across the globe every year, why regret not securing your documents at the point of capture?
TEAM's Redaction Engine utilizes multiple technologies to scan for and identify sensitive data and then enables automatic redaction of that information in subsequently generated PDFs.
Join this Webinar to secure your organization's future and protect from unnecessary litigation due to a preventable data breach.

Features & Benefits
  • Ability to handle both scanned and electronic data
  • High accuracy in field recognition with confidence threshold
  • Maintain compliance protocols such as HIPAA, PCI, and many more
  • Integrated into Oracle WebCenter Content 11g & 12c
  • Reporting via search analytics
  • Pattern Recognition (regex and basic) redacts information such as the following (see the example patterns after this list):
    • SS number
    • Date of Birth
    • Home Address
    • Identification number/medical forms
    • Credit Card numbers
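
By way of illustration only (these are not TEAM's actual patterns), regular expressions for fields like these typically look something like:

  SS number:     \b\d{3}-\d{2}-\d{4}\b
  Date of Birth: \b\d{2}/\d{2}/\d{4}\b
  Credit Card:   \b(?:\d[ -]?){13,16}\b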

TEAM's Redaction Engine Webinar
June 23rd, 2016 - 1:00PM CST 




NoSQL for SQL Developers

Gerger Consulting - Tue, 2016-05-31 06:00
Oracle Developers! Want to learn more about NoSQL, but don't know where to start? Look no further.

Attend our free webinar by the venerable Pramod Sadalage. He'll answer all the questions you have about NoSQL but were too afraid to ask. :-) 

After all, he wrote the book.


 Sign up at this link. 
Categories: Development

BPEL Chapter 1: Hello World BPEL Project

Darwin IT - Mon, 2016-05-30 08:50
Long ago, back in 2004/2005 when Oracle released Oracle BPEL 10.1.2 (and its predecessor, the generally available release of the rebranded Collaxa product) and in 2006 with the release of the first SOASuite 10.1.3, you had a Project per BPEL process. Each project was set up around the BPEL process file. Since 11g, BPEL is a component in the Service Component Architecture (SCA) and a project can contain multiple BPEL components together with other components such as Mediator, Rules, etc. In a later section I'll elaborate on the SCA setup, but for now I'll focus on the BPEL component. When I do, I'll probably edit this introduction.

I had to make this introduction because using 12c we have to create a SOA Application with a SOA Project for our first BPEL process.

I assume you have installed the SOA QuickStart. If not, do so, for instance with the use of the silent install. That article describes the installation of the BPM QuickStart, but for SOA QuickStart it works exactly the same; you might just want to change the home folder from c:\oracle\JDeveloper\12210_BPMQS to c:\oracle\JDeveloper\12210_SOAQS.

Then start JDeveloper from the JDeveloper home. If you used the silent install option, then you can start JDeveloper via c:\oracle\JDeveloper\12210_BPMQS\jdeveloper\jdev\bin\jdev64W.exe.
Create SOA Application

After JDeveloper has started, click on the 'New Application' link, top right. You can also use the New icon or the File->New menu, option Application or From Gallery:
You'll get the following screen:



Choose SOA Application and click OK. This results in the following first page of the Create SOA Application wizard:
Name the application 'HelloWorld' and provide a Directory for the root folder for the application. For instance 'c:\Data\Source\HelloWorld'. And click Next.
Then provide a name for the project. For now you can use the same name as the application. The Project Directory should be adapted automatically. If not, make sure it's a subfolder in the application's directory, and click Next:

In the last step of the wizard you can choose a Standard Composite (with a start component) or a Template for the composite. You could choose a composite with a BPEL Process, but for now, let's choose an Empty Composite, since we need to create a schema first.

JDeveloper SOA Application Screen Overview

When you click Finish your JDeveloper screen looks more or less like:

Top left you'll find the Project Navigator that shows the generated artefacts that you'll find in the project folder. In the middle of the left panes you'll find a Resource Navigator which enables you to edit some specific property files in the SOA Application. But we'll leave them for the moment. Bottom left you'll find two tabs, one of which is the structure pane. This one comes in handy occasionally, but we'll get into that later as well.

In the middle you'll find the Composite Designer. We'll cover it later more extensively, but for now this is the start of assembling our application. Top Right you'll find a collapsed pane for the Components. You can expand it by clicking on it:
 
Top right you'll find a button to dock the pane, which will make the pane visible permanently. This is the default, but to free up space and have a larger portion of your screen available for the designer(s), you can collapse it using the minus-icon in the same place as the Dock button:


To add components to the application you simply drag and drop them on the appropriate pane.
Create an XML Schema

A BPEL process is exposed as a service in most cases. To do so you have to create a WSDL (WebService Definition Language) document. This is an interface contract for a service, described in XML. It defines the input and output of a service and the possible operations on the service. Each possible operation needs to refer to its input and possible output message. And each message refers to the definition of that message. The message definition is done as an element in an XML Schema Definition (XSD). Although you can have it auto-generated, it's recommended to do a bottom-up definition of the contract. So let's start with the XSD.

Using the File->New->From Gallery menu choose 'XML Schema' and click OK:
 
Enter the details of the file: name it 'HelloWorld.xsd'. The Directory is per default the 'Schemas' folder within the project. Namespaces are important in an xsd and the wsdl, so give a proper namespace, like 'http://www.darwin-it.nl/xsd/HelloWorld', and provide a convenient prefix for it, like 'hwd':

This generates the following  xsd:
 
The xsd should contain an element for the request as well as the response message. First rename the 'exampleElement' to 'helloWorldRequestMessage'. If you click on the element, the name of the element should be highlighted. Then you can change the name by starting to type the new name. You can also bring up the properties pane and change the name there:


Change the name to helloWorldRequestMessage; I like elements to have a proper name, and in this case it's defining a request message for the HelloWorld service. I use the Java-standard Camel Case notation, where elements have a lowercase first character.

It's time to extend the XSD. Since we're in the XSD editor, the component palette has changed and contains the possible components to use in an XSD:

Drag and drop a new element from the palette to just under the helloWorldRequestMessage, until you see a line under the element and a plus-sign at the mouse-pointer:



Name the new element helloWorldResponseMessage.
In the same way add a complexType and name it HelloWorldRequestMessageType. I don't like implicit nameless complexTypes within elements, so I tend to always create named complexTypes, which I name the same as the corresponding elements, except for an uppercase first character and the suffix 'Type'.

Do the same for the complexType HelloWorldResponseMessageType.

Then in the HelloWorldRequestMessageType add a sequence:

And in the sequence drag and drop an element called 'name'.
Do the same for HelloWorldResponseMessageType, except name the element 'greeting'.

Now we need to set a type for the elements. Right-click on the name element and choose 'Set Type' from the context menu:


In the dropdown box choose xsd:string:

You can also set the type from the properties:

Set also the greeting element to xsd:string.

Then set the helloWorldRequestMessage to the type hwd:HelloWorldRequestMessageType:



And the helloWorldResponseMessage to the type hwd:HelloWorldResponseMessageType accordingly.
After saving it, the resulting xsd should look like:
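
The original post shows a screenshot of the schema source here. Reconstructed from the steps above, the resulting HelloWorld.xsd should be equivalent to the following sketch (details such as elementFormDefault follow JDeveloper's defaults and may differ slightly):

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:hwd="http://www.darwin-it.nl/xsd/HelloWorld"
            targetNamespace="http://www.darwin-it.nl/xsd/HelloWorld"
            elementFormDefault="qualified">
  <!-- request and response elements, typed by the named complexTypes below -->
  <xsd:element name="helloWorldRequestMessage" type="hwd:HelloWorldRequestMessageType"/>
  <xsd:element name="helloWorldResponseMessage" type="hwd:HelloWorldResponseMessageType"/>
  <xsd:complexType name="HelloWorldRequestMessageType">
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
  <xsd:complexType name="HelloWorldResponseMessageType">
    <xsd:sequence>
      <xsd:element name="greeting" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>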


Create the WSDL

Now we can create a WSDL based on the xsd. JDeveloper has a nice WSDL generator and editor.
The easiest way to generate or kickstart a wsdl is to drag and drop a SOAP service from the component palette:

to the Exposed Services lane on the canvas:
This opens the dialog for defining the SOAP Service. Name it HelloWorld. Then you can choose a WSDL or generate/define it from scratch based on the xsd. To do so click on the cog:

The name of the wsdl is pre-defined by the SOAP service, but you can leave the suggestion. Do yourself a favor and provide a sensible namespace. I chose 'http://xmlns.darwin-it.local/xsd/HelloWorld/HelloWorldWS'. Think of a good convention based on a company base url (I'll get back to it later, but it's common convention to use a URI that does not need to, and in most cases doesn't, refer to an actual existing internet address).
Set the Interface type to 'Synchronous Interface' and click the green plus icon to define a message for the Input:

There you can give the message part a name; you could define a convention for this too, but there is actually no real point in doing so, so you can leave it at the suggested 'part1'. Click on the magnifier icon and look for the HelloWorld.xsd under Project Schema Files. Select the 'helloWorldRequestMessage' and click OK:

Back in the Add Message Part dialog, click OK again:
Back in the Create WSDL dialog, you can leave the names for Port Type and Operation at the proposed values. For this service, execute is quite a proper name, but in the future it is wise to think of a proper verb that gives a good indication of what the operation means. Later, when I discuss WSDLs, you'll find that a WSDL can hold several operations and porttypes.

For now click OK again:

And also in the Create Web Service dialog click OK again:




What results in a HelloWorldWS Service in the Composite definition:
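
Under the covers, the generated HelloWorldWS.wsdl should look roughly like the sketch below. The message and portType names here are hypothetical (the wizard proposes its own), and the binding and service sections are omitted for brevity:

<definitions name="HelloWorldWS"
             targetNamespace="http://xmlns.darwin-it.local/xsd/HelloWorld/HelloWorldWS"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://xmlns.darwin-it.local/xsd/HelloWorld/HelloWorldWS"
             xmlns:hwd="http://www.darwin-it.nl/xsd/HelloWorld"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema>
      <!-- pull in the element definitions from the schema we built earlier -->
      <xsd:import namespace="http://www.darwin-it.nl/xsd/HelloWorld"
                  schemaLocation="../Schemas/HelloWorld.xsd"/>
    </xsd:schema>
  </types>
  <message name="HelloWorldRequestMessage">
    <part name="part1" element="hwd:helloWorldRequestMessage"/>
  </message>
  <message name="HelloWorldResponseMessage">
    <part name="part1" element="hwd:helloWorldResponseMessage"/>
  </message>
  <portType name="HelloWorldWS_ptt">
    <operation name="execute">
      <input message="tns:HelloWorldRequestMessage"/>
      <output message="tns:HelloWorldResponseMessage"/>
    </operation>
  </portType>
</definitions>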

Create the HelloWorld BPEL process

At this point we're ready to add a BPEL Process. In the component palette look for the BPEL Process icon:

and drag&drop it on the Components lane in the Composite canvas:




This will bring up the following dialog.



Fill in the following properties:

  Property     Value
  Name         HelloWorldBPELProcess
  Namespace    http://xmlns.darwin-it.nl/bpel/HelloWorld/HelloWorldBPELProcess
The namespace is not really important here; you could leave it at the suggested value. It's only used internally, not exposed, since we're using a pre-defined wsdl to expose the service. However, you can change it to a proper value, like here, to identify it as a BPEL process created by you/your company. This can be sensible when using tools like Oracle Enterprise Repository, but especially when you generate the wsdl by exposing the BPEL process as a SOAP service.

Click on the 'Find Existing WSDLs' (import) icon to bring up the following WSDL search dialog:

Here you see that you can browse for WSDLs through various channels: look into an Application Server for already deployed services, search in the SOA-MDS (more on that later), and so on. But since we have the WSDL in our project, choose the File System option.

Within the project navigate to the WSDLs folder and choose the HelloWorldWS.wsdl, and click OK.
This will bring us back to the Create BPEL Process dialog. Important here is to deselect the 'Expose as a SOAP service' checkbox, since we want to reuse the already defined SOAP service instead of having a new service generated. In case you missed this and a new service is generated, you can delete it easily.

Now the only thing to do, before implementing the process, is to 'wire' the HelloWorldWS SOAP Service to the BPEL Process. Do so by picking up the '>' icon and dragging it over to the top '>' icon in the BPEL Process:


You'll see that a green line appears between the start icon, that gets a green circle around it, and the possible target icons.

Implement the BPEL process

Open the BPEL designer by right-clicking on the BPEL component and choosing 'Edit', or by double-clicking it:





This opens the BPEL Designer with a skeleton BPEL process:




It shows a so-called 'Partner Link' to the left. There are two 'Partner Links' lanes, one on each side of the process. They're equal; the designer picks a lane for each new Partner Link as it finds appropriate, but you can drag and drop them to the other side when it suits the drawing better. The so-called client Partner Link, however, is always shown on the left at the start, and it is common to keep it there.

Then you'll see two activities in the main process scope: receiveInput and replyOutput. The first receives the call and initiates the BPEL Process.

If you edit this activity, you'll be able to see and edit its name. It shows the input variable where the request document is stored. And, very important, the checkbox 'Create Instance' is checked. This means that this is the input activity that initiates a process instance; it is essential that this is the very first Receive or Pick activity in the BPEL process.

The Reply activity is the counterpart of the first Receive and is only applicable for a synchronous BPEL process. It responds to the calling Service Consumer with the contents of the outputVariable.

So in between we need to build up the contents of the outputVariable based upon the provided input.

Since we're in the BPEL Designer, you may have noticed that the Component Palette has changed. It shows the BPEL Constructs and Activities, and below those some other extension categories that I'll probably cover in a more advanced chapter:



Take the Assign activity:



and drag and drop it between the 'receiveInput' and 'replyOutput' activities:

When dragging the Assign activity, you'll notice the green add-bullets denoting the valid places where you could drop the activity. This results in:



You can right-click-and-edit or double-click the activity:



Which results in the Assign Editor. It opens on the Copy Rules tab, but first click on the General tab to edit the Name of the Assign:


Switch back to the Copy Rules Tab. On the right, locate the outputVariable and 'Expand All Child Nodes', by right-clicking it and choose the appropriate option:



Then drop the Expression icon (the right-most in the button bar) on the greeting element of the variable:


This brings up the Expression Builder:



Here choose the concat() function under the category 'String Functions' under Functions. Place the cursor between the brackets and locate the name element under the helloWorldRequestMessage element under inputVariable.part1. Edit the expression to prefix the name with 'Hello ' and suffix it with an exclamation mark, resulting in the expression:
concat( 'Hello ', $inputVariable.part1/ns2:name,'!')

This results in:




Here you see that an element of the inputVariable is used in a function to build up the greeting. For more complex XSDs you can add several copy rules, as many as needed, just by dragging and dropping elements to their target elements. For now, the assign is finished, so click OK.



Your process now looks like the example above. Save the process by clicking Save or Save All; the icons look like old-school floppy disks.
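
For reference, in the source view of the BPEL process the assign we just built should look something like this sketch (assuming BPEL 2.0 syntax; the activity name and the ns2 prefix are whatever your editor generated):

<assign name="AssignGreeting">
  <copy>
    <!-- build the greeting from the request's name element -->
    <from>concat('Hello ', $inputVariable.part1/ns2:name, '!')</from>
    <to>$outputVariable.part1/ns2:greeting</to>
  </copy>
</assign>
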
Fire up the Integrated Weblogic

To be able to test the BPEL process, we'll need a SOA Server. At a real-life project, a proper server installation will (preferably and presumably) be provided to you and your team. This is recommended even for development, to be able to do integration tests for a complex chain of services and to work against enterprise databases and information systems.

But for relatively small unit tests like this, Oracle provides an integrated weblogic in the SOA QuickStart topology of JDeveloper, with a complete SOASuite and Service Bus. To start it choose the menu option Run->Start Server Instance (IntegratedWeblogicServer):



The first time you do this, you'll have to provide a user id for the administrator (usually weblogic) and a corresponding password (often welcome1). Provide the credentials to your liking (and remember them):


When clicking OK, a log viewer pops up at the bottom that shows the server output log of the integrated weblogic server:




Wait until the message appears:

SOA Platform is running and accepting requests. Start up took 92300 ms
IntegratedWebLogicServer startup time: 363761 ms.
[IntegratedWebLogicServer started.]

Now the SOA Suite Server is available to receive the HelloWorld Composite and do some tests.

Deploy ...

Deploying a SOA Composite is a special kind of deployment that differs from deploying a regular Web Application. Although you can find a deploy menu option in the Application menu of JDeveloper, it is not suitable for SOA Composite deployments. For SOA Composite projects, you need to right-click on the project name and choose Deploy and then the name of the Project.


This starts a wizard with two options on the first page: Deploy to Application Server and Generate SAR File:

A SAR file is a so-called SOA Archive; it is used to have the project deployed by an administrator to a target environment without the need for JDeveloper. Choose the option Deploy to Application Server and click Next.

On the next page, select the Overwrite any existing composite with the same revision ID checkbox, but deselect Keep running instances after redeployment. Although this is the first deployment (so there are no existing running instances yet), after this deployment you'll find that this profile is added to the deployment context menu, and not checking the Overwrite... option would cause a failure in the next deployments. There's no need to keep existing instances running. Keep the Revision ID and click Next.

Select the pre-defined IntegratedWeblogicServer and click Finish.


The log pane is opened on the SOA tab:

If you've done everything OK, then a BUILD SUCCESSFUL message is shown. By the way, although it might not be clear to you, the integrated build tool Apache ANT is used for this.

When the BUILD SUCCESSFUL message is shown, you can switch to the Deployment tab. After building the SAR it is sent to the weblogic server. If this finishes without errors you'll be able to test it.

... and test the BPEL process

Testing and monitoring the SOA Composite instances can be done using the Enterprise Manager - Fusion Middleware Control.

To open it, start an Internet Browser and navigate to http://localhost:7101/em (or the server:port where your admin-server runs, extended with the URI '/em'). This should bring you to the Login page:



Use the credentials you used to start the IntegratedWeblogicServer for the first time to login.
Then the Enterprise Manager - Fusion Middleware Control landing page is opened for the Weblogic Domain.

Click on the Navigator Icon to open up the Target Navigation and navigate to the composite under SOA->soa-infra->default:



Here the SOA node contains both service-bus projects and SOA Suite (soa-infra) Composite Deployments. The Default node is actually the Default partition; you can create several partitions to catalog your deployments.

Open the HelloWorld[1.0] composite by clicking on it. Click on the Test button:




You can select a Service, Port and Operation as defined in the composite. Expand the nodes under SOAP Body to navigate to the name input field. Fill in a nice value for name and click on Test Web Service:
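
Behind the scenes the test page simply posts a SOAP request. As a sketch (the name value is just an example, and the qualified child elements assume the schema uses elementFormDefault="qualified"), the request looks along these lines:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Body>
      <hwd:helloWorldRequestMessage xmlns:hwd="http://www.darwin-it.nl/xsd/HelloWorld">
         <hwd:name>World</hwd:name>
      </hwd:helloWorldRequestMessage>
   </soap:Body>
</soap:Envelope>

...and, given the assign we built, the body of the response should carry:

<hwd:helloWorldResponseMessage xmlns:hwd="http://www.darwin-it.nl/xsd/HelloWorld">
   <hwd:greeting>Hello World!</hwd:greeting>
</hwd:helloWorldResponseMessage>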


After a (very) short while (depending on the complexity of the service) the response will be shown:




Only for synchronous services will a response message be shown, but you can (practically) always click on the Launch Flow Trace button to review the running, faulted or completed instance:




The flowtrace shows the different initiated components in the flow in a hierarchical way.
Here you can click on the particular components that you want to introspect. Clicking on the HelloWorldBPELProcess will open up the BPEL Flow:




If you click on the assign activity this will show:




It might be that in your case you'll not be able to open this, or to view the message contents. This is caused by the Audit Level of the SOA Suite, which by default is set to 'Production'. Even on the SOA QuickStart...

To resolve this you'll need to go to the Common Properties of the SOA Infrastructure. You can use the Target Navigation browser to navigate to the soa-infra, but you can also use the SOA Composite page to navigate to the SOA Infrastructure Common Properties:




Then use the SOA Infrastructure menu to navigate to SOA Administration -> Common Properties:



Here you can set the Audit Level to Development. Click on the Apply button to effectuate the change.




Do another test, and check out the BPEL Flow and review the message contents.

Conclusion

So this concludes the first part of my series on BPEL. If you bravely followed my step-by-step instructions, you should be able to do some basic navigation through JDeveloper and the Enterprise Manager Fusion Middleware Control. I hope you found this entertaining, although you might already be an experienced SOA Developer. But then you probably didn't get this far...

In the next articles I hope to slowly increase the level of the subjects. Coming up: invoking other services and using scopes. But first I probably need to explain something about SoapUI, both to provide mock services and to deliver a more convenient way to test your services.
