Feed aggregator

Oracle DBaaS CookBook using EM12c: MyNotes

Arun Bavera - Thu, 2016-06-02 10:15

Cloud/Super Administrator Tasks (EM CLOUD ADMIN Role)



Step 1: Decide on the Service Catalog offerings with the architects and cloud decision makers


Business Catalog -> Technical Catalog -> Service Catalog (may be a subset of the Technical Catalog)



A company can provide the Bronze and Silver tiers in the self-service model but choose to deploy the Gold and Platinum tiers using native Oracle Enterprise Manager 12c capabilities.
The Platinum tier should be implemented in its own reference architecture. In some cases, companies elect to provision these tiers using Oracle Enterprise Manager 12c native capabilities rather than self-service.




Step 2: Configure Software Library

Step 3: Deploy the necessary plug-ins. See Section 3.2, "Deploying the Required Plug-ins"


Step 4: Install the Management Agent on unmanaged hosts so that they can be monitored by Enterprise Manager. See Section 11.2.1, "Adding Hosts".


Step 5: Configure Privilege Delegation for the database servers (Setup > Security > Privilege Delegation). See "Configuring Privilege Delegation Settings".


Step 6: Define Named Credentials (Setup > Security > Named Credentials). See "Defining Roles and Assigning Users".
In a DBaaS implementation the configuration of the named and preferred credentials is critical. In larger implementations of DBaaS, the preferred credentials are vital, especially when automating with emcli or when using the cloud framework in conjunction with the Lifecycle Management Pack.
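
For example, a named host credential can also be created with emcli; this is a hedged sketch, with a hypothetical credential name and a placeholder password:

emcli create_named_credential -cred_name=NC_HOST_ORACLE -auth_target_type=host -cred_type=HostCreds -attributes="HostUserName:oracle;HostPassword:<password>"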


Step 7: Create Cloud Users and Roles (Setup > Security > Roles).
Quotas are assigned at the role level and users should be assigned to the roles based on the desired limits.
    MYPROJECT_SSA_DB_DEV_USER_ROLE [EM_SSA_USER is assigned to this role, also EM_USER and Public]
    MYPROJECT_SSA_DB_PROD_USER_ROLE

Note: The EM_CLOUD_ADMINISTRATOR (SYSMAN), EM_SSA_ADMINISTRATOR (SYSMAN), and EM_SSA_USER roles must already have been created. See https://docs.oracle.com/cd/E63000_01/EMCLO/cloud_paas_setup.htm#EMCLO323
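
These roles can also be created with emcli rather than the console; a sketch using the example role names from this step:

emcli create_role -name="MYPROJECT_SSA_DB_DEV_USER_ROLE" -roles="EM_SSA_USER"
emcli create_role -name="MYPROJECT_SSA_DB_PROD_USER_ROLE" -roles="EM_SSA_USER"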


Step 8: Create Self Service Portal Users (Setup > Security > Administrators).
    MYPROJECT_CLOUD_USER as an example user, or assign MYPROJECT_SSA_USER_ROLE to all members of the MYPROJECT_DBA_USERS AD group.


Step 9: Define Database Zones (Setup > Cloud > PaaS Infrastructure Zones). See http://docs.oracle.com/cd/E24628_01/doc.121/e28814/cloud_paas_setup.htm#CHDBIBGB
Define and create zones to satisfy the Reference Architecture design.
              1. DataCenter-A - Primary [ MYPROJECT_PaaS_DBZone_Primary_A ]
              2. DataCenter-B - Standby [ MYPROJECT_PaaS_DBZone_Standby_B ]
Zones can be based on geography (East/West), lifecycle (Prod/Dev), or function (ERP, DB, Siebel, etc.).
The placement policy sets the maximum constraints for CPU and memory on these hosts.
Refer to: Adding Hosts and Creating a PaaS Infrastructure Zone.


DBaaS Administrator Tasks (EM_SSA_ADMIN)


Step 10: Define Database Sizes (emcli only):
emcli create_database_size -name=Small -description="Small size database" -attributes="cpu:2;storage:2;memory:1;processes:350"
emcli create_database_size -name=Medium -description="Medium size database" -attributes="cpu:3;storage:3;memory:1;processes:500"
emcli create_database_size -name=Large -description="Large size database" -attributes="cpu:4;storage:4;memory:2;processes:700"


Step 11: Define Database Pools (Setup > Cloud > Database, then select "Database Pools"):
a) Database pool name (select a distinct name that describes the purpose of the pool; in this example: DataCenter-A Linux64-11202-SI),
b) Provide global credentials as an Oracle named credential. In this case, the grid and root credentials are optional,
c) Select the hosts and database homes that will define the pool,
d) Select the PaaS Infrastructure Zone, platform, database configuration, and database version to be installed in the pool.




Step 12: Request Settings (Setup > Cloud > Database, then select "Request Settings"; all settings apply at the self-service level):
• How far in advance a self-service user can schedule a request.
• The maximum amount of time for which a self-service user can retain an instance.
• The amount of time after which completed self-service create requests will be purged from the repository.


Step 13: Quotas (Setup > Cloud > Database, then select "Quotas"; all settings apply at the role level):
Using quotas, the cloud administrator can assign to each role:
1. The amount of memory all users assigned to the role can consume.
2. The amount of storage all users assigned to the role can consume.
3. The number of databases all users assigned to the role can request.
4. The number of schemas all users assigned to the role can request.
5. The number of pluggable databases all users assigned to the role can request.


Step 14: Define Service Templates for Self-Service Provisioning (Setup > Cloud > Database, then select "Profiles and Service Templates").





NOTE: Provide as much detail as possible in the description; this is what SSA users see when they make a request.
Cloud DBaaS User Tasks [EM_SSA_USER is assigned to this role, also EM_USER and Public]


Step 15: Use the Self Service Portal (Enterprise > Cloud > Self-Service Portal). If a user has only the SSA role, then on login they see only the SSA Portal, not the normal Enterprise Manager view.



Reference:
http://www.oracle.com/technetwork/oem/sys-mgmt/wp-em12c-monitoring-strategies-1564964.pdf
http://www.oracle.com/technetwork/oem/cloud-mgmt/dbaas-more-info-2394630.html
http://docs.oracle.com/cd/E24628_01/doc.121/e28814/toc.htm




Categories: Development

Compression -- 8 : DROPping a Column of a Compressed Table

Hemant K Chitale - Thu, 2016-06-02 09:56
Building on the series on Compression .....

What happens if we try to DROP a column in a Compressed Table? How can we execute the DROP?

Starting with BASIC Compression.


SQL> connect hemant/hemant
Connected.
SQL> create table compress_basic as select * from source_data where 1=2;

Table created.

SQL> alter table compress_basic compress;

Table altered.

SQL> insert /*+ APPEND */ into compress_basic
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_BASIC'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED BASIC

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>
SQL> !oerr ora 39726
39726, 00000, "unsupported add/drop column operation on compressed tables"
// *Cause: An unsupported add/drop column operation for compressed table
// was attemped.
// *Action: When adding a column, do not specify a default value.
// DROP column is only supported in the form of SET UNUSED column
// (meta-data drop column).

SQL>


So, I would have to set the column to UNUSED!

SQL> alter table compress_basic set unused column object_name;

Table altered.

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-00904: "OBJECT_NAME": invalid identifier


SQL> alter table compress_basic drop unused columns;
alter table compress_basic drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


I can't drop a column from a table with Compression enabled.

Is there another way ?

SQL> alter table compress_basic move nocompress;

Table altered.

SQL> alter table compress_basic drop unused columns;

Table altered.

SQL>


To actually execute the DROP, I have to uncompress the table!

So, remember: you have to be careful when designing a table that you intend to Compress. You won't be able to DROP columns without rebuilding the table!
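
As a recap, here is a sketch of the full workaround sequence on a BASIC compressed table (bear in mind that MOVE rebuilds the table and leaves any indexes UNUSABLE, so they would need rebuilding):

SQL> alter table compress_basic set unused column object_name;
SQL> alter table compress_basic move nocompress;
SQL> alter table compress_basic drop unused columns;
SQL> alter table compress_basic move compress;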


Repeating the test case with OLTP Compression :

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL>
SQL> insert into compress_oltp
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


Much easier, I could DROP the column.  But, wait.  Is there a catch ?

SQL> drop table compress_oltp purge;

Table dropped.

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL> alter table compress_oltp nocompress;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

SQL> alter table compress_oltp drop (object_name);
alter table compress_oltp drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


If I ALTER the table to NOCOMPRESS (which takes effect only on *new* rows, not existing rows), I cannot DROP a column.  This is because Oracle is unsure if there is a mix of Compressed and Non-Compressed rows in the table now.

What I'd have to do is to rebuild it as a NOCOMPRESS table.

SQL> alter table compress_oltp move nocompress;

Table altered.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


So, once a table is set to COMPRESS OLTP and then set to NOCOMPRESS, you can't simply DROP a column.


This test-case came out of an issue a friend of mine faced today.  He found that he couldn't drop a column from a table that was formerly set to COMPRESS FOR OLTP.

I pointed him to Support Document 1288918.1

.
.
.

Categories: DBA Blogs

OTN Cloud Developer Challenge 2016

Tim Hall - Thu, 2016-06-02 09:31

The Oracle ACE Program decided to tag an extra day on to the front of the AMIS 25 Beyond the Horizon conference for the “OTN Cloud Developer Challenge”. The idea was for teams of ACEs to get together and build applications using Oracle Cloud services. It was originally called a cloud hackathon, but got renamed to something that didn’t contain the word “hack”.

Oracle Application Express 5.1 Early Adopter 1 now available!

Marc Sewtz - Thu, 2016-06-02 08:55

We are very excited to announce the beginning of the Oracle Application Express 5.1 Early Adopter program, at https://apexea.oracle.com. The Early Adopter is an open-to-the-public hosted beta program. And we’re inviting everyone interested in Oracle Application Express - existing customers and new customers alike - to sign up for a free workspace and take our forthcoming release for a spin.

To get started, click the "Request a Workspace" button from the https://apexea.oracle.com home page, sign-in with your Oracle account (i.e. the same account you use for other Oracle sites, like OTN) and complete the sign-up wizard.

There are numerous new features to explore, including the Interactive Grid and Oracle JET charts. As this is an Early Adopter release, not everything is complete just yet. You can review a list of known issues here:

https://apex.oracle.com/pls/apex/f?p=510051:1

We’re looking for your feedback and input, which you can submit via the feedback button (see App Builder nav bar in the upper right-hand corner).

Thank you for your interest and your support!

Oracle NEXT_DAY Function with Examples

Complete IT Professional - Thu, 2016-06-02 06:00
The NEXT_DAY function is a handy date manipulation function provided by Oracle. In this article, I’ll explain how to use it and show you some examples. Purpose of the Oracle NEXT_DAY Function The Oracle NEXT_DAY function returns the date of the first weekday that comes after the specified date value. It’s good for working with […]
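
As a quick hedged illustration of the syntax (my own example; the day name is interpreted per your NLS_DATE_LANGUAGE setting):

SELECT NEXT_DAY(DATE '2016-06-02', 'MONDAY') AS next_monday
FROM dual;
-- returns 06-JUN-16: the first Monday after Thursday, June 2, 2016
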
Categories: Development

Paris to the Netherlands

Tim Hall - Thu, 2016-06-02 02:04

Getting to bed at 01:00 and having to wake up at 04:30 is not ideal, but it was worth it.

The alarm went off at 04:30, I had a quick shower, checked out of the hotel and took the shuttle across to terminal 2 in Charles de Gaulle airport. Despite the early hour, there were a lot of people around. I grabbed a coffee and caught up with my blog posts. By the time I had done that, it was time to board the short flight to Amsterdam.

We boarded on time, but unfortunately the flight was delayed by about 50 minutes, so a 50 minute flight became a 100 minute plane ride.

Links for 2016-06-01 [del.icio.us]

Categories: DBA Blogs

Creating a New Single-Node Oracle E-Business Suite Installation on Oracle Cloud

Senthil Rajendran - Thu, 2016-06-02 00:36
Before you read: Please note that the documentation and procedures for lifting and shifting on-premises systems to Oracle Cloud are constantly evolving. The instructions here are high level; I recommend reviewing the latest documentation.

Link : Provisioning a New Single-Node Oracle E-Business Suite Installation in the Oracle Compute Cloud Service

This procedure could be the quickest way of getting an E-Business Suite 12.2.5 instance in the IaaS public cloud.

In your Compute account, pull either an EBS 12.2.5 Demo Install (Vision) Image or a Fresh Install Image using the Get App method.

Here is the Demo Install EBS 12.2.5 Image



Here is the Fresh Install EBS 12.2.5 Image


Click Get App; this action copies the image to Compute Storage.


With the image available in Compute Storage, we are ready to start provisioning.

From the Compute account, click Create Instance.


Search for the Fresh Install Image and select it.


Select the shape of the machine


Give a name to the Instance


Review the storage. If you wish to add more space, you can do it now, or later once the instance is provisioned.

Finally, review and click the Create button.


From there, the backend provisioning takes over and the instance gets provisioned. Once the instance is provisioned, view it from the Compute page and log in with the assigned IP and key. Validate the environment.

What happens at the back end:
  • the fresh image has a pre-cloned environment staged and ready
  • the database is prepared first, using the config clone option
  • then the application tier is prepared using config clone
  • the run and patch file systems are set up
With this process, a base 12.2.5 environment is available to you. From here you can customize it to your needs.

Hope this helps! Thanks for reading.



Give Employees More Reasons to Stick Around

Linda Fishman Hoyle - Wed, 2016-06-01 18:40

A Guest Post by Andy Campbell, HCM strategic director (pictured left)

Winning over and retaining the best talent has never been easy, but employers today are finding it harder than ever to find people with the right skills to fill key vacancies. If businesses are to keep growing and evolving they need new ways to attract and engage the talented employees that will take them on that journey.

Interestingly, recent research from Mercer suggests that some businesses may be overestimating their ability to fill critical roles. Nine out of ten organisations anticipate the competition for talent will increase in 2016, and 70% are confident that they can fill critical roles with internal candidates. The kicker is that nearly one-third (28%) of employees who responded to the survey plan to leave their current job in the next 12 months.

Unless businesses take action, the people they have pegged to ensure their future success may not be around when the time comes.

Many Companies Aren’t in a Position to Compete

Part of the issue is that many companies aren’t currently in a position to compete for talent successfully. Mercer’s research also reveals that 85% of businesses admit their talent management programs and policies need an overhaul, but instigating and managing these changes requires support from leadership. In addition, delivering modern HR requires modern HR systems that are more in keeping with the demands of a technology-savvy workforce.

Mercer is clear in its findings: Companies looking to retain the best workers should update and expand their employee development programs and other processes. And yet only 4% of HR professionals said the HR function is viewed as a strategic partner within their organisation.

Oracle Research: What Employees Really Want

It’s a conclusion that we at Oracle agree with completely. Our own study, Simply Talent: A Western European Perspective, uncovered similar trends in Western Europe. We found that most employees desire more proactive, regular interactions with their managers, improvements that are underpinned by better employee engagement initiatives, but that once again very few (3%) workers see the HR department as having an important role in driving employee engagement.

Clearly there remains a gap between the engagement programs being run by the business and HR, and what employees now expect from their employers.

Turn Talk Into Action

So how can companies create a culture that attracts the best people and keeps them on board? The first step is to invest in understanding what motivates and engages their employees. With HR at the helm, managers should be mapping out the employee journey from on-boarding to retirement to see where points of friction may arise so they can do whatever is possible to mitigate them. This approach is similar to the way retailers and customer service teams map out the customer journey to make it more satisfying and convenient.

Regular dialogue with employees is also essential. The days of annual reviews are long gone; today’s employees need to feel both that they are appreciated and that their needs are understood, and this can only happen if line managers and senior team members have more frequent, focused discussions with employees about their career path and progress.

Most importantly, these discussions must be followed by action. If employees feel nothing comes from their talks with managers they will be more inclined to seek a company that delivers on its promises. Our research found that less than one-third of employees believe their company is proactive about engaging with them. The time has come for a sea-change in how businesses communicate with their people.

Employees are every company’s most precious resource—in deed, as well as in word. HR must take the lead in fostering an employee-focused work culture where development, training, education, and support are treated as a priority and a true dialogue exists between employees and their managers. Those companies that adapt to this new reality will be in the best possible position to ensure they have the right talent to achieve their future ambitions.

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Rittman Mead Consulting - Wed, 2016-06-01 18:17

One of the big changes in OBIEE 12c for end users is the ability to upload their own data sets and start analysing them directly, without needing to go through the traditional data provisioning and modelling process and associated lead times. The implementation of this is one of the big architectural changes of OBIEE 12c, introducing the concept of the Extended Subject Areas (XSA), and the Data Set Service (DSS).

In this article we’ll see some of how XSA and DSS work behind the scenes, providing an important insight for troubleshooting and performance analysis of this functionality.

What is an XSA?

An Extended Subject Area (XSA) is made up of a dataset and an associated XML data model. It can be used standalone, or “mashed up” in conjunction with a “traditional” subject area on a common field.

How is an XSA Created?

At the moment the following methods are available:

  1. “Add XSA” in Visual Analyzer, to upload an Excel (XLSX) document

  2. CREATE DATASET logical SQL statement, that can be run through any interface to the BI Server, including ‘Issue Raw SQL’, nqcmd, JDBC calls, and so on

  3. Add Data Source in Answers. Whilst this option shouldn’t actually be present according to this doc, it will be for any users of 12.2.1 who have uploaded the SampleAppLite BAR file, so I’m including it here for completeness.

Under the covers, these all use the same REST API calls directly into datasetsvc. Note that these are entirely undocumented, and only for internal OBIEE component use. They are not intended nor supported for direct use.

How does an XSA work?

External Subject Areas (XSA) are managed by the Data Set Service (DSS). This is a java deployment (datasetsvc) running in the Managed Server (bi_server1), providing a RESTful API for the other OBIEE components that use it.

The end-user of the data, whether it’s Visual Analyzer or the BI Server, sends REST web service calls to DSS, storing and querying datasets within it.

Where is the XSA Stored?

By default, the data for XSA is stored on disk in SINGLETON_DATA_DIRECTORY/components/DSS/storage/ssi, e.g. /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi

[oracle@demo ssi]$ ls -lrt /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi|tail -n5
-rw-r----- 1 oracle oinstall    8495 2015-12-02 18:01 7e43a80f-dcf6-4b31-b898-68616a68e7c4.dss
-rw-r----- 1 oracle oinstall  593662 2016-05-27 11:00 1beb5e40-a794-4aa9-8c1d-5a1c59888cb4.dss
-rw-r----- 1 oracle oinstall  131262 2016-05-27 11:12 53f59d34-2037-40f0-af21-45ac611f01d3.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:04 a4fc922d-ce0e-479f-97e4-1ddba074f5ac.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:06 c93aa2bd-857c-4651-bba2-a4f239115189.dss

They’re stored using the format in which they were created, which is XLSX (via VA) or CSV (via CREATE DATASET):

[oracle@demo ssi]$ head 53f59d34-2037-40f0-af21-45ac611f01d3.dss
"7 Megapixel Digital Camera","2010 Week 27",44761.88
"MicroPod 60Gb","2010 Week 27",36460.0
"MP3 Speakers System","2010 Week 27",36988.86
"MPEG4 Camcorder","2010 Week 28",32409.78
"CompCell RX3","2010 Week 28",33005.91

There’s a set of DSS-related tables installed in the RCU schema BIPLATFORM, which hold information including the XML data model for the XSA, along with metadata such as the user that uploaded the file, when they uploaded it, and the name of the file on disk:

How Can the Data Set Service be Configured?

The configuration file, with plenty of inline comments, is at ORACLE_HOME/bi/endpointmanager/jeemap/dss/DSS_REST_SERVICE.properties. From here you can update settings for the data set service including upload limits as detailed here.

XSA Performance

Since XSA are based on flat files stored on disk, we need to be very careful in their use. Whilst a database may hold billions of rows in a table and, with appropriate indexing and partitioning, be able to provide sub-second responses, a flat file can quickly become a serious performance bottleneck. Bear in mind that a flat file is just a bunch of data plopped on disk – there is no concept of indices, blocks, partitions — all the good stuff that makes databases able to do responsive ad-hoc querying on selections of data.

If you’ve got a 100MB Excel file with thousands of cells, and want to report on just a few of them, you might find it laggy – because whether you want to report on them or not, at some point OBIEE is going to have to read all of them regardless. We can see how OBIEE is handling XSA under the covers by examining the query log. This used to be called nqquery.log in OBIEE 11g (and before), and in OBIEE 12c has been renamed obis1-query.log.

In this example here I’m using an Excel worksheet with 140,000 rows and 78 columns. Total filesize of the source XLSX on disk is ~55Mb.

First up, I’ll build a query in Answers with a couple of the columns:

The logical query uses the new XSA syntax:

SELECT
   0 s_0,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."first_name" s_1,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."foo" s_2
FROM XSA('prodney'.'MOCK_DATA_bigger_55Mb')
ORDER BY 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

The query log shows

Rows 144000, bytes 13824000 retrieved from database query
Rows returned to Client 200

So of the 55MB of data, we’re pulling all the rows (144,000) back to the BI Server for it to then perform the aggregation on it, resulting in the 200 rows returned to the client (Presentation Services). Note though that the byte count is lower (13Mb) than the total size of the file (55Mb).

As well as aggregation, filtering on XSA data also gets done by the BI Server. Consider this example here, where we add a predicate:


In the query log we can see that all the data has to come back from DSS to the BI Server, in order for it to filter it:

Rows 144000, bytes 23040000 retrieved from database
Physical query response time 24.195 (seconds),
Rows returned to Client 0

Note the time taken by DSS — nearly 25 seconds. Compare this later on to when we see the XSA data served from a database, via the XSA Cache.

In terms of BI Server (not XSA) caching, the query log shows that a cache entry was written for the above request:

Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736113_56359_0.TBL'

If I refresh the query in Answers, the data is fetched anew (per this changed behaviour in OBIEE 12c), and the cache repopulated. If I clear the Presentation Services cache and re-open the analysis, I get the results from the BI Server cache, and it doesn’t have to refetch the data from the Data Set Service.

Since the cache has two columns in it, an attribute and a measure, I wondered if running a query with just the fact rolled up might hit the cache (since it has all the data there that it needs).

Unfortunately it didn’t, and to return a single row of data required BI Server to fetch all the rows again – although looking at the byte count it appears it does prune the columns required since it’s now just over 2Mb of data returned this time:

Rows 144000, bytes 2304000 retrieved from database
Rows returned to Client 1

Interestingly if I build an analysis with several more of the columns from the file (in this example, ten of a total of 78), the data returned from the DSS to BI Server (167Mb) is greater than that of the original file (55Mb).

Rows 144000, bytes 175104000
Rows returned to Client 1000

And this data coming back from the DSS to the BI Server has to go somewhere – and if it’s big enough it’ll overflow to disk, as we can see when I run the above:

$ ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp
[...]
-rwxrwx--- 1 oracle oinstall 2910404 2016-06-01 14:08 nQS_AG_22345_7503_7c9c000a_50906091.TMP
-rwxrwx--- 1 oracle oinstall   43476 2016-06-01 14:08 nQS_AG_22345_7504_7c9c000a_50906091.TMP
-rw------- 1 oracle oinstall 6912000 2016-06-01 14:08 nQS_AG_22345_7508_7c9c000a_50921949.TMP
-rw------- 1 oracle oinstall  631375 2016-06-01 14:08 nQS_EX_22345_7506_7c9c000a_50921652.TMP
-rw------- 1 oracle oinstall 3670016 2016-06-01 14:08 nQS_EX_22345_7507_7c9c000a_50921673.TMP
[...]

You can read more about BI Server’s use of temporary files and the impact that it can have on system performance and particularly I/O bandwidth in this OTN article here.

So – as the expression goes – “buyer beware”. XSA is an excellent feature, but used in its default configuration with files stored on disk it has the potential to wreak havoc if abused.

XSA Caching

If you’re planning to use XSA seriously, you should set up the database-based XSA Cache. This is described in detail in the PDF document attached to My Oracle Support note OBIEE 12c: How To Configure The External Subject Area (XSA) Cache For Data Blending| Mashup And Performance (Doc ID 2087801.1).

In a proper implementation you would follow in full the document, including provisioning a dedicated schema and tablespace for holding the data (to make it easier to manage and segregate from other data), but here I’m just going to use the existing RCU schema (BIPLATFORM), along with the Physical mapping already in the RPD (10 - System DB (ORCL)):

In NQSConfig.INI, under the XSA_CACHE section, I set:

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "10 - System DB (ORCL)"."Catalog"."dbo";
CONNECTION_POOL = "10 - System DB (ORCL)"."UT Connection Pool";

And restart the BI Server:

/app/oracle/biee/user_projects/domains/bi/bitools/bin/stop.sh -i obis1 && /app/oracle/biee/user_projects/domains/bi/bitools/bin/start.sh -i obis1

Per the document, note that in the BI Server log there’s an entry indicating that the cache has been successfully started:

[101001] External Subject Area cache is started successfully using configuration from the repository with the logical name ssi.
[101017] External Subject Area cache has been initialized. Total number of entries: 0 Used space: 0 bytes Maximum space: 107374182400 bytes Remaining space: 107374182400 bytes. Cache table name prefix is XC2875559987.

Now when I re-run the test XSA analysis from above, returning three columns, the BI Server goes off and populates the XSA cache table:

-- Sending query to database named 10 - System DB (ORCL) (id: <<79879>> XSACache Create table Gateway), connection pool named UT Connection Pool, logical request hash b4de812e, physical request hash 5847f2ef:
CREATE TABLE dbo.XC2875559987_ZPRODNE1926129021 ( id3209243024 DOUBLE PRECISION, first_n[..]

Or rather, it doesn’t, because PHYSICAL_SCHEMA seems to want the literal physical schema, rather than the logical physical one (?!) that the USAGE_TRACKING configuration stanza is happy with in referencing the table.

Properties: description=<<79879>> XSACache Create table Exchange; producerID=0x1561aff8; requestID=0xfffe0034; sessionID=0xfffe0000; userName=prodney;
[nQSError: 17001] Oracle Error code: 1918, message: ORA-01918: user 'DBO' does not exist

I’m trying to piggyback on SA511’s existing configuration, which uses catalog.schema notation:

Instead of the more conventional approach to have the actual physical schema (often used in conjunction with ‘Require fully qualified table names’ in the connection pool):

So now I’ll do it properly, and create a database and schema for the XSA cache – I’m still going to use the BIPLATFORM schema though…

Updated NQSConfig.INI:

[ XSA_CACHE ]

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "XSA Cache"."BIEE_BIPLATFORM";
CONNECTION_POOL = "XSA Cache"."XSA CP";

After refreshing the analysis again, there’s a successful creation of the XSA cache table:

-- Sending query to database named XSA Cache (id: <<65685>> XSACache Create table Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash ccc0a410: [[
CREATE TABLE BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 ( id3209243024 DOUBLE PRECISION, first_name2360035083 VARCHAR2(17 CHAR), [...]

as well as a stats gather:

-- Sending query to database named XSA Cache (id: <<65685>> XSACache Collect statistics Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d73151bb:
BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BIEE_BIPLATFORM', tabname => 'XC2875559987_ZPRODNE1645894381' , estimate_percent => 5 , method_opt => 'FOR ALL COLUMNS SIZE AUTO' ); END;

Although I do note that it uses a fixed estimate_percent instead of the recommended AUTO_SAMPLE_SIZE. The table itself is created with a fixed prefix (as specified in the obis1-diagnostic.log at initialisation), and holds a full copy of the XSA (not just the columns in the query that triggered the cache creation).
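
If you wanted to regather the statistics yourself with the recommended sampling, a hedged sketch (reusing the owner and table name from the statement above):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'BIEE_BIPLATFORM',
    tabname          => 'XC2875559987_ZPRODNE1645894381',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/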

With the dataset cached, the query is then run and the query log shows an XSA cache hit:

External Subject Area cache hit for 'prodney'.'MOCK_DATA_bigger_55Mb'/Columns :
Cache entry shared_cache_key = 'prodney'.'MOCK_DATA_bigger_55Mb',
table name = BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357,
row count = 144000,
entry size = 201326592 bytes,
creation time = 2016-06-01 20:14:26.829,
creation elapsed time = 49779 ms,
descriptor ID = /app/oracle/biee/user_projects/domains/bi/servers/obis1/xsacache/NQSXSA_BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357_2.CACHE

with the resulting physical query fired at the XSA cache table (replacing what would have gone against the DSS web service):

-- Sending query to database named XSA Cache (id: <<65357>>), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d3ed281d: [[
WITH
SAWITH0 AS (select T1000001.first_name2360035083 as c1,
     T1000001.last_name3826278858 as c2,
     sum(T1000001.foo2363149668) as c3
from
     BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 T1000001
group by T1000001.first_name2360035083, T1000001.last_name3826278858)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select 0 as c1,
     D102.c1 as c2,
     D102.c2 as c3,
     D102.c3 as c4
from
     SAWITH0 D102
order by c2, c3 ) D1 where rownum <= 5000001

It’s important to point out the difference of what’s happening here: the aggregation has been pushed down to the database, meaning that the BI Server doesn’t have to. In performance terms, this is a Very Good Thing usually.

Rows 988, bytes 165984 retrieved from database query
Rows returned to Client 988

Whilst it doesn’t seem to be recorded in the query log from what I can see, the data returned from the XSA Cache also gets inserted into the BI Server cache, and if you open an XSA-based analysis that’s not in the presentation services cache (a third cache to factor in!) you will get a cache hit on the BI Server cache. As discussed earlier in this article though, if an analysis is built against an XSA for which a BI Server cache entry exists that with manipulation could service it (eg pruning columns or rolling up), it doesn’t appear to take advantage of it – but since it’s hitting the XSA cache this time, it’s less of a concern.

If you change the underlying data in the XSA, the BI Server does pick this up and repopulates the XSA Cache.

The XSA cache entry itself is 192Mb in size – generated from a 55Mb upload file. The difference will be down to data types and storage methods etc. However, that it is larger in the XSA Cache (database) than held natively (flat file) doesn’t really matter, particularly if the data is being aggregated and/or filtered, since the performance benefit of pushing this work to the database will outweigh the overhead of storage space. Consider this example, where I run an analysis pulling back 44 columns (of the 78 in the spreadsheet) and hit the XSA cache: it runs in just over a second, and transfers a total of 5.3Mb from the database (the data is repeated, so rolls up):

Rows 1000, bytes 5576000 retrieved from database
Rows returned to Client 1000

If I disable the XSA cache and run the same query, we see this:

Rows 144000, bytes 801792000 Retrieved from database
Physical query response time 22.086 (seconds)
Rows returned to Client 1000

That’s 764Mb being sent back for the BI Server to process, which it does by dumping a whole load to disk in temporary work files:

$  ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp
[...]]
-rwxrwx--- 1 oracle oinstall 10726190 2016-06-01 21:04 nQS_AG_29733_261_ebd70002_75835908.TMP
-rwxrwx--- 1 oracle oinstall   153388 2016-06-01 21:04 nQS_AG_29733_262_ebd70002_75835908.TMP
-rw------- 1 oracle oinstall 24192000 2016-06-01 21:04 nQS_AG_29733_266_ebd70002_75862509.TMP
-rw------- 1 oracle oinstall  4195609 2016-06-01 21:04 nQS_EX_29733_264_ebd70002_75861716.TMP
-rw------- 1 oracle oinstall 21430272 2016-06-01 21:04 nQS_EX_29733_265_ebd70002_75861739.TMP

As a reminder – this isn’t “Bad”, it’s just not optimal (response time of 50 seconds vs 1 second), and if you scale that kind of behaviour by many users with many datasets, things could definitely get hairy for all users of the system. Hence – use the XSA Cache.

As a final point, with the XSA Cache being in the database, the standard range of performance optimisations are open to us – indexing being the obvious one. No indexes are built against the XSA Cache table by default, which is fair enough since OBIEE has no idea what the key columns on the data are, and the point of mashups is less to model and optimise the data and more to just get it up there in front of the user. So you could index the table if you knew the key columns that were going to be filtered against, or you could even put it into memory (assuming you’ve licensed the option).
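
For instance, if you knew users would commonly filter on first name, something like this (a sketch: the index name is mine, and the table and hashed column names are the ones generated in my environment above):

CREATE INDEX xsa_first_name_ix
  ON BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 (first_name2360035083);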

The MoS document referenced above also includes further performance recommendations for XSA, including the use of RAM Disk for XSA cache metadata files, as well as the managed server temp folder.

Summary

External Subject Areas are great functionality, but be aware of the performance implications of not being able to push down common operations such as filtering and aggregation. Set up XSA Caching if you are going to be using XSA properly.

If you’re interested in the direction of XSA and the associated Data Set Service, this slide deck from Oracle’s Socs Cappas provides some interesting reading. Uploading Excel files into OBIEE looks like just the beginning of what the Data Set Service is going to enable!

The post OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

ORA-00918 column ambiguously defined

VitalSoftTech - Wed, 2016-06-01 17:49
What is the cause of the "ORA-00918 column ambiguously defined" error? How do I resolve this?
Categories: DBA Blogs

New Blog: E-Business Suite and Oracle Cloud

Steven Chan - Wed, 2016-06-01 17:06

Our team has just launched a new blog that will cover all aspects of running E-Business Suite on Oracle Cloud, including what you can do right now, and what you can plan for as our offerings evolve:

Cliff Godwin pictureThis new blog starts with an article from Cliff Godwin, Senior Vice President, Oracle E-Business Suite Development, that describes how this new blog will help E-Business Suite customers find a practical path to the cloud.

This new blog provides an excellent way of sharing your requirements and feedback with our teams focusing on this rapidly-evolving area.  I encourage everyone to monitor this blog for the latest updates on running E-Business Suite on the Oracle Cloud.  You can subscribe by email, too!

Categories: APPS Blogs

BOF: A Sample Application For Testing Oracle Security

Pete Finnigan - Wed, 2016-06-01 17:05

In my Oracle security training classes I use a couple of sample applications for various demonstrations. I teach people how to perform security audits of Oracle databases, secure coding in PL/SQL, designing audit trail solutions and locking down Oracle. We....[Read More]

Posted by Pete On 10/03/16 At 11:07 AM

Categories: Security Blogs

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Rittman Mead Consulting - Wed, 2016-06-01 14:06
OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

One of the big changes in OBIEE 12c for end users is the ability to upload their own data sets and start analysing them directly, without needing to go through the traditional data provisioning and modelling process and associated leadtimes. The implementation of this is one of the big architectural changes of OBIEE 12c, introducing the concept of the Extended Subject Areas (XSA), and the Data Set Service (DSS).

In this article we'll see some of how XSA and DSS work behind the scenes, providing an important insight for troubleshooting and performance analysis of this functionality.

What is an XSA?

An Extended Subject Area (XSA) is made up of a dataset, and associated XML data model. It can be used standalone, or "mashed up" in conjunction with a "traditional" subject area on a common field

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

How is an XSA Created?

At the moment the following methods are available:

  1. "Add XSA" in Visual Analzyer, to upload an Excel (XLSX) document OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

  2. CREATE DATASET logical SQL statement, that can be run through any interface to the BI Server, including 'Issue Raw SQL', nqcmd, JDBC calls, and so on OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

  3. Add Data Source in Answers. Whilst this option shouldn't actually be present according to a this doc, it will be for any users of 12.2.1 who have uploaded the SampleAppLite BAR file so I'm including it here for completeness. OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Under the covers, these all use the same REST API calls directly into datasetsvc. Note that these are entirely undocumented, and only for internal OBIEE component use. They are not intended nor supported for direct use.

How does an XSA work?

External Subject Areas (XSA) are managed by the Data Set Service (DSS). This is a java deployment (datasetsvc) running in the Managed Server (bi_server1), providing a RESTful API for the other OBIEE components that use it.

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

The end-user of the data, whether it's Visual Analyzer or the BI Server, send REST web service calls to DSS, storing and querying datasets within it.

Where is the XSA Stored?

By default, the data for XSA is stored on disk in SINGLETONDATADIRECTORY/components/DSS/storage/ssi, e.g. /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi

[oracle@demo ssi]$ ls -lrt /app/oracle/biee/user_projects/domains/bi/bidata/components/DSS/storage/ssi|tail -n5  
-rw-r----- 1 oracle oinstall    8495 2015-12-02 18:01 7e43a80f-dcf6-4b31-b898-68616a68e7c4.dss
-rw-r----- 1 oracle oinstall  593662 2016-05-27 11:00 1beb5e40-a794-4aa9-8c1d-5a1c59888cb4.dss
-rw-r----- 1 oracle oinstall  131262 2016-05-27 11:12 53f59d34-2037-40f0-af21-45ac611f01d3.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:04 a4fc922d-ce0e-479f-97e4-1ddba074f5ac.dss
-rw-r----- 1 oracle oinstall 1014459 2016-05-27 13:06 c93aa2bd-857c-4651-bba2-a4f239115189.dss

They're stored using the format in which they were created, which is XLSX (via VA) or CSV (via CREATE DATASET)

[oracle@demo ssi]$ head 53f59d34-2037-40f0-af21-45ac611f01d3.dss  
"7 Megapixel Digital Camera","2010 Week 27",44761.88
"MicroPod 60Gb","2010 Week 27",36460.0
"MP3 Speakers System","2010 Week 27",36988.86
"MPEG4 Camcorder","2010 Week 28",32409.78
"CompCell RX3","2010 Week 28",33005.91

There's a set of DSS-related tables installed in the RCU schema BIPLATFORM, which hold information including the XML data model for the XSA, along with metadata such as the user that uploaded the file, when they uploaded, and then name of the file on disk:

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

How Can the Data Set Service be Configured?

The configuration file, with plenty of inline comments, is at ORACLEHOME/bi/endpointmanager/jeemap/dss/DSSREST_SERVICE.properties. From here you con update settings for the data set service including upload limits as detailed here.

XSA Performance

Since XSA are based on flat files stored in disk, we need to be very careful in their use. Whilst a database may hold billions of rows in a table with with appropriate indexing and partitioning be able to provide sub-second responses, a flat file can quickly become a serious performance bottleneck. Bear in mind that a flat file is just a bunch of data plopped on disk - there is no concept of indices, blocks, partitions -- all the good stuff that makes databases able to do responsive ad-hoc querying on selections of data.

If you've got a 100MB Excel file with thousands of cells, and want to report on just a few of them, you might find it laggy - because whether you want to report on them on or not, at some point OBIEE is going to have to read all of them regardless. We can see how OBIEE is handling XSA under the covers by examining the query log. This used to be called nqquery.log in OBIEE 11g (and before), and in OBIEE 12c has been renamed obis1-query.log.

In this example here I'm using an Excel worksheet with 140,000 rows and 78 columns. Total filesize of the source XLSX on disk is ~55Mb.

First up, I'll build a query in Answers with a couple of the columns:

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

The logical query uses the new XSA syntax:

SELECT  
   0 s_0,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."first_name" s_1,
   XSA('prodney'.'MOCK_DATA_bigger_55Mb')."Columns"."foo" s_2
FROM XSA('prodney'.'MOCK_DATA_bigger_55Mb')  
ORDER BY 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY
The query log shows
Rows 144000, bytes 13824000 retrieved from database query
Rows returned to Client 200
So of the 55MB of data, we're pulling all the rows (144,000) back to the BI Server for it to then perform the aggregation on it, resulting in the 200 rows returned to the client (Presentation Services). Note though that the byte count is lower (13Mb) than the total size of the file (55Mb). As well as aggregation, filtering on XSA data also gets done by the BI Server. Consider this example here, where we add a predicate: OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service In the query log we can see that all the data has to come back from DSS to the BI Server, in order for it to filter it:
Rows 144000, bytes 23040000 retrieved from database  
Physical query response time 24.195 (seconds),  
Rows returned to Client 0
Note the time taken by DSS -- nearly 25 seconds. Compare this later on to when we see the XSA data served from a database, via the XSA Cache. In terms of BI Server (not XSA) caching, the query log shows that a cache entry was written for the above request:
Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736113_56359_0.TBL'
If I refresh the query in Answers, the data is fetched anew (per this changed behaviour in OBIEE 12c), and the cache repopulated. If I clear the Presentation Services cache and re-open the analysis, I get the results from the BI Server cache, and it doesn't have to refetch the data from the Data Set Service. Since the cache has two columns in, an attribute and a measure, I wondered if running a query with just the fact rolled up might hit the cache (since it has all the data there that it needs) OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service Unfortunately it didn't, and to return a single row of data required BI Server to fetch all the rows again - although looking at the byte count it appears it does prune the columns required since it's now just over 2Mb of data returned this time:
Rows 144000, bytes 2304000 retrieved from database
Rows returned to Client 1
Interestingly if I build an analysis with several more of the columns from the file (in this example, ten of a total of 78), the data returned from the DSS to BI Server (167Mb) is greater than that of the original file (55Mb). OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service
Rows 144000, bytes 175104000
Rows returned to Client 1000
And this data coming back from the DSS to the BI Server has to go somewhere - and if it's big enough it'll overflow to disk, as we can see when I run the above:
  
$ ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp
[...]
-rwxrwx--- 1 oracle oinstall 2910404 2016-06-01 14:08 nQS_AG_22345_7503_7c9c000a_50906091.TMP
-rwxrwx--- 1 oracle oinstall   43476 2016-06-01 14:08 nQS_AG_22345_7504_7c9c000a_50906091.TMP
-rw------- 1 oracle oinstall 6912000 2016-06-01 14:08 nQS_AG_22345_7508_7c9c000a_50921949.TMP
-rw------- 1 oracle oinstall  631375 2016-06-01 14:08 nQS_EX_22345_7506_7c9c000a_50921652.TMP
-rw------- 1 oracle oinstall 3670016 2016-06-01 14:08 nQS_EX_22345_7507_7c9c000a_50921673.TMP
[...]

You can read more about BI Server's use of temporary files and the impact that it can have on system performance and particularly I/O bandwidth in this OTN article here.

So - as the expression goes - "buyer beware". XSA is an excellent feature, but used in its default configuration with files stored on disk it has the potential to wreak havoc if abused.

XSA Caching

If you're planning to use XSA seriously, you should set up the database-based XSA Cache. This is described in detail in the PDF document attached to My Oracle Support note OBIEE 12c: How To Configure The External Subject Area (XSA) Cache For Data Blending| Mashup And Performance (Doc ID 2087801.1).

In a proper implementation you would follow in full the document, including provisioning a dedicated schema and tablespace for holding the data (to make it easier to manage and segregate from other data), but here I'm just going to use the existing RCU schema (BIPLATFORM), along with the Physical mapping already in the RPD (10 - System DB (ORCL)):

In NQSConfig.INI, under the XSA_CACHE section, I set:

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "10 - System DB (ORCL)"."Catalog"."dbo";
CONNECTION_POOL = "10 - System DB (ORCL)"."UT Connection Pool";
And restart the BI Server:
  
/app/oracle/biee/user_projects/domains/bi/bitools/bin/stop.sh -i obis1 && /app/oracle/biee/user_projects/domains/bi/bitools/bin/start.sh -i obis1

Per the document, note that in the BI Server log there's an entry indicating that the cache has been successfully started:

[101001] External Subject Area cache is started successfully using configuration from the repository with the logical name ssi.
[101017] External Subject Area cache has been initialized. Total number of entries: 0 Used space: 0 bytes Maximum space: 107374182400 bytes Remaining space: 107374182400 bytes. Cache table name prefix is XC2875559987.
Now when I re-run the test XSA analysis from above, returning three columns, the BI Server goes off and populates the XSA cache table:
-- Sending query to database named 10 - System DB (ORCL) (id: <<79879>> XSACache Create table Gateway), connection pool named UT Connection Pool, logical request hash b4de812e, physical request hash 5847f2ef:  
CREATE TABLE dbo.XC2875559987_ZPRODNE1926129021 ( id3209243024 DOUBLE PRECISION, first_n[..]  

Or rather, it doesn't, because PHYSICALSCHEMA seems to want the literal physical schema, rather than the logical physical one (?!) that the USAGETRACKING configuration stanza is happy with in referencing the table.

Properties: description=<<79879>> XSACache Create table Exchange; producerID=0x1561aff8; requestID=0xfffe0034; sessionID=0xfffe0000; userName=prodney;  
[nQSError: 17001] Oracle Error code: 1918, message: ORA-01918: user 'DBO' does not exist

I'm trying to piggyback on SA511's existing configruation, which uses catalog.schema notation:

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Instead of the more conventional approach to have the actual physical schema (often used in conjunction with 'Require fully qualified table names' in the connection pool):

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

So now I'll do it properly, and create a database and schema for the XSA cache - I'm still going to use the BIPLATFORM schema though...

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

Updated NQSConfig.INI:

[ XSA_CACHE ]

ENABLE = YES;

# The schema and connection pool where the XSA data will be cached.
PHYSICAL_SCHEMA = "XSA Cache"."BIEE_BIPLATFORM";
CONNECTION_POOL = "XSA Cache"."XSA CP";
After refreshing the analysis again, there's a successful creation of the XSA cache table:
-- Sending query to database named XSA Cache (id: <<65685>> XSACache Create table Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash ccc0a410: [[  
CREATE TABLE BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 ( id3209243024 DOUBLE PRECISION, first_name2360035083 VARCHAR2(17 CHAR), [...]  

as well as a stats gather:

-- Sending query to database named XSA Cache (id: <<65685>> XSACache Collect statistics Gateway), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d73151bb:  
BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BIEE_BIPLATFORM', tabname => 'XC2875559987_ZPRODNE1645894381' , estimate_percent => 5 , method_opt => 'FOR ALL COLUMNS SIZE AUTO' ); END;  

Although I do note that it is used a fixed estimatepercent instead of the recommended AUTOSAMPLE_SIZE. The table itself is created with a fixed prefix (as specified in the obis1-diagnostic.log at initialisation), and holds a full copy of the XSA (not just the columns in the query that triggered the cache creation):

OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service

With the dataset cached, the query is then run and the query log shows a XSA cache hit

External Subject Area cache hit for 'prodney'.'MOCK_DATA_bigger_55Mb'/Columns :
Cache entry shared_cache_key = 'prodney'.'MOCK_DATA_bigger_55Mb',
table name = BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357,
row count = 144000,
entry size = 201326592 bytes,
creation time = 2016-06-01 20:14:26.829,
creation elapsed time = 49779 ms,
descriptor ID = /app/oracle/biee/user_projects/domains/bi/servers/obis1/xsacache/NQSXSA_BIEE_BIPLATFORM.XC2875559987_ZPRODNE2128899357_2.CACHE
with the resulting physical query fired at the XSA cache table (replacing what would have gone against the DSS web service):
-- Sending query to database named XSA Cache (id: <<65357>>), connection pool named XSA CP, logical request hash 9a548c60, physical request hash d3ed281d: [[  
WITH  
SAWITH0 AS (select T1000001.first_name2360035083 as c1,  
     T1000001.last_name3826278858 as c2,
     sum(T1000001.foo2363149668) as c3
from  
     BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381 T1000001
group by T1000001.first_name2360035083, T1000001.last_name3826278858)  
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select 0 as c1,  
     D102.c1 as c2,
     D102.c2 as c3,
     D102.c3 as c4
from  
     SAWITH0 D102
order by c2, c3 ) D1 where rownum <= 5000001  

It's important to point out the difference of what's happening here: the aggregation has been pushed down to the database, meaning that the BI Server doesn't have to. In performance terms, this is a Very Good Thing usually.

Rows 988, bytes 165984 retrieved from database query
Rows returned to Client 988
Whilst it doesn't seem to be recorded in the query log from what I can see, the data returned from the XSA Cache also gets inserted into the BI Server cache, and if you open an XSA-based analysis that's not in the presentation services cache (a third cache to factor in!) you will get a cache hit on the BI Server cache. As discussed earlier in this article though, if an analysis is built against an XSA for which a BI Server cache entry exists that with manipulation could service it (eg pruning columns or rolling up), it doesn't appear to take advantage of it - but since it's hitting the XSA cache this time, it's less of a concern. If you change the underlying data in the XSA OBIEE 12c – Extended Subject Areas (XSA) and the Data Set Service The BI Server does pick this up and repopulates the XSA Cache. The XSA cache entry itself is 192Mb in size - generated from a 55Mb upload file. The difference will be down to data types and storage methods etc. However, that it is larger in the XSA Cache (database) than held natively (flat file) doesn't really matter, particularly if the data is being aggregated and/or filtered, since the performance benefit of pushing this work to the database will outweigh the overhead of storage space. Consider this example here, where I run an analysis pulling back 44 columns (of the 78 in the spreadsheet) and hit the XSA cache, it runs in just over a second, and transfers from the database a total of 5.3Mb (the data is repeated, so rolls up):
Rows 1000, bytes 5576000 retrieved from database
Rows returned to Client 1000
If I disable the XSA cache and run the same query, we see this:
Rows 144000, bytes 801792000 Retrieved from database
Physical query response time 22.086 (seconds)
Rows returned to Client 1000
That's 764Mb being sent back for the BI Server to process, which it does by dumping a whole load to disk in temporary work files:
$  ls -l /app/oracle/biee/user_projects/domains/bi/servers/obis1/tmp/obis_temp  
[...]
-rwxrwx--- 1 oracle oinstall 10726190 2016-06-01 21:04 nQS_AG_29733_261_ebd70002_75835908.TMP
-rwxrwx--- 1 oracle oinstall   153388 2016-06-01 21:04 nQS_AG_29733_262_ebd70002_75835908.TMP
-rw------- 1 oracle oinstall 24192000 2016-06-01 21:04 nQS_AG_29733_266_ebd70002_75862509.TMP
-rw------- 1 oracle oinstall  4195609 2016-06-01 21:04 nQS_EX_29733_264_ebd70002_75861716.TMP
-rw------- 1 oracle oinstall 21430272 2016-06-01 21:04 nQS_EX_29733_265_ebd70002_75861739.TMP

As a reminder - this isn't "Bad", it's just not optimal (response time of 50 seconds vs 1 second), and if you scale that kind of behaviour by many users with many datasets, things could definitely get hairy for all users of the system. Hence - use the XSA Cache.

As a final point, with the XSA Cache being in the database, the standard range of performance optimisations is open to us - indexing being the obvious one. No indexes are built against the XSA Cache table by default, which is fair enough since OBIEE has no idea what the key columns on the data are, and the point of mashups is less to model and optimise the data than to just get it up there in front of the user. So you could index the table if you knew the key columns that were going to be filtered against, or you could even put it into memory (assuming you've licensed the option).
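Purely as an illustration - assuming you had established that first_name and last_name were the columns commonly filtered on - an index against the cache table from the physical query above might look like this (the index name here is arbitrary). Bear in mind that the cache table is dropped and repopulated when the underlying XSA data changes, so any index would need re-creating afterwards:

CREATE INDEX xsa_cache_ix1
  ON BIEE_BIPLATFORM.XC2875559987_ZPRODNE1645894381
     (first_name2360035083, last_name3826278858);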

The MoS document referenced above also includes further performance recommendations for XSA, including the use of RAM Disk for the XSA cache metadata files, as well as for the managed server temp folder.

Summary

External Subject Areas are great functionality, but be aware of the performance implications of not being able to push down common operations such as filtering and aggregation. Set up XSA Caching if you are going to be using XSA properly.

If you're interested in the direction of XSA and the associated Data Set Service, this slide deck from Oracle's Socs Cappas provides some interesting reading. Uploading Excel files into OBIEE looks like just the beginning of what the Data Set Service is going to enable!

Categories: BI & Warehousing

What Are Oracle SQL Aggregate Functions?

Complete IT Professional - Wed, 2016-06-01 06:00
Oracle SQL aggregate functions are very useful, and are some of the most commonly used functions. Learn what Oracle SQL aggregate functions are and what they do in this article. What Are Oracle SQL Aggregate Functions? Aggregate functions compute a single result value from multiple rows of data. […]
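As a quick illustration of my own (not from the excerpt above, and assuming the standard HR sample schema), each aggregate collapses many rows into a single value per group:

SELECT department_id,
       COUNT(*)    AS headcount,
       MIN(salary) AS lowest_salary,
       AVG(salary) AS average_salary
FROM   employees
GROUP BY department_id;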
Categories: Development

Oracle Database 12c SQL by Jason Price

Pat Shuff - Wed, 2016-06-01 02:07
Given that we have a database in Amazon RDS and Oracle PaaS, we can work through some books from Oracle Press and see if anything breaks. Let's start with something simple: Oracle Database 12c SQL by Jason Price, published by Oracle Press. This is an introductory book that covers the basic data types and SQL commands, with an introduction to XML at the end of the book. The material should be relatively straightforward and not have any issues or problems executing the sample code. The sample code can be downloaded from Oracle Press Books by searching for the book title and downloading the Chapter 1 sample code. This gives us a way to load a table with data and execute code against it. We will use SQL Developer to execute the code from d:\workshops\sql books\SQL and see what works and what does not.

To get started, we need to use our Amazon RDS instance and SQL Developer that we installed yesterday. We connect with the user oracle to port 1521 after opening up the port to anyone. From this connection we can execute sql code in the main part of the SQL Developer window and load the sample code to execute. We can test the connection with the following command

select sysdate from dual;

We can then follow along with the book: create the user store, load the schema into the database, and work through the examples.
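For instance, the book's early chapters have you create the store schema along these lines. This is a hedged sketch - the exact statements ship with the Chapter 1 sample code, so treat the password and grants as illustrative rather than verbatim:

CREATE USER store IDENTIFIED BY store_password;
GRANT CONNECT, RESOURCE TO store;
ALTER USER store QUOTA UNLIMITED ON users;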

Everything worked on Amazon RDS. We were able to create users, grant them audit functionality, execute XML code, and generally do everything listed in the book. The audit did not report back as expected, but this could have been user error. According to the Amazon RDS documentation, auditing should work; we might not have had something set properly to report back the right information.

In summary, Amazon RDS is a good platform to learn how to program 12c SQL and the various user-level commands. If you go through a book like Oracle Database 12c SQL, everything should work. This or the Oracle PaaS equivalent makes an excellent sandbox that you can use for a day or two and then turn off, minimizing your cost of experimenting.

Paris Province Oracle Meetup

Tim Hall - Tue, 2016-05-31 23:50

The reason for me being in Paris was to speak at the Paris Province Oracle Meetup. Breaking my journey to the Netherlands with a quick trip to Paris was a really easy way to connect with more people.

The Paris meetup is very similar to those found in other cities around the world, including Oracle Midlands in my home town. We all gathered at about 19:00 in the AVNET office in Paris and I did two talks with a short break between them. The first talk was about pluggable databases and the second one was about running Oracle databases in the cloud.

I like these local meetups. They feel a lot less formal and more personal than some (but not all) conferences. It just feels more natural to me. I really enjoyed doing the talks and the crowd seemed to respond well to them, which was nice. I’ll definitely be back again, if they will have me. Maybe next time I will get to do some sightseeing in Paris too.

Birmingham to Paris

Tim Hall - Tue, 2016-05-31 23:15

The day started at a rather civilised time of 06:00, which makes a change. I usually book early flights, forgetting I have to get to the airport a couple of hours early, then regret the decision later.

Storing Date Values As Numbers (The Numbers)

Richard Foote - Tue, 2016-05-31 19:45
In my last couple of posts, I’ve been discussing how storing date data in a character based column is a really really bad idea. In a follow-up question, I was asked if storing dates in NUMBER format was a better option. The answer is that it’s probably an improvement from storing dates as strings but it’s […]
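To make the comparison concrete (my own illustration, since the excerpt is truncated here): storing a date as a number such as 20160531 keeps it sortable, but discards the date semantics that a real DATE column gives both the optimizer and your SQL:

CREATE TABLE orders_num  (order_date NUMBER(8));  -- stores e.g. 20160531
CREATE TABLE orders_date (order_date DATE);       -- the preferred approach
INSERT INTO orders_num  VALUES (20160531);
INSERT INTO orders_date VALUES (DATE '2016-05-31');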
Categories: DBA Blogs

Announcing the Dodeca Spreadsheet Management System, Version 7 and the Dodeca Excel Add-In for Essbase

Tim Tow - Tue, 2016-05-31 14:54
After 18 months of hard work, Applied OLAP is proud to announce the general availability of the Dodeca Spreadsheet Management System, version 7, and the all-new Dodeca Excel Add-In for Essbase.

The Dodeca Spreadsheet Management System provides customers the ability to automate spreadsheet functionality, reducing the risk of spreadsheet errors while increasing productivity.  It offers unprecedented ease-of-use for business users who rely on spreadsheets for planning, budgeting, forecasting, reporting, and analysis tasks.  It also provides a robust, programmable development environment enabling companies to create spreadsheet applications tailored to their specific needs.

The new Dodeca Excel Add-In for Essbase was created as a drop-in replacement for the classic Essbase add-in and is the world’s only Excel add-in focused exclusively on Essbase.  The new add-in supports the most common actions used by Essbase traditionalists, supports the corresponding VBA functions, and includes a built-in Excel ribbon.  Early adopters have also been impressed by the speed of retrieving data, commenting that they found the Dodeca Excel Add-In as fast as, or even faster than, the classic Excel Add-In for Essbase.  It is supported for Excel 2010, 2013, and 2016, and for Essbase 9.3.1 and higher.

Dodeca 7 includes new features and functionality for security, selectors, logging, and enhanced workbook scripting.

The enhanced security features add the ability to more easily manage users, roles, and permissions to access a Dodeca application.  The new security features include:
  • User Management – You can now track, monitor, and control user access to a Dodeca application and more easily monitor your Dodeca user base.  This feature enables customers to control individual user access to a specific application, enable users for admin access, and manage user mapping to roles, while also tracking metrics such as the first, last, and count of user logins.  This feature also logs metrics on the user's system to enable Dodeca administrators to more easily support their users.

    [Screenshot: an example of the metrics stored for each user record.]

  • User Roles – In addition to provisioning roles via Essbase, Microsoft Active Directory, or LDAP groups using Dodeca authentication services, you can now create your own groups directly in Dodeca and map them to users.  In addition, these new roles can be configured to be additive to the roles provided by other authentication services.  
  • Application Session Timeout – You can now prevent users from keeping a Dodeca session open indefinitely by specifying an inactivity timeout or by specifying a designated shutdown time.

The new logging features provide the ability for customers to both easily track what their users are running in the system and assist our support team if and when support is needed.  The new logging features include:
  • View Usage Logging – View usage is now automatically logged in the server.  The logs include not only identifying information for the view and the user, but also include performance information, error information, and tokens used in the view generation.

  • Client-side Request and Response XML Logging – The XML traffic traveling between the client and the server may now be logged to a directory on the client machine.  This logging expands on the Server-side Request and Response XML Logging to make it easier to gather the XML traffic for a single user.  In turn, the XML captured can be used by our support team to easily replicate transactions for our developers if and when necessary.

The selector enhancements include improvements to both view selectors and member selectors.  The improvements include:
  • View Selector Relational Hierarchy Generation – You may now populate the entire View Hierarchy, or alternatively, populate one or more branches, based on the results of a relational query.

  • View Selector Hierarchy Item Access Filters – Previously, you could control the View Hierarchies available to a given user.  This new filter provides the ability to control which view hierarchy items are presented in the view selector based on the current user’s roles.

  • Relational Treeview Point-of-View Selector List – You may now configure a hierarchy of point-of-view items based on the results of a relational query.

  • Enhanced Essbase Treeview Point-of-View Selector List Expansion and Selection Filtering – You can now filter Essbase member content and selectable status based on one or more specified filters including generation number, level number, member name, alias, or shared status.
View editing enhancements make it even easier to create and modify Views in Dodeca:
  • Enhanced SQLPassthroughDataSet Query Editing – You can now define token values to use for testing tokenized SQL queries in the Test Data Set utility.
  • Improved Token Editor – You can now view and edit tokens and their values in an improved grid layout.

Workbook scripting functionality adds more flexibility to Dodeca via 108 events that provide the opportunity for customers to extend Dodeca, 116 configurable methods (actions that can be taken in response to the events), and 138 functions that provide extended information to the workbook script.

The new workbook script functionality includes:
  • New Events
    • BeforeBuildExecute - Allows a workbook script to cancel the build before the view is covered.
    • BeforeRefreshExecute - Allows a workbook script to cancel the refresh before the view is covered.
    • Shown - Raised when the view is initially shown.  The event is raised before the framework applies the view’s AutoBuildOnOpen property, which allows a workbook script to set the property dynamically.
  • New Methods
    • ExportToExcel – Provides the ability to export view and point-of-view information in an exported Excel file.  This information is used to regenerate the view upon Excel file import which enables off-line data entry in Excel with subsequent database updates in Dodeca.
    • SetSelectorConfiguration - Provides the ability to add or remove a point-of-view selector to/from a view dynamically.
  • Enhanced Methods
    • CallWebService – Added a new RESTRequest overload, which allows web service calls to be made to RESTful web services.
    • SendEmail - Added a new ServletSMTP overload, which sends email from the Dodeca server rather than from the client.  This is useful in circumstances in which SMTP mail must come from an approved IP address.
    • SetEntry – Added a new FormulaArray overload, which allows Excel array formulas to be entered programmatically.
    • SetFill - Added a new Clear overload, which clears the fill color and pattern from the specified range.
  • New Functions
    • ColumnWidth - Returns the column width based on the active cell or a given cell.
    • RowHeight - Returns the row height based on the active cell or a given cell.
    • DataPointHasCellNote - Returns a boolean indicating if the active or specified data cell has Essbase LRO cell notes associated with it.
    • IsInCharacterRange - Returns a boolean indicating whether the specified string is limited to the specified character range.  This function can be used to detect multi-byte characters in a string.
  • Enhanced Functions
    • SheetCount - Added an optional IncludeHiddenSheets argument, which controls whether the returned count includes both visible and hidden sheets or only visible sheets.
For more information, we recommend you download and read the latest Dodeca release notes from the registered section of our website.  As always, you may also reach out to us via email or call us at 256.885.4371.

Categories: BI & Warehousing
