Feed aggregator

counting the appearance of a certain word in a document

Tom Kyte - Tue, 2017-03-07 01:06
Hello, I am trying to count the appearance of a certain word into a document (stored as BFILE in database). Table definition looks like this: CREATE TABLE MY_DOC (ID NUMBER PRIMARY KEY, FILE_NAME VARCHAR2(255), UPLOAD_DATE VARCH...
Categories: DBA Blogs

Modify non partitioned table to partitioned online

Tom Kyte - Tue, 2017-03-07 01:06
Hi Tom, I am trying to convert a non partitioned table into interval partitioned table online.I created a simple test table and ran the below script but I get ORA-14006 invalid partition name error. My conjecture would be this is not allowed for i...
Categories: DBA Blogs

Create Partition on load table 11g

Tom Kyte - Tue, 2017-03-07 01:06
Hi Connor/Chris, Please have a look at below table structure This is load table and there is no uniqueness related to any column, hence we cannot go for Primary key for this. <code> tb_card_details --------------- card_id start_date end...
Categories: DBA Blogs

Data guard

Tom Kyte - Tue, 2017-03-07 01:06
Hi AskTOM team, -- On standby database I am using real time redo apply. My standby database is started using following commands. STARTUP MOUNT; ALTER DATABASE OPEN READ ONLY; ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFI...
Categories: DBA Blogs

Re: parameterized views revisited

Tom Kyte - Tue, 2017-03-07 01:06
Hi Connor, Thank you for your response to my question. I have a follow up question about the consolidated view. I did the following suggestion: <code> create or replace 2 view V as 3 select 'T1' tname, t1.* from t1 4 union all 5...
Categories: DBA Blogs

Exadata snapshot access limitation

Tom Kyte - Tue, 2017-03-07 01:06
we have Exadata and we've started to implement snapshots. the business case required: we create a snapshots from the same test master, and grant access on each two snapshot for a specific group of users while preventing them to access other snapsho...
Categories: DBA Blogs

Getting started with Oracle PaaS Service Manager Command Line Interface (PSM)

Amis Blog - Mon, 2017-03-06 23:41

Oracle PaaS Service Manager (PSM) provides a command line interface (CLI) with which you can manage the lifecycle of various services in Oracle Public Cloud. This opens the door to scripting (recurring) tasks – from (re)deployment of applications on ACCS to provisioning new environments. PSM makes performing admin operations on the Oracle Public Cloud a lot easier and more efficient, compared to using the GUI.

Note that the CLI is a thin wrapper over PaaS REST APIs that invokes these APIs to support common PaaS features.
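Those REST APIs can also be called directly. As a sketch, using the same endpoint that appears in step 2 below (the credentials and identity domain here are placeholders you would replace with your own), a plain GET should return a JSON description of the service instances in the identity domain:

curl -u "someuser@example.com:password" -H "X-ID-TENANT-NAME: cloud17" -H "Accept: application/json" https://psm.us.oraclecloud.com/paas/api/v1.1/instancemgmt/cloud17/services/cs/instances

PSM simply wraps calls like this one in friendlier commands.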

The steps for installing and configuring PSM are simple enough – and take about 6 minutes. I will briefly walk you through them. They are also documented just fine.  Before I continue, I want to thank Abhijit Ramchandra Jere of Oracle for graciously helping me out with PSM.

1. Install Python (3.3+) and cURL

PSM is a Python based tool. To set it up and run it, you need to have Python set up on your machine.
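A quick sanity check of the prerequisites might look like this (the exact version strings will of course differ per machine):

python --version   # should report 3.3 or higher
pip --version
curl --version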

2. Download PSM

The psmcli.zip can be downloaded from Cloud UI (as described here) or it can be fetched through cURL from the REST API (as described here):

curl -I -X GET -u "lucas.jellema:password" -H "X-ID-TENANT-NAME: cloud17" -H "Accept: application/json" https://psm.us.oraclecloud.com/paas/api/v1.1/instancemgmt/cloud17/services/cs/instances

3. Install PSM as a Python Package

With a simple statement, PSM is installed from the downloaded zip file (see here)

pip install -U psmcli.zip


This installs PSM into the Python Scripts directory.

Verify the result using

pip show psmcli


 

4. Configure PSM for the identity domain and the connection to the cloud

Run the setup for PSM and configure it for use with your identity domain (see docs). Note: this step assumes that the Python scripts directory that contains PSM is in the PATH environment variable.
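If it is not, you can add it for the current session first – a sketch only, since the actual directory containing the psm script depends on how and where pip installed the package:

export PATH=$PATH:$HOME/.local/bin   # hypothetical Linux location; use the Scripts/bin directory where pip placed the psm script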

psm setup


I am not sure if and how you can use PSM on your machine for multiple identity domains or user accounts. I have access to several Oracle Public Cloud identity domains – even in different data centers. I have now set up PSM for one of them. Whether I can do a setup for a second identity domain and then somehow switch between the two is not yet clear to me.
EDIT: switching to a different identity domain is simply done by running psm setup again. I need to provide the identity domain, region and credentials to make the switch. Note: psm remembers the setup across restarts of the operating system.

5. Start using PSM for inspecting and manipulating PaaS Services

PSM can be used with many PaaS Services – not yet all – for inspecting their health, stopping and (re)starting, scaling and performing many administrative activities. See docs for all of them.

Some examples:

List all applications on Application Container Cloud:

psm accs apps


List log details for a specific application on ACCS:

psm accs log -n|--app-name name -i|--instance-name name

psm accs log -n Artist-Enricher-API -i web.1

and the list of log files is presented.
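Because these are plain shell commands, they are easy to script. A minimal sketch that collects the log listing for a couple of applications (the second application name is made up – substitute your own):

#!/bin/bash
# list the log files for a set of ACCS applications, one after the other
for app in Artist-Enricher-API My-Other-App; do
  echo "=== $app ==="
  psm accs log -n "$app" -i web.1
done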


 

6. Update PSM

To get rid of the slightly annoying message about there being a later version of PSM available – and to get hold of the latest version – you simply type:

psm update

and wait for maybe 15 seconds.


 

Issues:

I ran into an issue, caused, as it turned out, by having multiple Python versions on my machine. PSM got installed as a Python package under version 3.5, while I was trying to run PSM with Python 3.6 as the first version in my PATH environment variable. Clearly, that failed.

The error I ran into: ModuleNotFoundError: No module named 'opaascli'


The solution: I removed all but one Python version (3.5, because the installation with 3.6 did not go well due to a missing pip) and then installed with that one version.
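An alternative to removing Python versions is to be explicit about which interpreter pip should use – a sketch, assuming the 3.5 interpreter is available on the PATH as python3.5:

python3.5 -m pip install -U psmcli.zip   # install the package explicitly under Python 3.5
python3.5 -m pip show psmcli             # verify which interpreter owns the package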

Resources

Documentation on PSM: http://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

Documentation on Oracle PaaS REST APIs: https://apicatalog.oraclecloud.com/ui/

The post Getting started with Oracle PaaS Service Manager Command Line Interface (PSM) appeared first on AMIS Oracle and Java Blog.

Spring Boot Actuator support added to Pivotal Web Services (PWS) Application Manager

Pas Apicella - Mon, 2017-03-06 17:07
Recently we added "Spring Boot Actuator support" to Pivotal Web Services (PWS) http://run.pivotal.io. If you want to try this out, simply use the demo below, which is all set up to verify how this works.

https://github.com/papicella/SpringBootPCFPas

Once pushed you will see a Spring Boot icon in the Application Manager UI showing the Actuator support as per below.
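For reference, a typical sequence to push the demo could look like this – a sketch only, assuming the repository contains a Maven wrapper and a manifest, and that your cf CLI is already targeted at PWS (check the repository README for the exact steps):

git clone https://github.com/papicella/SpringBootPCFPas
cd SpringBootPCFPas
./mvnw package    # assumes a Maven wrapper in the repo
cf push           # assumes a manifest.yml describing the application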

Categories: Fusion Middleware

Misleading wait event names clarified in V$EVENT_NAME

Yann Neuhaus - Mon, 2017-03-06 14:15

The Oracle wait event names were originally implemented for the Oracle RDBMS developers and are now used by database users to troubleshoot performance issues. The consequence is that the names may be misleading, because they have a meaning from the internal point of view. Here is some clarification about them.

In 12c the clarification is easy because we have a new DISPLAY_NAME column in the V$EVENT_NAME view:


SQL> select wait_class, name, display_name from v$event_name where display_name != name order by 1,2;

WAIT_CLASS      NAME                                  DISPLAY_NAME
--------------- ------------------------------------- ----------------------------------------------
Administrative  concurrent I/O completion             online move datafile IO completion
Administrative  datafile copy range completion        online move datafile copy range completion
Administrative  wait for possible quiesce finish      quiesce database completion
Commit          log file sync                         commit: log file sync
Configuration   log buffer space                      log buffer full - LGWR bottleneck
Idle            LGWR real time apply sync             standby apply advance notification
Other           DFS db file lock                      quiesce for datafile offline
Other           Image redo gen delay                  redo resource management
Other           datafile move cleanup during resize   online move datafile resize cleanup
System I/O      control file sequential read          control file read
System I/O      control file single write             control file write
System I/O      db file parallel write                db list of blocks write
System I/O      log file parallel write               log file redo write
System I/O      log file sequential read              log file multiblock read
System I/O      log file single write                 log file header write
User I/O        db file parallel read                 db list of blocks read
User I/O        db file scattered read                db multiblock read
User I/O        db file sequential read               db single block read
User I/O        db file single write                  db single block write

We have long known the misleading ‘db file sequential read’, which is what we call ‘random reads’ from the storage point of view, and ‘db file scattered read’, which is what we call ‘sequential reads’ from the storage point of view. The DISPLAY_NAME clarifies everything: single block reads vs. multiblock reads.

‘db file parallel read’ is a batch of random reads, used by prefetching for example, which reads multiple but non-contiguous blocks.
‘db file parallel write’ is similar, for DBWR to write a batch of blocks. The DISPLAY_NAME clarifies everything: ‘db list of blocks’.

‘log file parallel write’ is ‘parallel’ only because you can have multiplexed files. DISPLAY_NAME is less misleading with ‘log file redo write’.
The ‘log buffer space’ has a DISPLAY_NAME that is more focused on the cause: ‘log buffer full – LGWR bottleneck’

You can look at the others where DISPLAY_NAME is very clear about the operation: ‘online move’ for some operations on files, ‘commit’ for the well-known log file sync…
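The display names are also handy when reporting wait statistics. For example, a small query (just a sketch) that lists system-wide waits together with their clarified names:

SQL> select s.wait_class, s.event, e.display_name, s.total_waits, s.time_waited_micro
  2  from v$system_event s join v$event_name e on e.name = s.event
  3  where e.display_name != e.name
  4  order by s.time_waited_micro desc;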

Of course they are also described in the Database Reference documentation.

 

The article Misleading wait event names clarified in V$EVENT_NAME appeared first on Blog dbi services.

A Case for Essbase and Oracle Data Visualization

Rittman Mead Consulting - Mon, 2017-03-06 12:03

So it’s the end of the month, or maybe even the end of the quarter. And you’ve found yourself faced, yet again, with the task of pulling whatever data will be needed to produce the usual standardized budgetary and / or finance reports. You’re dreading this as it will probably eat up most of your day just getting the data you need, combing through it to find the metrics you need, and plugging them into your monumentally complicated custom spreadsheet, only to find the numbers are, well, off.

Enter Data Visualization (DV), and its lightweight brother application, Data Visualization Desktop (DVD). My bud at RM, Matt Walding, already did a pretty great post on some of the cursory features of DVD, covering a lot of the important how-tos and what's what. So check that out if you need a bit of a walkthrough. Both of these great tools tout that you can go from zero to analysis pretty darn quick, and from the extensive testing and prodding we've done with both DV and DVD, this claim is accurate. Now how does this help us, however, in the previous scenario? Well, IT processes being what they are in a lot of mid to large size companies, getting the data we need, to do the crunching we need to do, can be quite the monumental task, let alone the correct data. So when we get it, we are going to want a solution that can take us from zero to report, pretty darn quick.


If you’re the one stuck with doing the crunching, and then providing the subsequent results, your solution or workflow probably resembles one of the following:


Scenario 1

Emailed a spreadsheet with a ton of rows. Download the csv/xlsx and then crunch the rows into something that you can force into a super spreadsheet that has a ton of moving pieces just waiting to throw an error.


Scenario 2

You have access to Essbase, which stays pretty fresh, especially as reporting time draws nigh. You connect to Smartview and extract what you need for your report. See scenario 1.


Scenario 3

You have OBIEE that you depend on for data dumps, and then just export whatever you need. See scenario 1.

While there are no doubt variations on these themes, the bigger picture here is that between the time you receive your dataset and the final report, there are likely a few iterations of said final report. Maybe you’re having to make corrections to your Excel templates, perhaps the numbers on your sheet just aren’t jiving. Whatever the case may be, this part of the process is often the one that can be the most demanding of your time, not to mention the most headache inducing. So what’s the point of my schpeal? Well, wouldn’t it be nice to expedite this part of the whole thing? Let’s take a look at how we can do just that with both Data Visualization in OBIEE and Data Visualization Desktop.


Data Visualization

With DV, we can simply access any of our existing OBIEE subject areas to quickly create a basic pivot table. Right away you can see the profound time savings garnered by using DV. What's more, you don't need to feel forced into managing OBIEE on premises, as DV is also part of BICS (Business Intelligence Cloud Service) and DVCS (Data Visualization Cloud Service).

Even if you have Smartview, and can do more or less the same thing, what if you wanted to delegate some of the tedium involved in manually crunching all those rows? You could simply hand off an export to another analyst, and have them plug it right into their own instance of Data Visualization Desktop, which, might I add, comes with your purchase of the DV license. This also, however, leads down the slippery slope into siloing off your department. This approach is essentially doing that; however, kept under the quarantine that is DVD, and as long as you keep the data you are working with consistent (as this blog is touting), you shouldn't be able to do too much damage. The point I'm trying to make is that everything about using DV and DVD as your sort of report crafting and proofing mediums is super-fluid and smooth. The process from source system to report, over and over again, is super-seamless. Even if I didn't have direct access to the data source I needed, and had to rely on emailed data dumps or other sources, I can simply upload that sheet right into DV, assign some data types, and get to work. I can even add dynamic filters to the analysis by simply dragging and dropping a column to the filters area. If you're feeling adventurous, you could also display these tables on a dashboard that perhaps your department looks at to proof them and share in the pleasurable experience that is concocting period-end reporting.


Data Visualization Desktop

Right now, DVD is only out for Windows (with a version for the Mac on the roadmap), which is mostly ok, as most every medium to large size company I have worked with employs Windows as their go-to OS. An analyst can install the program on their desktop machine and be ready to plug away in under 10 minutes. We can take the example spreadsheet above that we dumped out of VA and create our own version of the report right in DVD. One better, we can also blend it with any other source DVD can connect to. This feature, especially, can save lots of time when trying to get your numbers just right for sign off.

Looks Just Like DV

Acts Just Like DV

And hold the phone! There's even a native Essbase connector!

Actually One Better Than DV!

Speaking of connectors, check out the rest of the list, as well as the custom data flow functionality, which allows you to construct and save in-app data transformations to be invoked again, and again, and again...


Summary

Flexibility is the name of the game with DV and DVD. So sure, while it isn't the ideal tool for creating precisely formatted financial statements for SEC submission, it sure beats massaging and munging all that data in Excel. And once we're happy with our numbers, either by sharing our reports on a dashboard, or export, we can go ahead and plug our numbers right into whatever tool we are going to use to produce the final product. We've covered a couple of really good concepts, so let's just do a bit of a wrap up to make everything a bit more cohesive.

  • We can pull data into Data Visualization from OBIEE, from a spreadsheet, or even mash up the two to handle any inconsistencies that may exist at month's/quarter's end. Note that we can also use Smartview to pull directly to Excel. Note that you will be unable to use the OLAP capabilities of Essbase / Essbase in OBIEE, as the data will be presented in DV and DVD as flattened hierarchies.

  • We can share reports in Data Visualization with other analysts or approvers via a dashboard (note that this is not a supported feature, but something that requires only a 'bit' of a hack) or PDF, or hand off our exported data set for further work. This can be to another person using DV or DVD.

  • Data Visualization Desktop can connect directly to our Essbase source or utilize a data dump from DV / OBIEE to do further work and analysis on the data set. We can also connect DVD to OBIEE, as posted here, in order to extract an analysis from the web catalog. This will at least save us the step of dumping / emailing the report.

  • All of this app-to-app compatibility encourages a sustainable, functional, and fluid reporting environment (warning: data silos!), especially for those who are unfamiliar with the ins and outs of Essbase and / or working with OBIEE.

Categories: BI & Warehousing

ODA – 32GB template but got a database with 16GB SGA???

Yann Neuhaus - Mon, 2017-03-06 11:40

I got an interesting question today from a customer who created a database on ODA. He selected the template odb-04, which shows 32GB memory, but got a database with a 16GB SGA… Is it due to the PGA size, a limitation in the system, huge pages usage being exhausted, or even a bug?

In fact, the answer is simpler and funnier. If you look at the menu shown by OAKCLI while creating a database, you get something like this:

 Please select one of the following for Database Class  [1 .. 6]:
 1    => odb-01s  (   1 cores ,     4 GB memory)
 2    =>  odb-01  (   1 cores ,     8 GB memory)
 3    =>  odb-02  (   2 cores ,    16 GB memory)
 4    =>  odb-04  (   4 cores ,    32 GB memory)
 5    =>  odb-06  (   6 cores ,    48 GB memory)
 6    =>  odb-12  (  12 cores ,    96 GB memory)
4
Selected value is : odb-04  (   4 cores ,    32 GB memory)

 

So using the template odb-04 seems to give the newly created database 32GB of memory. However, looking at what OAKCLI really does shows that the reality is a bit different. Following all the files/scripts which are called by OAKCLI at execution time, we come to the following file:

/opt/oracle/oak/lib/oakutilslib/DbSizingValues.pm

 

This script contains the definition of the DBCA template used, including the memory definition:

my $sga_size = $memory * $dbTypes{$dbtypeid}{sga_factor};
my $pga_size = $memory * $dbTypes{$dbtypeid}{pga_factor};

So the memory value is multiplied by a factor depending on the database type. Looking further in the same script, we find both definitions:

my  %dbTemplates  =
    (  1  => { name => 'odb-01s', cpus => 1,  sfactor  => 0.5},
       2  => { name => 'odb-01',  cpus => 1  },
       3  => { name => 'odb-02',  cpus => 2  },
       4  => { name => 'odb-04',  cpus => 4  },
       5  => { name => 'odb-06',  cpus => 6  },
       6  => { name => 'odb-12',  cpus => 12 },
       7  => { name => 'odb-16',  cpus => 16 },
       8  => { name => 'odb-24',  cpus => 24 },
       9  => { name => 'odb-32',  cpus => 32 },
       10 => { name => 'odb-36',  cpus => 36 }
    );

my  %dbTypes  =
    ( 1 => { name => 'OLTP', template_name => 'OAK_oltp.dbt',     sga_factor => 0.5,  pga_factor => 0.25 },
      2 => { name => 'DSS',  template_name => 'OAK_dss.dbt',      sga_factor => 0.25, pga_factor => 0.50 },
      3 => { name => 'In-Memory', template_name => 'OAK_oltp.dbt',sga_factor => 0.25, pga_factor => 0.25, in_mem_factor=>0.25, only12c=> 1}
    );

 

This means that if you create an OLTP database with the odb-04 template, it takes 32GB as the basis and multiplies it by 0.5.
Here we go, we have our 16GB!!
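You can easily verify this on the database that has just been created. Assuming the instance uses sga_target and pga_aggregate_target (which is how these factors are applied), an OLTP database built from the odb-04 template should show 16G (32GB * 0.5) for the SGA and 8G (32GB * 0.25) for the PGA target:

SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target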

In conclusion, the memory value shown by OAKCLI CREATE DATABASE is the base memory used for the calculation and not the one assigned to the SGA. I must admit that this is quite confusing for end users, as this base memory has no significance on its own and is of little use…

To be fully fair, I have to mention that the correct information about the SGA size per template is available in the documentation, Appendix B:

http://docs.oracle.com/cd/E83239_01/doc.121/e83201/database-templates-oracle-database-appliance.htm#CMTAR269

Enjoy!

David

 

The article ODA – 32GB template but got a database with 16GB SGA??? appeared first on Blog dbi services.

Real World OBIEE: Demystification of Variables Pt. 1

Rittman Mead Consulting - Mon, 2017-03-06 10:36

OBIEE has some very powerful features and capabilities when it comes to how we visualize and present our data. There are some features which almost everyone uses (filters, prompts, rename columns etc.) and then some that are less common. In part, I think there is a lot of "noise" that surrounds the explanation of some of these features. Another reason, and probably the most likely, is that a lot of report developers may just not know that they exist. I can't tell you how many times in one of our training courses here at Rittman Mead that someone says "wow, I didn't know I could do that!" or that I get asked, "could you explain how Presentation Variables work?". Now, I am not saying that there is anything wrong with not knowing or being confused when it comes to some of the features in OBIEE. In fact, it's my main purpose in writing this blog series! I want to demystify and show you some of the features and techniques I have used with almost every client I have worked with. Even if you are a seasoned veteran, my goal is to present something in a way that you haven't heard before and spark some ideas in your day-to-day development. So with all of that said, let's dive into part one of the Demystification of Variables.

For the first part of this series, I want to talk about using Bins and Presentation Variables together at a report level and also in dashboard prompts. Before we dive into an example, let me talk a little bit about bins and presentation variables.

Bins

Sometimes we need to group values differently than what we have available from our column selection. Common examples could be when we need to group products differently than product category, when we need to create a group for items on sale or, maybe the most common I've seen, when we need to create custom groups for certain values and then have a group for the "others". Creating custom groups can be accomplished in a few different ways (selection steps, add new group from the Results tab), but the way I'm going to show you involves using the Bins tab under Edit formula within a column. I'll go over this process shortly when we dive into the example.

Presentation Variables

These are one of the most powerful features in Answers. They are also one of the things that trip people up the most. Think of presentation variables like placeholders, and these placeholders are waiting for a value or a group of values to be passed into them. There are really only three parts to the presentation variables we will look at in this example: First, you assign the presentation variable (give it a name) in a prompt to pass the value or group of values selected into the second part. The second part is where the placeholder (presentation variable with the same name) is put in a column formula, column heading, text object etc. The third part is called formatting. Formatting is used when you need to apply a certain format to the values coming into the presentation variable placeholder. There is a massive number of use cases for presentation variables, but I am going to cover only a couple of the ones that I've used the most.

Make sense? If you're still a little confused or hearing about this for the first time, don't worry. We are going to go through an example that I hope will explain the uses of Bins and Presentation Variables in a way that is concise and clear.

Now, lets dive into the example.

The Scenario

In this example, I am going to work through a hypothetical situation where I am the front-end developer for a coffee shop and I've been asked to create a report for an executive dashboard. The report needs to include Gross Revenue, Net Revenue and # of Orders for the current date for six groups of products (Breakfast Foods, Coffee, Coffee Beans, Tea, Tea Bags and Other) and five location groups which contain all the stores located in four states (California, Massachusetts, District of Columbia, New York and Others). The report needs to be in a table view and the person that requested this report also wants the ability to switch between the two columns for product groups and location groups. They also want a dashboard prompt for selecting from one or more groups for each column.

The Example

Let's start by tackling the custom groups. Since both groups are going to be created from the lowest level of detail for both products and locations, we need to bring in both Product Name and Shop Name columns and also our measures.

Note: For all of the examples in this blog series I am using OBIEE 12.2.1.2

I can click on column properties > Edit Formula then click on the Bins tab to start creating my groups.

In the Bins tab, I can click on Add Bin to start creating my first group.

Once I click on Add Bin, I am presented with a New Filter window. The process of creating a new group (or bin) is exactly the same as creating a column filter. I can type the values individually, separated by ; or I can click on the search icon to add values manually. In this case, I am going to add them manually.

Once in the Select Values window, I am going to select all of the products that belong in the first group (Breakfast Foods).

Note: In this scenario, the person has given me a list of all the products and stores that they want in each group

Once I'm done adding all the items, clicking OK will take me to the Edit Bin Name window where I can give my new group (or Bin) a custom name.

Since the person who requested the report probably wants the Others group to be last, I am going to add numerical values to each group. This will give me the flexibility to order columns any way that I choose. After I’m done creating all my Bins, there is a checkbox at the bottom of the Bins window where I can create a custom bin for all other values.

Clicking OK will return me to the Criteria where I can view the formula that OBIEE generated for all of my selections.

When using Bins, OBIEE generates a custom Case statement for us. At a high level, a Case statement takes a set of conditions and executes a corresponding statement. In other words, when the Product Name column has these values, put them all together and call them THIS.

If I go to Results, I can see the Bins I just created.

Now I can return to the Criteria to create the groups for the stores. Upon completion, here is the formula for the Locations:

And the result:

Now I need to come up with a way to be able to switch between the Product and Location columns I just created. There are a couple of options. Since I'm using OBIEE 12c, I can save both the Products and Locations columns to the web catalog and simply create a column selector just like I would do for any of the columns within a subject area.

There are two problems with this method. One is that the Save Column As feature is only available from OBIEE 11.1.1.9 forward so, if you are still on 11.1.1.7 or earlier, this method won't work. The second problem is that the request is not just for swapping the Product and Location columns with a prompt but also being able to select individual values for each column with a prompt. To avoid confusion, we probably want the prompt values for each column to change dynamically depending on which column is selected. To accomplish this I am going to use a second method which requires me to do the following:

  1. Combine the case statements for both Products and Locations column formulas

  2. Insert two additional case statements along with presentation variables to allow me to switch between the two.

  3. Create a filter for the combined column with a presentation variable to allow me to use prompted values.

  4. Create a variable type dashboard prompt to select between the Product and Locations independently.

  5. Create a dashboard prompt that dynamically changes between the value selections for the Products and Locations.

I'm going to start with the first two steps since they require editing the same column formula.

I am going to create a new column and copy both my Products and Locations column formulas into the new column's edit formula window and, to make it easier to edit and insert the two additional case statements, I am going to put a blank line between them.

Now I need to put in my two additional case statements with presentation variables so that I can switch between the two. The syntax and naming convention I am going to use is as follows:

CASE WHEN @{pv_select}['@']{'Products'} = 'Products' THEN [Products Case Statement]

CASE WHEN @{pv_select}['@']{'Products'} = 'Location' THEN [Locations Case Statement]

Let me take a moment to examine and explain the presentation variable I'm using.

The syntax for putting a presentation variable in a column formula, column heading or text object is as follows:

@{presentation_variable_name}[formatting]{default_value}

Note: I am going to use the presentation variable name (pv_select) in my variable prompt later in this example

The default value I have defined (Products) is an optional step that is primarily used for when I want to view my report without any values being passed into the presentation variable. The default value is used when no value has been passed into the presentation variable.

Notice that I have also included formatting between the presentation variable name and the default value (['@']). The reason for this is because the prompt values (which will be defined as Products and Locations in the variable prompt I will create) are a string (text) datatype and I have to explicitly define the format I need for the presentation variable value or I will get an error. You have to define formatting using single brackets [] and then encapsulate the @ character with the formatting you need applied to the presentation variable value. Think of the @ character as a sort of shorthand or an abbreviation for the presentation variable itself.

Note: I could enclose my presentation variable in single quotes to achieve the same results in this scenario ('@{pv_select}{Products}').

So, to explain the entire statement another way:

CASE WHEN the value of the presentation variable coming from the prompt is equal to 'Products', THEN use the CASE statement for the product groups

CASE WHEN the value of the presentation variable coming from the prompt is equal to 'Location', THEN use the CASE statement for the location groups
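To make the substitution concrete: with pv_select set to Location by the prompt, the condition

CASE WHEN @{pv_select}['@']{'Products'} = 'Location' THEN ...

is evaluated by OBIEE as

CASE WHEN 'Location' = 'Location' THEN ...

so the Locations branch of the outer CASE is the one that runs.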

My new column formula is as follows:

CASE WHEN @{pv_select}['@']{Products} = 'Products' THEN

CASE WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('All Butter Croissant','Almond Croissant','Banana and Walnut Muffin','Blueberry Muffin','Caramel Shortbread','Chocolate Chip Muffin','Lemon Drizzle Muffin','Maple Pecan Danish') THEN '1. Breakfast Foods' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Americano','Caramel Hot Cocoa','Classic Hot Cocoa','Dark Hot Cocoa','Expresso','Filter Coffee','Iced Caramel Latte','Iced Cocoa','Iced Latte','Iced Mocha','Latte','Macchiato','Mocha','Pain Au Chocolat','Vanilla Hot Cocoa','White Hot Cocoa') THEN '2. Coffee' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Arabica Beans','Arabica Ground','Robustica Beans','Robustica Ground','GCBC Special Blend Instant') THEN '3. Coffee Beans' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Chai Tea','Chamomile Tea','Earl Grey Tea','Fresh Mint Tea','Jasmine Tea','Spearmint Tea') THEN '4. Tea' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Chamomile Tea Bags','Chai Tea Bags','Earl Grey Tea Bags','English Breakfast Tea Bags','Fresh Mint Tea Bags','Jasmine Tea Bags','Spearmint Tea Bags') THEN '5. Tea Bags' ELSE '6. Other' END

WHEN @{pv_select}['@']{'Products'} = 'Location' THEN

CASE WHEN "Shops"."Shop Name" IN ('SF1 - Fisherman''s Wharf', 'SF2 - South Beach', 'SJ1 - First Street Corridor', 'SV1 - Mountain View') THEN '1. California' WHEN "Shops"."Shop Name" IN ('BO1 - Financial District', 'BO2 - Atlantic Wharf', 'BO3 - Back Bay', 'CA2 - Harvard Square') THEN '2. Massachusetts' WHEN "Shops"."Shop Name" IN ('DC1 - Central Business District', 'DC2 - Dupont Circle') THEN '3. District of Columbia' WHEN "Shops"."Shop Name" IN ('NY1 - Times Square', 'NY2 - Fifth Avenue') THEN '4. New York' ELSE '5. Others' END
ELSE 'Other' END

I am also going to use the presentation variable in the column heading so that when I make my selection in the prompt, the column heading will reflect the selection as well.

If I go to my results, notice how only the Products column is showing due to the default value I set when defining my presentation variable.

If I go back to my column formula and change the default value to be Location instead of Products, the column values will change to the ones defined in my Locations case statement.

In order for the report to be filtered for individual values, I need to create a filter using a presentation variable. To do this I am going to create a filter for the column I used to define all of my groups and, instead of selecting a value, I am going to put in a new presentation variable which I will define later when I create the dashboard prompt for the values.

I am going to name my new presentation variable pv_select_value.

When a Presentation Variable is added to the filter, two new text boxes appear. The Variable Expr box is where you define the variable to be used and the (default) box is used to add a default value. Notice how I am not using the syntax @{presentation_variable_name}. When defining a presentation variable using the presentation variable option in a filter or in a prompt, you only have to define the name. Once I click ok, OBIEE will insert the proper syntax for me. The default value is optional but, when defining a Presentation Variable within a filter, you have to specify a default value in order to get any results within the analysis. If I run the report with just a presentation variable defined with no default value, I will get the error shown below:

This is because, when the report is run, the query issued is trying to use the value of the presentation variable. Since there is no default value specified, it is trying to use the presentation variable name itself as a value. If I specify a default value or a list of default values, the report will use those values in the absence of a value or values in the presentation variable. Because I want all of the values to be shown initially on the dashboard, I am going to use the Convert this filter to SQL option and input all of the values for the product group manually for the list of default values.

If I run the report with the default values specified, I get the results back for the product group.

In order for the filter to work with the column prompt I will create later in this example, it requires the [formatting] syntax to be used (for example, @{pv_select_value}['@']{'1. Breakfast Foods', '2. Coffee', '3. Coffee Beans', '4. Tea', '5. Tea Bags', '6. Other'}). This will encapsulate each value passed into the pv_select_value presentation variable with single quotes.

Now I need to add an additional filter on the same column using the same pv_select_value presentation variable. The reason for this is because I need to define each default value for the location group as well so that all of the values for location will be shown on the dashboard. To do this, I am going use the same process that I used for the product group filter.

There is one additional step that I need to do in order to make the filter work for both products and locations. When you initially create report filters, OBIEE assumes that you want to use all filters together at the same time therefore it uses 'AND' for each filter created after the initial filter.

If I run the report, I will get an error because the filter is being applied for both the products and location values. Because the case statement is defined for products and locations independently, I cannot use all values in a filter at the same time.

To fix this, I need to change the operator for the location filter from 'AND' to 'OR'. I can do this by simply clicking on the 'AND' icon.

Now the report will run because only the first set of conditions for the product group filter is applied.

Now I need to create my two dashboard prompts: One for the initial selection between my two case statements for Products and Locations and one for the values.

I start by creating a new dashboard prompt and create a new Variable Prompt.

Because this is a variable prompt, I have to manually specify the values I want the user to be able to select from as well as define the presentation variable. Since I am creating the prompt for switching between the two case statements for Products and Locations, I am going to make the following selections and input the following values.

Prompt for: Presentation Variable > pv_select

User Input: Choice List

Choice List Values: Custom Values

Values: Products and Location

Default Selection: Specific Custom Value > Products

As you can see, I have defined the variable that we used in the column formula for the case statements. By defining the two custom values (Products and Location), I can select between the two case statements at will and, as far as the user is concerned, it will be just like using a column prompt within a report. Also, the default value needs to be specified for reasons I will explain when creating the next prompt for the values.

Now I am going to create a column prompt for the values.

First, I need to create a new column prompt and put in the same column formula that I used earlier in my report. I can do that by clicking on the fx icon and pasting in the column formula.

Essentially what I am doing is making the column values populate for the prompt by using the same case statement definitions and they will also change depending on the value selected in the prompt we created previously.

For the choice list of values, I can simply leave the default of All Column Values. Since the values will dynamically change depending on the initial prompt value in the variable prompt, it will only show values corresponding to each selection.

I do not need to define a default selection because I set the default values in the two filters I created in my report.

Finally, all that is left to do is define the pv_select_value presentation variable I defined for the filter of the case statement column in the report. This is how I will get all of the values to be shown for each initial prompt selection and also how I will select individual values. The default values will be passed into the filter via the pv_select_value presentation variable initially which will prompt all the values to be shown and then individual selections will be passed in the same way.

Here are the results when placed on the dashboard.

So that concludes part 1 of Demystification of Variables. Please feel free to ask questions or leave me a comment! In part 2, I am going to talk about using Repository, System and Presentation Variables to make reports dynamic for any series of time. Until next time.

Categories: BI & Warehousing

Latest Release of Industry Leading Oracle Database Now Available in the Cloud, with Oracle Cloud at Customer, and On-Premises

Oracle Press Releases - Mon, 2017-03-06 10:00
Press Release
Latest Release of Industry Leading Oracle Database Now Available in the Cloud, with Oracle Cloud at Customer, and On-Premises Leading-edge Database technology now available in all environments

Redwood Shores, Calif.—Mar 6, 2017

Oracle today announced that the latest version of the world’s number one database, Oracle Database 12c Release 2 (12.2), is now available everywhere—in the cloud, with Oracle Cloud at Customer, and on-premises. Oracle Database 12.2 includes enhancements to the unique multitenant architecture and in-memory database technologies that provide customers with outstanding consolidation, performance, reliability, and security for all workloads including entry-level development and mission critical workloads. Additionally, the release includes more than 300 new features and enhancements across availability, performance, security, and developer productivity.

“Oracle has led the industry by providing the fastest and most reliable highly secure database for organizations of all sizes, and we continue to innovate and help customers transform to the cloud with minimum effort and risk,” said Andy Mendelsohn, executive vice president of database server technologies, Oracle. “With this announcement, Oracle is completing the rollout of Oracle Database 12.2, making it even easier for organizations to deploy Oracle Database 12c wherever they need it—in the cloud or on-premises.”

Oracle Database 12.2’s massive cloud scalability and real-time analytics offer customers greater agility, faster time to business insights, and real cost advantages. New innovations in Oracle Database 12.2 include:

  • Massive savings for consolidated and SaaS environments with up to 4,096 pluggable databases
  • Increased agility with online cloning and relocation of pluggable databases
  • Greatly accelerated in-memory database performance
  • Offload of real-time in-memory analytics to active standby databases
  • Native database sharding
  • Massive scaling with Oracle Real Application Clusters (RAC)
  • JSON document store enhancements

Multiple independent industry analyst reports recently recognized Oracle Database 12c technology leadership for common database workloads, including online transaction processing, hybrid transactional and analytical processing, data warehousing, internet of things (IoT), and in-memory database. Following are some of these reports:

Showcasing the impact of Oracle Database 12c for SaaS Clouds, Oracle Taleo Talent Management Cloud deployed Oracle Multitenant and achieved 25x more efficiency than using a virtual machine based architecture.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Kristin Reeves

  • +1.415.856.5145

User Active or Deactive

Tom Kyte - Mon, 2017-03-06 06:46
I used "alter user scott account lock;" for lock the account. Now i try to access database so oracle prompts a message user is locked. but when i connect via website build in asp.net , I first time also get message via exception handling in front-s...
Categories: DBA Blogs

weird behavior for namespaces of public synonym and normal tables

Tom Kyte - Mon, 2017-03-06 06:46
Hi ,tom I am currently working with synonym in oracle and find the fact that: SQL> create public synonym mysynonym for myschema.mytable; Synonym created. SQL> create table myschema.mytable(a int); ERROR at line 1: ORA-00955: name is alre...
Categories: DBA Blogs

DB Block size greater than 8K

Tom Kyte - Mon, 2017-03-06 06:46
Hi Tom, I am looking for guidance on when to choose a block size greater than 8K for Oracle DB. I have seen few posts from the past which indicated 8K typically should do fine for most scenarios but have always been under the impression that for D...
Categories: DBA Blogs

60 SQL Interview Questions and Answers

Complete IT Professional - Mon, 2017-03-06 05:00
Are you going for a job where you need to know SQL, such as a Database Developer or Database Administrator? Brush up on your interview questions with this extensive list of SQL interview questions. This collection of interview questions on SQL has been collated from my experience with SQL and from various websites. It contains […]
Categories: Development

12cR1 RAC Posts -- 7 : OCR Commands

Hemant K Chitale - Mon, 2017-03-06 03:11
[Yes, I know that 12.2 is now available for download but it will be some time before I have a running 12.2 RAC environment]

Some OCR / OLR Commands :

The OCR is the Cluster Registry.  We also have an OLR that is the Local Registry which is created on a local filesystem.

We can check the consistency of the Registry with ocrcheck.  Note the difference between using oracle (or grid) and using root to run the check.  oracle can't check the OLR and can't do a logical consistency check of the OCR -- both require running as root.

[root@collabn1 ~]# su - oracle
[oracle@collabn1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@collabn1 ~]$ ocrcheck -local
PROTL-602: Failed to retrieve data from the local registry
PROCL-26: Error while accessing the physical storage Operating System error [Permission denied] [13]
[oracle@collabn1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1676
Available space (kbytes) : 407892
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

[oracle@collabn1 ~]$ su
Password:
[root@collabn1 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1036
Available space (kbytes) : 408532
ID : 1014277103
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1676
Available space (kbytes) : 407892
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#


Oracle automates backups of the OCR (but not the OLR!).  Below, -showbackuploc shows the location of the backups.

[root@collabn1 oracle]# ocrconfig -showbackuploc
The Oracle Cluster Registry backup location is [/u01/app/12.1.0/grid/cdata/]
[root@collabn1 oracle]# ls -lt /u01/app/12.1.0/grid/cdata
total 1272
-rw-------. 1 root oinstall 503484416 Mar 6 17:03 collabn1.olr
drwxrwxr-x. 2 oracle oinstall 4096 Jan 16 14:12 collabn-cluster
drwxr-xr-x. 2 oracle oinstall 4096 Dec 19 15:06 collabn1
drwxr-xr-x. 2 oracle oinstall 4096 Dec 19 14:37 localhost
[root@collabn1 oracle]# ls -lt /u01/app/12.1.0/grid/cdata/collabn1
total 820
-rw-r--r--. 1 root root 839680 Dec 19 15:06 backup_20161219_150615.olr
[root@collabn1 oracle]# ocrconfig -showbackup

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup00.ocr 0

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/day.ocr 0

collabn1 2017/01/16 14:09:40 /u01/app/12.1.0/grid/cdata/collabn-cluster/week.ocr 0

collabn2 2016/12/19 15:47:24 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154724.ocr 0

collabn2 2016/12/19 15:47:16 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154716.ocr 0
[root@collabn1 oracle]#


All the recent (4-hourly, daily, weekly) backups of the OCR are on the "master" node -- collabn1 -- which comes up first in my cluster.  The 19-Dec backups (of the OCR and OLR) are from when I started setting up the cluster.  Note that there are no subsequent (automated) OLR backups.
Note: There are no 4-hourly/daily/weekly backups since 16-Jan because I haven't had my cluster running for long enough for those backups to kick in.

[root@collabn1 oracle]# ocrconfig -local -manualbackup

collabn1 2017/03/06 17:11:29 /u01/app/12.1.0/grid/cdata/collabn1/backup_20170306_171129.olr 0

collabn1 2016/12/19 15:06:15 /u01/app/12.1.0/grid/cdata/collabn1/backup_20161219_150615.olr 0
[root@collabn1 oracle]# ocrconfig -manualbackup

collabn1 2017/03/06 17:12:21 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20170306_171221.ocr 0

collabn2 2016/12/19 15:47:24 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154724.ocr 0

collabn2 2016/12/19 15:47:16 /u01/app/12.1.0/grid/cdata/collabn-cluster/backup_20161219_154716.ocr 0
[root@collabn1 oracle]#


I can run manual backups (the -local is for the OLR) as shown above.

It is important to include these backups in the backup strategy for the filesystem(s) that hold the Grid Infrastructure and RDBMS installations (binaries, configuration files, trace files etc).
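A rough sketch of what that could look like -- the target directory is just a hypothetical example, and the source paths are the ones shown above:

# run as root: copy the automatic OCR backups and the manual OLR backups
# to an area that is covered by the regular filesystem backup
cp -p /u01/app/12.1.0/grid/cdata/collabn-cluster/*.ocr /backup/grid/
cp -p /u01/app/12.1.0/grid/cdata/collabn1/*.olr /backup/grid/

(ocrconfig -backuploc can also be used, as root, to point the automatic OCR backups at a different -- for example shared -- location.)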
.
.
.
Categories: DBA Blogs

Can You Share EBS Database Homes?

Steven Chan - Mon, 2017-03-06 02:04

No. The Oracle E-Business Suite database ORACLE_HOME cannot be shared between multiple EBS instances.

The Oracle E-Business Suite database ORACLE_HOME must be used exclusively for a single EBS database.  It cannot be shared with other Oracle E-Business Suite instances or other applications.  This applies to all EBS releases, including EBS 12.1 and 12.2.

Why does this restriction exist?

Configurations, log files, and more must be unique to a given instance.  Existing tools are designed to work with a single database associated with a single application.  For example, the EBS pre-clone tool creates a clone directory that is related to a specific database.  AutoConfig is designed to run for a particular application+database combination.

What are the support implications if you ignore this restriction?

Running these tools in an environment where multiple applications are associated with a single database ORACLE_HOME will have unpredictable results.  If you report an issue whose root cause is found to be due to the sharing of a single database ORACLE_HOME between multiple EBS instances, our default recommendation would be to revert to a configuration where each EBS instance has its own database ORACLE_HOME.

Oracle will produce patches only for issues that can be reproduced in an environment where a single database ORACLE_HOME is associated with a single EBS application. 

Categories: APPS Blogs

OUAF 4.3.0.4.0 On its way

Anthony Shorten - Sun, 2017-03-05 15:32

We are currently putting the final touches on the next service pack (SP4) for the latest Oracle Utilities Application Framework release (4.3). This is a very exciting release for us: a lot of the functionality that we use for the cloud implementations of our products is being made available to customers on the cloud as well as to customers with non-cloud implementations.

Over the next few weeks I will be releasing a series of articles highlighting some of the major changes we have introduced in this service pack that will be of interest to people in the field for their non-cloud implementations.

The release adds new functionality, updates existing functionality and retires functionality that we have previously announced as deprecated. You will start seeing products released based upon this new service pack in the upcoming months.

It is a very exciting time for Oracle Utilities and this release will be a foundation for even more exciting functionality we have planned going forward.
