Feed aggregator

Thanks Oracle, for making my life easier!

Jeff Moss - Tue, 2016-05-03 06:11

So, I’m really pleased to see that Oracle has acquired Opower. Why? Well, for the last few months I’ve occasionally had to do an extract for Opower….and another for Oracle on the same stuff…..hopefully I’ll only need to do the one extract from now on…yay!

Oracle’s New Data Visualization Desktop

Rittman Mead Consulting - Tue, 2016-05-03 05:00

A recent addition to the Oracle lineup of visualization tools is the Oracle Data Visualization Desktop. Described by Oracle as a “single user desktop application that provides Oracle Data Visualization functionality to business users,” DVD is an easy-to-install data visualization tool for Windows 7, 8 or 10 that packs some very powerful features.

I recently had a chance to sit down and explore DVD and wanted to share some of my first impressions.

At its core, DVD is a stand-alone version of Oracle’s Data Visualization Cloud Service (DVCS). If you are at all familiar with Visual Analyzer, you will feel right at home.

Installation was a breeze on my Windows 10 VM: it took only about five minutes and required no additional software or plugins for the standard VA functionality.

After installation, launching DVD is as easy as clicking on the desktop icon like any other stand-alone application.

After DVD started up, I was presented with a home screen containing a search field for finding projects, a list of user folders and a main window for selecting individual visualizations that have already been created.

Clicking on the hamburger icon in the top left corner brings up a menu where I can choose to start Visual Analyzer with the last data source selected, select new Data Sources or create a new VA Project.

I chose to create a new VA project and selected the sample data from Oracle (the sample data is an optional install chosen during the DVD install process). Creating a dashboard was a fairly straightforward process: using Visual Analyzer’s familiar drag-and-drop approach to columns, I was able to put together a simple sales and profit dashboard in a few minutes.

While creating my dashboard, I noticed that Oracle has included some new visualization types. You can now choose Scatter (Cat.), Stacked Scatter (Cat.), Donut or Sunburst visualizations.

One other feature that Oracle added to DVD is the ability to insert images onto the dashboards. You can choose to upload your own image or link to a URL to pull images from the web.

I uploaded an image and changed the canvas layout to freeform, which allowed me to move the image anywhere on the dashboard. By adjusting the transparency it is possible to have the image underlay the entire dashboard and still be able to see the visualizations. This example is pretty extreme, and in a real world scenario caution should be used so as not to obstruct the visualizations.

Next I decided to try to connect to my Oracle 12c sample database to pull in some new data to work with. Selecting “Create New Datasource” from the menu prompted me with three options: create from a file, from an existing app or from a database.

Clicking on the “From Database” option, I was presented with a connection screen.

On this screen I discovered one of the most impressive things about DVD. Clicking on “Database Type” reveals a dropdown menu from which you can choose a variety of database types, including Spark, Hive and MongoDB, among others.

That’s awesome.

Because I already had 12c DB installed, I selected the Oracle Database Type and entered all my connection information.

Once a connection to the database is made, it shows up in the available connections list. Clicking on my sample database brought up a list of available schemas to choose from. In this case, I chose the sample HR schema which then brings up a list of tables available to add as data sources for visualizations.

I chose to add EMPLOYEES, JOBS and LOCATIONS and then started a new VA project. The HR tables now show up in the list of available data sources.

I selected EMPLOYEES and JOBS and, within seconds, was able to create a simple table showing a list of employee names, their job titles, salaries and commission percentages.

As you can see, adding new data sources is quick and easy and allows users to explore their data and create meaningful visualizations from that data in a very short amount of time.

Another feature is the Advanced Analytics portion of Oracle Data Visualization Desktop. This feature, which uses R, gives users the ability to do things like highlight outliers or show trend lines with a click of a button.

This feature does require an optional install located within the DVD application folder. The install process proved once again to be very quick and easy and completed in about 5 minutes.

After the installation was complete, I created a new VA project. Choosing the sample data provided by Oracle for DVD, I created a quick scatter chart and then, by right clicking anywhere on the visualization, clicked “Add Outliers.”

As you can see, outliers and non-outliers are easily distinguishable by the color key that DVD assigned automatically.

Next, I wanted to see if I could change some of the colors in my visualization. DVD allows you to do this under the visualization menu.

As with OBIEE, DVD supports both entering specific hex values and selecting from pre-made color palettes.

Using the same right-click functionality that I used for adding outliers, I was also able to add a polynomial trend line to show gains and losses.

Next, I decided to see if I could export this data and import it into Excel. Choosing export from the visualization menu, I was able to easily export the data as a .CSV and upload it into Excel.

Overall, Oracle Data Visualization Desktop is a very impressive new addition to the DVCS lineup. The ability to collect data from multiple sources, its native adaptors for a variety of popular databases, and the ability to manipulate visualizations to convey the data in creative ways make it a strong contender against Tableau and Wave. It requires no remote server infrastructure and delivers Oracle Data Visualization functionality to business users in a small and easily accessible package.

I feel as though I have just scratched the surface of everything this tool can do. Check back for future blogs and articles as we at Rittman Mead continue to explore the possibilities of DVD. The future of data visualization may be closer than we think.

If you would like more information about Visual Analyzer or the Oracle Cloud Service, see this blog post by Mark Rittman.

If you would like to watch the official Tech Demo of DVD, you can find it here.

Rittman Mead also offers in-depth professional training courses for OBIEE 12c and Visual Analyzer.

The post Oracle’s New Data Visualization Desktop appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Debugging

Jonathan Lewis - Tue, 2016-05-03 02:11

The OTN database forum supplied a little puzzle a few days ago – starting with the old, old, question: “Why is the plan with the higher cost taking less time to run?”

The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data the model used isn’t perfect anyway. This was the correct answer in this case, but with a little twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:


SELECT  /* INDEX(D XPKCLIENT_ACCOUNT) */ 
        E.ECID,A.acct_nb
FROM    
        client_account d, 
        client         e, 
        account        a
where
        A.acct_nb = '00000000000000722616216'
AND     D.CLNT_ID = E.CLNT_ID
AND     D.ACCT_ID=A.ACCT_ID;

Plan (A) with a full tablescan of client_account – cost 808, runtime 1.38 seconds, buffer gets 17,955


-------------------------------------------------------------------------------------------------
| Id | Operation                      | Name           | Rows  | Bytes  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                |     1 |    59  |   808 (14) | 00:00:10 |
|  1 |  NESTED LOOPS                  |                |     1 |    59  |   808 (14) | 00:00:10 |
|  2 |   NESTED LOOPS                 |                |     1 |    59  |   808 (14) | 00:00:10 |
|* 3 |    HASH JOIN                   |                |     1 |    42  |   806 (14) | 00:00:10 |
|  4 |     TABLE ACCESS BY INDEX ROWID| ACCOUNT        |     1 |    30  |     5  (0) | 00:00:01 |
|* 5 |      INDEX RANGE SCAN          | XAK1ACCOUNT    |     1 |        |     4  (0) | 00:00:01 |
|  6 |     TABLE ACCESS FULL          | CLIENT_ACCOUNT |  9479K|   108M |   763 (10) | 00:00:09 |
|* 7 |    INDEX UNIQUE SCAN           | XPKCLIENT      |     1 |        |     1  (0) | 00:00:01 |
|  8 |   TABLE ACCESS BY INDEX ROWID  | CLIENT         |     1 |    17  |     2  (0) | 00:00:01 |
-------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
     0  recursive calls
     0  db block gets
 17955  consistent gets
     0  physical reads
     0  redo size
   623  bytes sent via SQL*Net to client
   524  bytes received via SQL*Net from client
     2  SQL*Net roundtrips to/from client
     0  sorts (memory)
     0  sorts (disk)
     1  rows processed

Plan (B) with an index fast full scan on a client_account index – cost 1,190, runtime 0.86 seconds, buffer gets 28696


----------------------------------------------------------------------------------------------------
| Id | Operation                      | Name              | Rows  | Bytes  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|  1 |  NESTED LOOPS                  |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|  2 |   NESTED LOOPS                 |                   |     1 |    59  |  1190  (8) | 00:00:14 |
|* 3 |    HASH JOIN                   |                   |     1 |    42  |  1188  (8) | 00:00:14 |
|  4 |     TABLE ACCESS BY INDEX ROWID| ACCOUNT           |     1 |    30  |     5  (0) | 00:00:01 |
|* 5 |      INDEX RANGE SCAN          | XAK1ACCOUNT       |     1 |        |     4  (0) | 00:00:01 |
|  6 |     INDEX FAST FULL SCAN       | XPKCLIENT_ACCOUNT | 9479K |   108M |  1145  (5) | 00:00:13 |
|* 7 |    INDEX UNIQUE SCAN           | XPKCLIENT         |     1 |        |     1  (0) | 00:00:01 |
|  8 |   TABLE ACCESS BY INDEX ROWID  | CLIENT            |     1 |    17  |     2  (0) | 00:00:01 |
----------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
     0  recursive calls
     0  db block gets
 28696  consistent gets
     0  physical reads
     0  redo size
   623  bytes sent via SQL*Net to client
   524  bytes received via SQL*Net from client
     2  SQL*Net roundtrips to/from client
     0  sorts (memory)
     0  sorts (disk)
     1  rows processed

Note, particularly, that the two plans are the same apart from operation 6 where a full tablescan changes to an index fast full scan, predicting the same number of rows but with an increase of 50% in the cost; the increase in cost is matched by an increase in the reported workload – a 60% increase in the number of consistent reads and no disk reads or recursive SQL in either case. Yet the execution time (on multiple repeated executions) dropped by nearly 40%.

So what’s interesting and informative about the plan ?

The cost of a tablescan or an index fast full scan is easy to calculate; broadly speaking it’s “size of object” / “multiblock read count” * k, where k is some constant relating to the hardware capability. The costs in these plans and the autotrace statistics seem to be telling us that the index is bigger than the table, while the actual run times seem to be telling us that the index has to be smaller than the table.

It’s easy for an index to be bigger than its underlying table, of course; for example, if this table consisted of nothing but two short columns the index could easily be bigger (even after a rebuild) because it would be two short columns plus a rowid. If that were the case here, though, we would expect the time to fast full scan the index to be higher than the time to scan the table.
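
Of course, if you could get at the data dictionary on the system itself you could short-circuit some of the guesswork by comparing the two segments directly. A minimal sketch (assuming you are connected as the owning schema; the segment names are taken from the execution plans):

select segment_name, segment_type, blocks, bytes/1048576 as size_mb
from   user_segments
where  segment_name in ('CLIENT_ACCOUNT', 'XPKCLIENT_ACCOUNT')
order by segment_name;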

So two thoughts crossed my mind as I looked at operation 6:

  • Mixing block sizes in a database really messes up the optimizer costing, particularly for tablescans and index fast full scans. Maybe the table had been built in a tablespace using 32KB  blocks while the index had been built in a tablespace using the more common 8KB blocksize – I didn’t want to start working out the arithmetic but that might be just enough to produce the contradiction.
  • Maybe the table was both bigger AND smaller than the index – bigger because it held more data, smaller because it had been compressed. If so then the difference in run-time would be the overhead of decompressing the rows before projecting and comparing the data.

Conveniently the OP has included an extract from the 10053 trace:


Table Stats::
  Table: CLIENT_ACCOUNT  Alias:  D
    #Rows: 9479811  #Blks:  18110  AvgRowLen:  71.00  ChainCnt:  0.00
  Column (#1): CLNT_ID(
    AvgLen: 6 NDV: 1261035 Nulls: 0 Density: 0.000001 Min: 0 Max: 4244786
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 239
  Column (#2): ACCT_ID(
    AvgLen: 6 NDV: 9479811 Nulls: 0 Density: 0.000000 Min: 1 Max: 22028568
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 255

Index Stats::
  Index: XPKCLIENT_ACCOUNT  Col#: 1 2
    LVLS: 2  #LB: 28543  #DK: 9479811  LB/K: 1.00  DB/K: 1.00  CLUF: 1809449.00

Note that the index is called xpkclient_account – which suggests “primary key” – and the number of distinct keys in the index (#DK) matches the number of rows in the table (#Rows). The index and table stats seem to be consistent so we’re not looking at a problem of bad statistics.

Now to do some simple (ballpark) arithmetic: for the table we can check whether “rows * average row length / 8K = blocks”. We can read the numbers directly from the trace file: 9,500,000 * 71 / 8,000 = 84,000. That’s wrong by a factor of about 4 (so maybe it’s a 32K block, and maybe I could rule out that possibility by including more detail in the arithmetic – like allowing properly for the block header, row overheads, pctfree etc).

For the index – we believe it’s the primary key, so we know the number of rows in the index – it’s the same as the number of distinct keys. As for the length of an index entry, we have the index definition (col#: 1 2) and we happen to have the column stats about those columns so we know their average length. Allowing for the rowid and length bytes we can say that the average index entry is (6 +1) + (6 + 1) + 6 = 20 bytes.  So the number of leaf blocks should be roughly 9,500,000 * 20 / 8,000 = 23,750. That’s close enough given the reported 28,543 and the fact that I haven’t bothered to worry about row overheads, block overheads and pctfree.

The arithmetic provides an obvious guess – which turned out to be correct: the table is compressed, the index isn’t. The optimizer hasn’t allowed for the CPU cost of decompressing the compressed rows, so the time required to decompress 9.5M rows doesn’t appear in the execution plan.
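
(If you have access to the schema, that guess is trivial to confirm from the data dictionary – for example with something like the following; the compress_for column exists from 11gR2 onwards.)

select table_name, compression, compress_for
from   user_tables
where  table_name = 'CLIENT_ACCOUNT';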

Footnote.

Looking at the column stats, it looks like there are roughly 8 acct_ids for each clnt_id, so it would probably be sensible to compress the primary key index (clnt_id, acct_id) on the first column as this would probably reduce the size of the index by about 20%.
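
In sketch form that might be no more than the following – though you would obviously want to validate the predicted saving (for example with a test rebuild) before doing it on production:

alter index xpkclient_account rebuild compress 1;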

Better still – the client_account table has very short rows – it looks like a typical intersection table with a little extra data carried. Perhaps this is a table that should be an index-organized table with no overflow. It looks like there should also be an index (acct_id, clnt_id) on this table to optimise the path from account to client and this would become a secondary index – interestingly being one of those rare cases where the secondary index on an IOT might actually be a tiny bit smaller than the equivalent index on a heap table because (in recent versions of Oracle) primary key columns that are included in the secondary key are not repeated in the index structure. (It’s a little strange that this index doesn’t seem to exist already – you might have expected it to be there given the OP’s query, and given that it’s an “obvious” requirement as an index to protect the foreign key.)
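
For illustration only, a skeleton of the IOT strategy might look like the following – the column datatypes and the “filler” column standing in for the extra carried data are hypothetical, since all we know from the trace is the two key columns:

create table client_account (
        clnt_id   number       not null,
        acct_id   number       not null,
        filler    varchar2(30),        -- stand-in for the little extra data the table carries
        constraint xpkclient_account primary key (clnt_id, acct_id)
)
organization index;

create index client_account_ix1 on client_account (acct_id, clnt_id);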

The only argument against the IOT strategy is that the table clearly compresses very well as a heap table, so a compressed heap table plus two B-tree indexes might be more cost-effective than an IOT with a single secondary index.

 


Links for 2016-05-02 [del.icio.us]

Categories: DBA Blogs

SQL vs. Excel – Subgroup medians

RDBMS Insight - Mon, 2016-05-02 19:34

Recently I ran across this post on how to do subgroup medians in Excel 2010. First you need to create a pivot table, then “do some copying and pasting and use a formula to make it happen”. In SQL you can do this with one command.

Suppose that you have the same table as the Excel article, something like this:

CREATE TABLE sampletab
(arrest_day_of_week varchar2(10), 
arrest_ts TIMESTAMP, 
fingerprint_ts TIMESTAMP, 
days_between NUMBER);

and you want to get the median value of days_between for each day of the week.

The steps in Excel apparently go like this:

  1. Create pivot table to get the means
  2. Copy and paste the column values from the pivot table
  3. For Sunday, create an IF formula to include a cell’s days_between in the median calculation only if the arrest_day_of_week for that row is Sunday
  4. Repeat for other six days

Ouch!

In SQL, it’s one simple statement:

SELECT arrest_day_of_week, median(days_between) AS median_days_between FROM sampletab
GROUP BY arrest_day_of_week;
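
As an aside, if your database doesn’t have a MEDIAN() aggregate, you can usually get the same result with the more widely available PERCENTILE_CONT function – in Oracle the statement above is equivalent to:

SELECT arrest_day_of_week,
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY days_between) AS median_days_between
FROM sampletab
GROUP BY arrest_day_of_week;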

Conclusion – if you’re into data analysis, SQL can be a big time-saver!

Categories: DBA Blogs

Oracle Fusion Middleware : WebLogic is Key, Learn Domain, Install/Configure, Start/Stop : Part II

Online Apps DBA - Mon, 2016-05-02 17:53

 This is part II of my journey on learning Oracle Fusion Middleware with me (Atul Kumar). In part I here I covered why I started learning Oracle Fusion Middleware almost 10 Years ago, How it transformed my career and I now charge premium rate for my consulting Services and mentioned that If I can do it […]

The post Oracle Fusion Middleware : WebLogic is Key, Learn Domain, Install/Configure, Start/Stop : Part II appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Reserved words usage in MySQL

Pythian Group - Mon, 2016-05-02 15:07

It is not uncommon to come across MySQL databases where reserved words are in use as identifiers for any kind of database objects.

Perhaps when the application schema was implemented, the words were not reserved yet, and they became reserved later on a subsequent MySQL release.

It is a good practice to check reserved words usage prior to doing any database upgrades, as any newly reserved keywords will cause syntax errors on the new version.

This is usually not a problem if proper quoting is used for referencing the objects, as described on the official manual page.

The actual steps to do this depend on the environment; for example, the following can be configured to tell Hibernate to escape identifiers:

property name="hibernate.globally_quoted_identifiers" value="true"

This does not appear to be documented properly (there is an open bug unresolved at the time of this writing).
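
For context, here is a hypothetical fragment showing where that property would typically live when Hibernate is used through JPA (a persistence.xml file; the unit name is illustrative, and Spring or a plain hibernate.cfg.xml have equivalent settings):

<persistence-unit name="myUnit">
  <properties>
    <property name="hibernate.globally_quoted_identifiers" value="true"/>
  </properties>
</persistence-unit>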

However, we cannot make the assumption that all application code is properly escaped to deal with this kind of issue.

So what are the symptoms?

Error 1064 will be reported while trying to use a reserved word:

mysql> CREATE TABLE interval (begin INT, end INT);
ERROR 1064 (42000): You have an error in your SQL syntax ...
near 'interval (begin INT, end INT)'
How can we check for reserved words?

The following procedure can help you find out if any particular MySQL version’s reserved words are in use:

  1. Using the list on the corresponding manual page, create a text file with one reserved word on each line
  2. Load data into a temporary table
     USE test;
     CREATE TABLE reserved_words (word VARCHAR(50));   -- column name added; any single-column definition will do
     LOAD DATA INFILE 'reserved_words.txt' INTO TABLE test.reserved_words;
    
  3. Check for any column names using reserved keywords
    SELECT table_schema, table_name, column_name, ordinal_position 
    FROM information_schema.columns
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND column_name = ANY ( SELECT * FROM test.reserved_words ) 
    ORDER BY 1,2,4;
  4. Check for any table names using reserved keywords
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND table_name = ANY ( SELECT * FROM test.reserved_words );
  5. Check for any procedures or functions
    SELECT routine_schema, routine_name, routine_type
    FROM information_schema.routines
    WHERE routine_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' )
    AND routine_name = ANY ( SELECT * FROM test.reserved_words );
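
Depending on the schema, you may also want to run a similar check against other object types – for example, trigger names:

    SELECT trigger_schema, trigger_name
    FROM information_schema.triggers
    WHERE trigger_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' )
    AND trigger_name = ANY ( SELECT * FROM test.reserved_words );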

I hope this post helps you avoid one of the many issues you may encounter during the challenging task of database upgrades.

Categories: DBA Blogs

storage cloud appliance in the cloud

Pat Shuff - Mon, 2016-05-02 10:46
Last week we focused on getting infrastructure as a service up and running. I wanted to move up the stack and talk about platform as a service but unfortunately, I got distracted with yet another infrastructure problem. We were able to install the storage cloud appliance software in a virtual machine but how do you install this in a compute cloud instance? This brings up two issues. First, how do you run a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service. Second, how do you connect and manage this service both from an admin perspective and client from another compute engine in the cloud service.

Let's tackle the first problem. How do you spin up a Linux 7 - 3.10 kernel in the Oracle Compute Cloud Service? If we look at the compute instance creation we can see what images that we can boot from.

There is no Linux 7 - 3.10 kernel in the list, so we need to download and import an image that we can boot from. Fortunately, Oracle provides a good tutorial on importing a bootable image. Following those steps, we first download a CentOS 7 bootable image from cloud.centos.org. The image that we use is CentOS-7-x86_64-OracleCloud.raw.tar.gz. We download this to a local directory and then upload it to the compute cloud image area, which is done by going to the compute console and clicking on the "Images" tab at the top of the screen.

We then upload the tar.gz file, which is a bootable image. This allows us to create a new storage instance that we can boot from. The upload takes a few minutes and once it is complete we need to associate it with a bootable instance. This is done by clicking on the "Associate Image" button, where we basically enter a name to use for the operating system as well as a description.

Note that the OS size is 9 GB, which is really small. We don't have a compute instance at this point; we need to create either a bootable storage element or a compute instance based on this image. We will go through the storage creation first since this is the easiest way of getting started. We first have to change from the Image tab to the Storage tab, click on Create Storage Volume and go through selecting the image, storage name, and size. We went with the given storage size rather than resizing the storage we are creating.

At this point we should be able to create a compute instance based on this boot disk. We can clone the disk, boot from it, or mount it on another instance. We will boot from this instance once it is created. We do this by going to the Instance tab and clicking on Create Instance. It does take 5-10 minutes to create the storage instance and we need to wait until it has completed before creating a compute instance. An example of the creation looks like this:

We select the default network, the CentOS7 storage that we previously created, the 2016 ssh keys that we uploaded, and review and launch the instance.

After about 15 minutes, we have a compute instance based on our CentOS 7 image. Up to this point, all we have done is create a bootable Linux 7 - 3.10 kernel. Once we have the kernel available we can focus on connecting and installing the cloud storage appliance software. This follows the making backup better blog post. There are a couple of things that are different. First, we connect as the user centos rather than oracle or opc. This is a function of the image that we downloaded and not a function of the compute cloud. Second, we need to create a second user that allows us to log in. When we use the centos user and install the oscsa_install.sh script, we can't log in with our ssh keys for some reason. If we create a new user then whatever stops us from logging in as the centos user does not stop us from logging in as oracle, for example. The third thing that we need to focus on is creating a tunnel from our local desktop to the cloud instance. This is done with ssh or putty. What we are looking for is routing the management port for the storage appliance. It is easier to create a tunnel than to change the management port and open up the port through the cloud firewall.

From this we execute the commands we described in the making backup better blog. We won't go through the screen shots on this since we have done this already. One thing is missing from those screenshots: you need to disable SELinux by editing /etc/sysconfig/selinux and rebooting. Make sure that you add a second user before rebooting, otherwise you will get locked out and the ssh keys won't work once this change is made.

The additional steps that we need to do are: create a user, copy the authorized_keys file from an existing user into the new user's .ssh directory, change the ownership, assign a password to the new user, and add the user to /etc/sudoers.

useradd oracle                                # create the new user
mkdir ~oracle/.ssh                            # directory to hold the ssh keys
cp ~centos/.ssh/authorized_keys ~oracle/.ssh  # reuse the keys already uploaded for the centos user
chown -R oracle ~oracle                       # make the new user the owner of its home directory
passwd oracle                                 # assign a password
vi /etc/sudoers                               # add the new user to the sudoers list
The second major step is to create an ssh tunnel that allows you to connect from your local desktop into the cloud compute service. When you create the oscsa instance it starts up a management console using port 32769. To tunnel this port we use putty to connect.
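
For example, with command line ssh the tunnel would look something like the following (substitute your own private key file and the public IP address of your instance; in putty the equivalent is an entry under Connection > SSH > Tunnels):

ssh -i my_cloud_key -L 32769:localhost:32769 oracle@<public-ip-of-instance>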

At this point we should be able to spin up other compute instances and mount this file system internally using the command

 mount -t nfs -o vers=4,port=32770 e53479.compute-metcsgse00028.oraclecloud.internal:/ /local_mount_point
We might want to use the internal ip address rather than the external dns name. In our example this would be the Private IP address of 10.196.89.62. We should be able to mount this file system and clone other instances to leverage the object storage in the cloud.

In summary, we did two things in this blog. First, we uploaded a new operating system that was not part of the list of operating systems presented by default. We selected a CentOS instance that conforms to the requirements of the cloud storage appliance. Second, we configured the cloud storage appliance software on a newly created Linux 7 - 3.10 kernel and created a putty tunnel so that we can manage the directories that we create to share. This gives us the ability to share the object storage as an nfs mount internal to all of our compute servers. It allows for things like spinning up web servers or other static servers all sharing the same home directory or static pages. We can use these same processes and procedures to pull data from the Marketplace and configure more complex installations like JD Edwards, PeopleSoft, or E-Business Suite. We can import a pre-defined image, spin up a compute instance based on that image, and provision higher level functionality onto infrastructure as a service. Up next, platform as a service explained.

Video : Flashback Table

Tim Hall - Mon, 2016-05-02 07:51

Today’s video gives a quick run through of flashback table.

If you prefer to read articles, rather than watch videos, you might be interested in these.

The clip in today’s video comes courtesy of Deiby Gómez, an Oracle ACE Director, OCM 11g & 12c and consultant at Pythian.

Cheers

Tim…


#GoldenGate #Cloud Service (#GGCS) … what to expect?

DBASolved - Mon, 2016-05-02 07:30

As I sit here working on some GoldenGate migrations to AWS for a client, I’ve been thinking about the glimpse of GoldenGate Cloud Service (GGCS) that was provided to me earlier this week. That glimpse has helped me work out what GGCS is and how it is going to work within the Oracle Cloud space. Ever since this service was announced back at Oracle Open World 2015, I’ve been wanting to get my hands on this cloud product from Oracle to better understand it. Hopefully, what I’m about to share with you will provide some insight into what to expect.

First, you will need a cloud account. If you do not have a cloud account; visit http://cloud.oracle.com and sign up for an account. This will typically be the same account you use to login to My Oracle Support (MOS).

Once you have an account and are in the cloud interface, subscribe to some services. You will need a Database Cloud Service or a Compute Cloud Service; these services will be the end points that GGCS points to. As part of setting up the compute node, you will need to set up SSH access with a public/private key. Once you create the GGCS instance, the same public/private key should be used to keep everything simple.

Once GGCS is made available for trial (currently it is only available through the sales team), many of us will have the opportunity to play with it. The following screen captures and comments were taken from the interface I had access to while discussing GGCS with Oracle Product Management.

Like any of the other cloud services from Oracle, once you have access to GGCS it will appear in your dashboard as available cloud services. In the figure below, GGCS is listed at the top of the services that I had access to. You will notice over on the right, there is a link called “service console”.

When you click on the service console link, you are taken to the console that is specific to GGCS. On the left hand side of the console, you will see three primary areas. The “overview” area is the important one; it provides you with all the information needed about your GGCS environment. You will see the host and port number, what version of GGCS you are running and the status of your environment.

With the environment up and running, you will want to create a new GGCS instance. This instance is created under your cloud service console. On this screen you are given information that tells you how many instances you have running with the number of OCPUs, Memory and storage for the configuration along with the public IP address. Notice the button to the right, just below Public IPs, this is the button that allows you to create a new GGCS instance. In the figure below, the instance has already been created.

Drilling down into the instance, you are taken to a page that illustrates your application nodes for GGCS. Notice that the GGCS instance actually created a compute node VM to run GoldenGate from.

With everything configured from the Oracle Cloud interface, you can now access the cloud server using the details provided (I do not have current screen shots of this). Once you access the cloud server, you will find that Oracle GoldenGate has been configured for you along with a TNS entry that points to a “target” location. These items are standard template items from which you can build your GoldenGate environment. The interesting thing about this configuration is that Oracle is providing a single virtual machine (compute node) that will handle all the apply processes against a target database (another compute node).

With the GGCS service running, you are then ready to build out your GoldenGate environment.

Like many other GoldenGate architectures, you build out the source side of the architecture like anything else: you install the GoldenGate software and build an extract, trail files and a data pump. The data pump process is then pointed to the GoldenGate Cloud Service (GGCS) instance instead of the target instance, and the local trail files will be shipped to the GGCS machine. Once on the GGCS instance, the replicat needs to be configured. Part of the configuration of the replicat at this point is updating the tnsnames.ora file to point to the correct “target” compute node/database instance. The picture below illustrates this concept.

You will notice that GGCS is set up to be an intermediary point in the cloud. This allows you to be flexible with your GoldenGate architecture in the cloud. From a single GGCS service you can run multiple replicats that point to multiple different cloud compute nodes, turning your GGCS into a hub that can send data to multiple cloud resources.
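
As a rough sketch, the data pump parameter file on the source side would look something like the following – the group name, port and schema here are illustrative rather than taken from an actual GGCS configuration:

EXTRACT pmpggcs
RMTHOST <ggcs-public-ip>, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE hr.*;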

In talking with the Oracle Product team about GGCS, the only downside to GGCS right now is that it cannot be used for a bi-directional setup or for pulling data from the cloud. In essence, this is a uni-directional setup that can help you move from on-premise to cloud with minimal configuration needed.

Well, this is my take on GGCS as of right now. Once GGCS trials are available, I’ll try to update this post or add more posts on this topic. Until then, I hope you have gained a bit of information on this topic and are looking forward to using GGCS.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Cloud, Golden Gate
Categories: DBA Blogs

Oracle Buys Opower

Oracle Press Releases - Mon, 2016-05-02 06:12
Press Release
Oracle Buys Opower Together, Oracle and Opower Will Become the Largest Provider of Mission-Critical Cloud Services to the $2.3 Trillion Utilities Industry

Redwood Shores, Calif.—May 2, 2016

Oracle (NYSE: ORCL) today announced that it has entered into a definitive agreement to acquire Opower (NYSE: OPWR), the leading provider of customer engagement and energy efficiency cloud services to utilities, for $10.30 per share in cash. The transaction is valued at approximately $532 million, net of Opower’s cash.

Opower’s solutions enable over 100 global utilities, such as PG&E, Exelon and National Grid, to deliver a modern digital customer experience. Opower’s big data platform stores and analyzes over 600 billion meter reads from 60 million utility end customers, enabling utilities to proactively meet regulatory requirements, decrease the cost to serve, and improve customer satisfaction.

“Utilities want modern technology solutions that work together to meet their evolving customer, operational and compliance needs,” said Rodger Smith, Senior Vice President and General Manager, Oracle Utilities Global Business Unit. “Together, Oracle Utilities and Opower will be the largest provider of mission-critical cloud services to utilities.”

“The combination will provide the industry with the most modern, complete cloud applications for the entire utility value chain, from meter to grid to end-customers," said Dan Yates, Chief Executive Officer and Co-Founder, Opower. "We are excited to join Oracle and to bring even more value to our customers as part of the Oracle Utilities Industry Cloud Platform."

The Board of Directors of Opower has unanimously approved the transaction. The transaction is expected to close in 2016, subject to Opower’s stockholders tendering a majority of Opower’s outstanding shares and derivative securities exercised prior to the closing of the tender offer, certain regulatory approvals and other customary closing conditions.

More information about this announcement is available at www.oracle.com/opower.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Cautionary Statement Regarding Forward-Looking Statements
This document contains certain forward-looking statements about Oracle and Opower, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of Opower, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or Opower, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business, and the possibility that Oracle or Opower may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or Opower.

In addition, please refer to the documents that Oracle and Opower, respectively, file with the U.S. Securities and Exchange Commission (the “SEC”) on Forms 10-K, 10-Q and 8-K. These filings identify and address other important factors that could cause Oracle’s and Opower’s respective operational and other results to differ materially from those contained in the forward-looking statements set forth in this document. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor Opower is under any duty to update any of the information in this document.

Oracle is currently reviewing the existing Opower product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle's review of Opower’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by Opower or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decision. It is intended for information purposes only, and may not be incorporated into any contract.

Additional Information about the Acquisition and Where to Find It
In connection with the proposed acquisition, Oracle will commence a tender offer for the outstanding shares of Opower. The tender offer has not yet commenced. This document is for informational purposes only and is neither an offer to purchase nor a solicitation of an offer to sell shares of Opower, nor is it a substitute for the tender offer materials that Oracle and its acquisition subsidiary will file with the SEC upon commencement of the tender offer. At the time the tender is commenced, Oracle and its acquisition subsidiary will file tender offer materials on Schedule TO, and Opower will file a Solicitation/Recommendation Statement on Schedule 14D-9 with the SEC with respect to the tender offer. The tender offer materials (including an Offer to Purchase, a related Letter of Transmittal and certain other tender offer documents) and the Solicitation/Recommendation Statement will contain important information. Holders of shares of Opower are urged to read these documents when they become available because they will contain important information that holders of Opower securities should consider before making any decision regarding tendering their securities. The Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, will be made available to all holders of shares of Opower at no expense to them. The tender offer materials and the Solicitation/Recommendation Statement will be made available for free at the SEC’s web site at www.sec.gov.

In addition to the Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, Oracle and Opower file annual, quarterly and special reports and other information with the SEC. You may read and copy any reports or other information filed by Oracle or Opower at the SEC public reference room at 100 F Street, N.E., Washington, D.C. 20549. Please call the Commission at 1-800-SEC-0330 for further information on the public reference room. Oracle’s and Opower’s filings with the SEC are also available to the public from commercial document-retrieval services and at the website maintained by the SEC at http://www.sec.gov.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

FBDA -- 7 : Maintaining Partitioned Source Table

Hemant K Chitale - Mon, 2016-05-02 02:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME   HIGH_VALUE                  NUM_ROWS
---------------- ------------------------- ----------
P_100            101                              100
P_200            201                              100
P_300            301                              100
P_400            401                              100
P_MAX            MAXVALUE                         301

SQL>


Let's try a TRUNCATE PARTITION

SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.

SQL>


So, that's supported.

Let's try a SPLIT PARTTIION

SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


SQL>


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 DISASSOCIATED


SQL>
SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 ENABLED


SQL>


While a table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be done, under strict control, to ensure that there is no data divergence.

.
.
.
Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Hemant K Chitale - Mon, 2016-05-02 02:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/
2 3 4 5 6 7 8 9

Table created.

SQL>
SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
2 3
SQL> /

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22349

SQL>
SQL>
SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22626

SQL>
SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/
2 3 4 5 6 7 8
EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.

SQL>


Aha! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484), with a larger row size the HWM has moved up from 22,349 to 22,626.

So, space consumption is a function of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS BLOCKS
--------------- ----------
22558.6553 22626

SQL>

Also, see how the "Expected Blocks" count seems more accurate than earlier.
.
.
.



Categories: DBA Blogs

Getting a C++11 compiler for Node 4, 5 and 6 on Oracle Linux 6

Christopher Jones - Sun, 2016-05-01 23:36

A newer compiler is needed on Oracle Linux 6 when you want to use add-ons like node-oracledb with Node 4 or later. This is because add-ons for those versions need to be built with a C++11 compatible compiler. The default compiler on OL 6 doesn't have this support. OL 7 does have such a compiler, so these instructions are not needed for that version.

For OL 6 the easiest way to get a new compiler is from the Software Collection Library (SCL). You enable the software collection yum channel, run a yum install command, and then the compiler is immediately available to use. Detailed SCL installation instructions are in the manual.

The steps below show how to install node-oracledb on Oracle Linux 6 for Node.js 4 or later.

Enabling the Software Collection Library

If you are using yum.oracle.com (formerly known as public-yum.oracle.com) then edit /etc/yum.repos.d/public-yum-ol6.repo and enable the ol6_software_collections channel:

  [ol6_software_collections]
  name=Software Collection Library release 1.2 packages for Oracle Linux 6 (x86_64)
  baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/SoftwareCollections12/x86_64/
  gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
  gpgcheck=1
  enabled=1

If necessary, you can get the latest channel list from http://yum.oracle.com/public-yum-ol6.repo and merge any updates from it into your existing /etc/yum.repos.d/public-yum-ol6.repo file.

Alternatively, if your machine has a ULN support subscription, you can subscribe to the Oracle Software Collections 1.2 for Oracle Linux 6 channel in the Manage Subscription page on linux.oracle.com.

Installing the Updated Compiler

Once the channel is enabled, install the updated compiler with:

  yum install scl-utils devtoolset-3

This will install a number of packages that comprise the complete, updated tool set.

Installing node-oracledb

Installing node-oracledb on Node 4 (or later) is the same as in the install instructions, but using the new compiler. The Oracle Linux manual chapter Using the Software Collection Version of a Command shows various ways to enable the dev toolset.

In summary, to install node-oracledb on Node 4 or later using Oracle Linux 6, first install an Oracle client such as Instant Client. If you have anything except the Instant Client RPM packages, tell the installer where the libraries and header files are located, for example:

  export OCI_LIB_DIR=$HOME/instantclient
  export OCI_INC_DIR=$HOME/instantclient/sdk/include

If you are behind a firewall, set your proxy:

  export http_proxy=http://my.proxy.example.com:80/

In my development environments I often find some cleanup helps:

  which npm && rm -rf $(npm root)/oracledb $(npm root)/nan $HOME/.node-gyp $HOME/.npm \
        && npm cache clean

Now node-oracledb can be installed using the newer compiler:

  scl enable devtoolset-3 -- npm install oracledb
Using Node

Now you can use Node:

  $ node version.js 
  node.js version: v4.4.3
  node-oracledb version: 10900
  node-oracledb text format: 1.9.0
  oracle client library version: 1201000200
  oracle client library text format: 12.1.0.2.0
  oracle database version: 1201000200
  oracle database text format: 12.1.0.2.0

  $ cat /etc/oracle-release 
  oracle linux server release 6.7

Modern CX 2016

Oracle AppsLab - Sun, 2016-05-01 20:31

Last week, we were back in Las Vegas again for the Oracle Modern Customer Experience Conference!  Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customer experience, marketing and data security.

We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their leisure.

The event was a hit! The two-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session left only at the very last minute, when we had to close down.

Experiencing these demos led to some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:

  • How will sales, marketing, customer service, and commerce change for the next generation?
  • What does the next generation expect from their employers?
  • Are current employers truly modern and using the latest technology solutions?

Fantastic panel with UNLV students @NucleusResearch @theappslab great UX = more prod work. pic.twitter.com/xg8RoF3cr5

— Erin Killian Evers (@keversca) April 28, 2016

Great thoughts from @unlv milleanials on #smartoffice opportunities in the workforce with @theappslab #SalesX16 pic.twitter.com/5T4CBfJ47q

— Gozel Aamoth (@gozelaamoth) April 28, 2016

While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in coming posts!

DISTRIBUTED mode deprecated

Anthony Shorten - Sun, 2016-05-01 18:50

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. The DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular but suffered from a number of restrictions. Given that the flexibility of the batch architecture was expanded in newer releases, it was decided to deprecate DISTRIBUTED mode to encourage more effective use of the architecture.

It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

  • For customers on non-production environments, it is recommended to use CLUSTERED mode with the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configuration of a full clustered environment. It is restricted to single host servers, so it is not typically recommended for production or for clustered environments that use more than one host server.
  • For customers on production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This will allow flexible configuration without the use of multi-cast, which can be an issue on some implementations using CLUSTERED mode. The advantage of Batch Edit is that it has a simple interface to allow you to define this configuration without too much fuss.

The advantage of Batch Edit when building your new batch configurations is that it is simple to use and generates an optimized set of configuration files that can be used directly by the batch architecture. To use the new architecture, remove the DISTRIBUTED tags from the job command lines or configuration files.

Customers should read the Batch Best Practices (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.

Why I am a Dostoevskyan Humanist

Greg Pavlik - Sun, 2016-05-01 17:11
An explanation in 5 parts, by reference to the works of those who were not.*

'Lo! I show you the Last Man.

"What is love? What is creation? What is longing? What is a star?" -- so asks the Last Man, and blinks.

The earth has become small, and on it hops the Last Man, who makes everything small. His species is ineradicable as the flea; the Last Man lives longest.

"We have discovered happiness" -- say the Last Men, and they blink.

They have left the regions where it is hard to live; for they need warmth. One still loves one's neighbor and rubs against him; for one needs warmth.

Turning ill and being distrustful, they consider sinful: they walk warily. He is a fool who still stumbles over stones or men!

A little poison now and then: that makes for pleasant dreams. And much poison at the end for a pleasant death.

One still works, for work is a pastime. But one is careful lest the pastime should hurt one.

One no longer becomes poor or rich; both are too burdensome. Who still wants to rule? Who still wants to obey? Both are too burdensome.

No shepherd, and one herd! Everyone wants the same; everyone is the same: he who feels differently goes voluntarily into the madhouse.

"Formerly all the world was insane," -- say the subtlest of them, and they blink.

They are clever and know all that has happened: so there is no end to their derision. People still quarrel, but are soon reconciled -- otherwise it upsets their stomachs.

They have their little pleasures for the day, and their little pleasures for the night, but they have a regard for health.

"We have discovered happiness," -- say the Last Men, and they blink.'
Friedrich Nietzsche: Thus Spoke Zarathustra



The Body of the Dead Christ in the Tomb, Hans Holbein

Now, did He really break the seal
And rise again?
We dare not say….
Meanwhile, a silence on the cross
As dead as we shall ever be,
Speaks of some total gain or loss,
And you and I are free
Auden, Friday’s Child

“Wherever an altar is found, there is civilization."
Joseph de Maistre

“All actual life is encounter.”
Martin Buber, I and Thou

* model for composition stolen gratuitously from an online challenge.

Accessing Fusion Data from BI Reports using Java

Angelo Santagata - Sat, 2016-04-30 04:57
Introduction

In a recent article on A-Team Chronicles, Richard Williams explained how you can execute a BI Publisher report from a SOAP service and retrieve the report, as XML, as part of the response to the SOAP call. This article is a follow-on, providing a tutorial-style walkthrough of how to implement the above procedure in Java.

This article assumes you have already followed the steps in Richard's blog article and created your report in BI Publisher, exposed it as a SOAP Service and tested this using SOAPUI, or another SOAP testing tool.

Following Richard's guidance, we know the correct SOAP call could look like this:

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
   <soap:Header/>
   <soap:Body>
      <pub:runReport>
         <pub:reportRequest>
            <pub:reportAbsolutePath>/~angelo.santagata@oracle.com/Bi report.xdo</pub:reportAbsolutePath>
            <pub:reportRawData xsi:nil="true" >true</pub:reportRawData>
            <pub:sizeOfDataChunkDownload>-1</pub:sizeOfDataChunkDownload>
            <pub:flattenXML>true</pub:flattenXML>
            <pub:byPassCache>true</pub:byPassCache>
         </pub:reportRequest>
         <pub:appParams/>
      </pub:runReport>
   </soap:Body>
</soap:Envelope>

Tip: One easy way to determine the report's location is to run the report and then examine the URL in the browser.

 

Implementing the SOAP call using JDeveloper 11g

We now need to implement the Java SOAP client that calls our SOAP service. For this blog we will use JDeveloper 11g, the IDE recommended for extending Oracle Fusion; you are free to use your IDE of choice (e.g. NetBeans, Eclipse, vi, Notepad), but the steps will obviously be different.

Creating the project

Within JDeveloper 11g, start by creating a new Application and within this application create two generic projects. Call one project “BISOAPServiceProxy” and the other “FusionReportsIntegration”. The "BISOAPServiceProxy" project will contain a SOAP proxy we are going to generate from JDeveloper 11g, and the "FusionReportsIntegration" project will contain our custom client code. It is good practice to keep the SOAP proxy in its own separate project; this allows us to regenerate the proxy from scratch without affecting any other code.

Generating the SOAP Proxy

For this example we will be using the SOAP Proxy wizard as part of JDeveloper. This functionality generates a static proxy for us, which in turn makes it easier to generate the required SOAP call later.

  1. With the BISOAPServiceProxy project selected, start the JDeveloper SOAP Proxy wizard:
     File -> New -> Business Tier -> Web Services -> Web Service Proxy
     Proxy1
  2. Click Next to skip the welcome screen.
  3. In the next step select JAX-WS Style as the type of SOAP proxy you wish to generate, then enter the WSDL of your Fusion Applications BI Publisher web service. It’s best to check that this URL returns a WSDL document in your web browser before entering it here. The WSDL location will normally be something like: http://<your Fusion Applications Server>/xmlpserver/services/ExternalReportWSSService?wsdl
     Proxy2
     It's recommended that you leave the "copy WSDL into project" check-box selected.
  4. Give a package name; unless you need otherwise, it's recommended to leave the Root Package for generated types blank.
     proxy3
  5. Now hit Finish.
Fixing the project dependencies

We now need to make sure that the “FusionReportsIntegration” project is able to see the classes generated by the “BISOAPServiceProxy” project. To achieve this in JDeveloper we simply need to set up a dependency between the two projects.

  1. With the FusionReportsIntegration project selected, right-click on the project and select “Project Properties”.
  2. In the properties panel select Dependencies.
  3. Select the little pencil icon and in the resulting dialog select “Build Output”. This selection tells JDeveloper that “this project depends on the successful build output” of the other project.
  4. Save the dialog.
     dependancies1
  5. Close [OK] the Project Properties dialog.
  6. Now is a good time to hit compile and make sure the SOAP proxy compiles without any errors; given we haven't written any code yet, it should compile just fine.
Writing the code to execute the SOAP call

With the SOAP proxy generated and the project dependency set up, we’re now ready to write the code that will call the BI server using the generated SOAP proxy.

  1. With the FusionReportsIntegration project selected, right-click -> New -> Java -> Java Class.
     javacode
  2. Enter a name, and a Java package name, for your class.
  3. Ensure that “Main Method” is selected. This is so we can execute the code from the command line; you will want to change this depending on where you execute your code from (e.g. a library, a servlet, etc.).
  4. Within the main method you will need to enter the following code snippet. Once the snippet is pasted you will need to correct and resolve the imports for your project.
    1.	ExternalReportWSSService_Service externalReportWSSService_Service;
    2.	// Initialise the SOAP Proxy generated by JDeveloper based on the following WSDL xmlpserver/services/ExternalReportWSSService?wsdl
    3.	externalReportWSSService_Service = new ExternalReportWSSService_Service();
    4.	// Set security policies to reflect your Fusion Applications setup
    5.	SecurityPoliciesFeature securityFeatures = new SecurityPoliciesFeature(new String[]
    6.	{ "oracle/wss_username_token_over_ssl_client_policy" });
    7.	// Initialise the SOAP endpoint
    8.	ExternalReportWSSService externalReportWSSService = externalReportWSSService_Service.getExternalReportWSSService(securityFeatures);
    9.	// Create a new binding; this example hardcodes the username/password,
    10.	// the recommended approach is to store the username/password in a CSF keystore
    11.	WSBindingProvider wsbp = (WSBindingProvider)externalReportWSSService;
    12.	Map<String, Object> requestContext = wsbp.getRequestContext();
    13.	// Map to the appropriate Fusion user ID and password
    14.	requestContext.put(WSBindingProvider.USERNAME_PROPERTY, "username");
    15.	requestContext.put(WSBindingProvider.PASSWORD_PROPERTY, "password");
    16.	requestContext.put(WSBindingProvider.ENDPOINT_ADDRESS_PROPERTY, "https://yourERPServer:443/xmlpserver/services/ExternalReportWSSService");

    17.	// Create a new ReportRequest object using the generated ObjectFactory
    18.	ObjectFactory of = new ObjectFactory();
    19.	ReportRequest reportRequest = of.createReportRequest();
    20.	// reportAbsolutePath contains the path+name of your report
    21.	reportRequest.setReportAbsolutePath("/~angelo.santagata@oracle.com/Bi report.xdo");
    22.	// We want raw data
    23.	reportRequest.setReportRawData("");
    24.	// Get all the data
    25.	reportRequest.setSizeOfDataChunkDownload(-1); 
    26.	// Flatten the XML response
    27.	reportRequest.setFlattenXML(true);
    28.	// Bypass the cache to ensure we get the latest data
    29.	reportRequest.setByPassCache(true);
    30.	// Run the report
    31.	ReportResponse reportResponse = externalReportWSSService.runReport(reportRequest, "");
    32.	// Display the output; note the response is an array of bytes, you can convert this to a String
    33.	// or you can use a DocumentBuilder to put the values into an XML Document object for further processing
    34.	System.out.println("Content Type="+reportResponse.getReportContentType());
    35.	System.out.println("Data ");
    36.	System.out.println("-------------------------------");
    37.	String data=new String (reportResponse.getReportBytes());
    38.	System.out.println(data);
    39.	System.out.println("-------------------------------");
Going through the code

  • Lines 1-3: Instantiation of a new class containing the web service proxy object, which was generated for us earlier.
  • Line 5: Initialises a new instance of a security policy object with the correct security policy for your Oracle Fusion server. The most common security policy is “oracle/wss_username_token_over_ssl_client_policy”; however, your server may be set up differently.
  • Line 8: Calls the factory method to initialise a SOAP endpoint with the correct security features set.
  • Lines 9-16: These lines set up the SOAP binding so that it knows which endpoint to call (i.e. the hostname+URI of your web service, which is not necessarily the endpoint where the SOAP proxy was generated), the username and the password. In this example we are hard-coding the details because we are going to be running this example on the command line. If this code is to be executed on a JEE server, e.g. WebLogic, then we recommend this data is stored in the credential store as CSF keys.
  • Lines 17-19: Here we create a reportRequest object and populate it with the appropriate parameters for the SOAP call. Although not mandatory, it's recommended that you use the ObjectFactory generated by the SOAP proxy wizard in JDeveloper.
  • Line 21: Sets the reportAbsolutePath parameter, including the path to the report.
  • Line 23: Ensures we get the raw data without decoration, layouts etc.
  • Line 25: By default BI Publisher publishes data on a range basis, e.g. 50 rows at a time; for this use case we want all the rows, and setting this to -1 will ensure this.
  • Line 27: Tells the web service to flatten out the XML which is produced.
  • Line 29: An optional flag which instructs the BI server to bypass the cache and go direct to the database.
  • Lines 30-31: Executes the SOAP call, passing the reportRequest object we previously populated as a parameter. The return value is a reportResponse object.
  • Lines 34-39: These lines print out the results from the BI server. Of note is that the XML document is returned as a byte array. In this sample we simply print the results to the output; however, you would normally pass the resulting XML into Java routines to build an XML Document object.
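
As a hedged illustration of that last step, here is a minimal sketch of a helper that turns the returned byte array into a DOM Document for further processing. The class and method names are ours, not part of the generated proxy; only reportResponse.getReportBytes() comes from the snippet above.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class ReportXmlHelper {

        // Parses the byte[] returned by runReport() into a DOM Document.
        public static Document toDocument(byte[] reportBytes) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            return dbf.newDocumentBuilder().parse(new ByteArrayInputStream(reportBytes));
        }
    }

You could then call ReportXmlHelper.toDocument(reportResponse.getReportBytes()) in place of the String conversion on line 37 and walk the resulting DOM with standard JAXP APIs.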

 

 

Because we are running this code from the command line as a Java client, we need to import the Fusion Apps certificate into the Java key store. If you run the code from within JDeveloper, the Java key store used is <JDeveloperHome>\wlserver_10.3\server\lib\DemoTrust.jks (a programmatic alternative for pointing the JVM at this trust store is sketched after the import steps below).

Importing certificates

 

  1. Download the Fusion Applications SSL certificate: using a browser such as Internet Explorer, navigate to the SOAP WSDL URL.
  2. Click on the security icon, which will bring you to the certificate details.
  3. View the certificate.
  4. Export the certificate as a CER file.
  5. From the command line, import the certificate into our DemoTrust.jks file using the following command:

     keytool -import -alias fusionKey -file fusioncert.cer -keystore DemoTrust.jks

jks
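
If you would rather not rely on the JVM defaults when running outside JDeveloper, one illustrative alternative is to point the JVM at the trust store programmatically before the proxy is initialised. This is only a sketch; the path below is an assumption based on a default JDeveloper 11g install and should be adjusted for your environment.

    // Illustrative only: add at the top of main(), before creating the SOAP proxy.
    // Points the JVM at the trust store that now holds the Fusion Applications
    // certificate; the path is an assumption for a default JDeveloper 11g install.
    System.setProperty("javax.net.ssl.trustStore",
            "<JDeveloperHome>/wlserver_10.3/server/lib/DemoTrust.jks");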

Now ready to run the code!

With the runReport.java file selected, press the “Run” button. If all goes well, the code will execute and you should see the XML result of the BI report displayed on the console.

 

Speakers: Put your Twitter Handle on the Windows taskbar!

The Oracle Instructor - Sat, 2016-04-30 04:56

If you speak often at conferences, sharing your screen to demo things, this could be helpful:

Twitter Handle on the Windows taskbar

Throughout your presentation, the audience will be able to see your Twitter handle, reminding them to include it in tweets about the event. I used to include it in the slides, but this is better because it also works during live demonstrations where no slides are being shown, which is incidentally my favorite way to present :-)

Now how can you do it? Quite easily: open the Windows Control Panel and click on Region and Language, then click on Additional settings:

Region and Language 1

Then insert your Twitter handle (or any other text you would like to see on the taskbar) as the AM and PM symbols. Make sure to select time formats with a trailing tt:

Region and Language 2

That’s it. If you want the font size as large as on the first picture above, that can be done here:

twitterhandel_taskbar4

I did that with Windows 7 Professional 64-bit. Hope you find it useful :-)


Tagged: speaker tip
Categories: DBA Blogs

Loading Data into Oracle Cloud ERP R10 using the new LoadAndImportData operation

Angelo Santagata - Sat, 2016-04-30 04:54

 

Introduction

As part of Oracle ERP Cloud Release 10, a new SOAP operation has been made available to our customers which greatly simplifies the loading of ERP data using the batch-oriented SOAP services.

This article aims to give the reader details of this new SOAP service and how it helps in loading data files into Oracle ERP Cloud.

Assuming the input file has already been produced, loading the data into the Oracle ERP Cloud service is traditionally a multi-step process.

The typical "happy" path is :

  1. Load the file into the Oracle Fusion ERP UCM service.
  2. Execute the first ESS job, which transfers the file from UCM to the Oracle ERP interface tables.
  3. Using a polling technique, check to see when the ESS job has finished transferring the file into the interface tables.
  4. Execute a second ESS job, which transfers the data from the Oracle ERP interface tables to the Oracle ERP data object tables.
  5. Use a polling technique to check to see when the file has been processed.
  6. Finally, call the downloadESSJobExecutionDetails() operation to download a log file so you can check for success, or for any errors which need dealing with.

Whilst this approach appears attractive, as it allows the developer a great deal of control over the process, in truth this internal processing is something the SaaS application [Oracle ERP Cloud] should manage itself, providing feedback to the developer when things finish.

New SOAP method in R10

As of Oracle ERP Cloud Release 10 there is a new operation called "loadAndImportData", which is held within the ErpIntegrationService ( https://(FinancialDomain,Financial Common)/publicFinancialCommonErpIntegration/ErpIntegrationService?WSDL). This operation has been specifically created to simplify the loading of data into the Oracle ERP Cloud service by allowing you to submit a file which is then automatically taken through the various stages of processing within Oracle ERP Cloud, without the user needing to execute each step of the process manually.

The operation takes the following parameters :

  • document (Document Information SDO): List of elements, each containing the details of the file to be uploaded. The details include the file content, file name, content type, file title, author, security group, and account.
  • jobList (Process Details SDO): List of elements, each containing the details of the Enterprise Scheduling Service job to be submitted to import and process the uploaded file. The details include the job definition name, job package name, and list of parameters.
  • interfaceDetails (string): The interface whose data is to be loaded.
  • notificationCode (string): A two-digit number that represents the manner and timing in which a notification is sent.
  • callbackURL (string): The callback URL of the service implemented by customers to receive the Enterprise Scheduling Service job status on completion of the job.

 

Diving into the Details

A sample SOAP payload, which imports journal records, looks like the following:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/" xmlns:erp="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:loadAndImportData>
         <typ:document>
            <erp:Content>  UEsDBBQAAAAIAMG2b0hvJGqkiAAAAKsBAAAPAAAAR2xJbnRlcmZhY2UuY3N2tY+xDoJADIZ3E9+hD9BIexojIwQWB0wU49yQqgMcyYnv7wELiYEwQIe2/9+mzZelD2Q0xMeA9gExxlKKLRRyJ/bzVIdXrepmoO+3ZLgfoU9EhGzYtA11ajTCcNePc5UKIuhKLE3xPnnz4r+8FM7111kpW+c/eOp8miXbzXJQB2aeAQU91rpUP1BLAQIUABQAAAAIAMG2b0hvJGqkiAAAAKsBAAAPAAAAAAAAAAAAIAAAAAAAAABHbEludGVyZmFjZS5jc3ZQSwUGAAAAAAEAAQA9AAAAtQAAAAAA</erp:Content>
            <erp:FileName>LoadGLData1.zip</erp:FileName>
            <erp:ContentType>zip</erp:ContentType>
            <erp:DocumentTitle>ImportJournalEntry</erp:DocumentTitle>
            <erp:DocumentAuthor></erp:DocumentAuthor>
            <erp:DocumentSecurityGroup>FAFusionImportExport</erp:DocumentSecurityGroup>
            <erp:DocumentAccount>fin$/journal$/import$</erp:DocumentAccount>
         </typ:document>
         <typ:jobList>
      
           <erp:JobName>oracle/apps/ess/financials/generalLedger/programs/common,JournalImportLauncher</erp:JobName>
           <erp:ParameterList>1061,Balance Transfer,1,123,N,N,N</erp:ParameterList>
         </typ:jobList>
         <typ:interfaceDetails>15</typ:interfaceDetails>
         <typ:notificationCode>50</typ:notificationCode>
         <typ:callbackURL>http://somecallbackserver.domain.com/mycallback</typ:callbackURL>
      </typ:loadAndImportData>
   </soapenv:Body>
</soapenv:Envelope>

Now let's dive into each element and explain what it represents and, more importantly, where you derive the data from:

  • document: This element contains the details of the document to be uploaded.
    • content: This is the document itself, base64 encoded and inlined in the SOAP payload. There are many tools on the internet to base64-encode a document, and in Java there is a helper, Base64.Encoder, which does this for you (a minimal sketch is shown after this list).
    • contentType: This value should be set to "zip", which means your files must be zipped before being base64-encoded and inlined above.
    • documentTitle: A title for the document, so you can find it in UCM later if you need to.
    • documentSecurityGroup: Needs to be set to a security group that secures the document; for our example we've used FAFusionImportExport.
    • documentAccount: This needs to be set to the correct account depending on the data which is being loaded. For our journal import we need to set the account to fin$/journal$/import$. This is the same account used when you "manually" upload files into Oracle ERP for loading. If you don't know what UCM account your data should be loaded into, you can find it by going into the File Based Data Import for Financials Cloud documentation and searching for your data object. In our case the object is "Journal Import" and the documentation states that the UCM account is fin/journal/import; for our SOAP service we suffix each path segment with a "$".
  • jobList: This element contains data describing the job which needs to be executed for this batch upload.
    • jobName: This is the "package name" of the ESS job which loads the data into Oracle ERP Cloud. You can find this in FusionAppsOER or in the documentation. The format for the field is "packageName,jobName".
    • parameterList: This is the list of parameters which the job requires to execute. The parameters depend on the ESS job being executed; in our case the ESS job is for journals and the parameters are the Data Access Set ID, Source (Balance Transfer), Ledger ID, Group ID (aka Batch ID), etc.
      journalimport — example from FusionOER
  • interfaceDetails: This is set to 15 for journals (no longer needed in R11).
  • notificationCode: This is set to 50 (no longer needed in R11).
  • callBackURL: The magic about this service is that it executes all of the ESS jobs in the background and then executes a callback to your service when it has finished. The response contains the "last" ESS job ID executed, so you can then query the status of the jobs using the downloadESSJobExecutionDetails method.
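
As promised above, here is a minimal sketch (our own helper, not part of the ErpIntegrationService proxy) of how the content element could be produced in Java: it zips a single data file in memory and base64-encodes the result using java.util.Base64 (Java 8+). The file name GlInterface.csv is illustrative only.

    import java.io.ByteArrayOutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Base64;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class ErpPayloadEncoder {

        // Zips a single data file in memory and returns the base64 string
        // to place inside the <erp:Content> element of the SOAP payload.
        public static String zipAndEncode(Path dataFile) throws Exception {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ZipOutputStream zip = new ZipOutputStream(buffer)) {
                zip.putNextEntry(new ZipEntry(dataFile.getFileName().toString()));
                zip.write(Files.readAllBytes(dataFile));
                zip.closeEntry();
            }
            return Base64.getEncoder().encodeToString(buffer.toByteArray());
        }

        public static void main(String[] args) throws Exception {
            // GlInterface.csv is an illustrative file name for the journal import data.
            System.out.println(zipAndEncode(Paths.get("GlInterface.csv")));
        }
    }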

 

Handling the callback
  • As mentioned earlier, the loadAndImportData operation does all the heavy lifting and orchestration within Oracle Fusion ERP SaaS. The only thing the developer needs to implement (optional, but very desirable) is a web service endpoint which handles the callback generated by the ESS framework. This service needs to implement the ESS onJobCompletion operation, which delivers three pieces of data: the request ID of the ESS job which completed, the state of the process, and a status message (a hypothetical skeleton is sketched below). For more information on handling the ESS callbacks please see this documentation link, and additionally, if you are using BPEL to execute the SOAP service, then this documentation link may be of interest (Section 11.7.7: Receive the Job Completion Status).
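
To make the callback idea concrete, here is a hypothetical JAX-WS skeleton. The operation and parameter names are illustrative placeholders for the three values described above; the real contract is defined by the ESS callback WSDL referenced in the documentation links, so generate your endpoint from that WSDL rather than copying this verbatim.

    import javax.jws.WebMethod;
    import javax.jws.WebParam;
    import javax.jws.WebService;

    // Hypothetical sketch of a callback endpoint; names are illustrative,
    // not the actual ESS contract.
    @WebService
    public class ErpJobCompletionListener {

        @WebMethod
        public void onJobCompletion(@WebParam(name = "requestId") String requestId,
                                    @WebParam(name = "state") String state,
                                    @WebParam(name = "statusMessage") String statusMessage) {
            // Typically: correlate requestId with the batch you submitted, then call
            // downloadESSJobExecutionDetails to fetch the execution log for that job.
            System.out.println("ESS job " + requestId + " completed with state " + state
                    + ": " + statusMessage);
        }
    }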

 

Conclusion

The new loadAndImportData operation will most certainly make importing data into Oracle ERP a much simpler process. Its biggest advantage is that developers can trigger the import with a single SOAP call, without the need to worry about orchestration. There are, however, scenarios where you would probably still use the traditional step-by-step method, for example when you want to notify external providers as each step completes at the macro level, or when the import file is very large (>100 MB). In the latter case you might want to upload the file into Oracle UCM using UCM's native IdcWebService, which supports MTOM, and then execute the ESS jobs in order as we have traditionally done.

 

 
