Feed aggregator

Advanced Queuing & PL/SQL Notification & Oracle User

Tom Kyte - Wed, 2016-08-03 19:46
I have a queue and I registered a PL/SQL routine for this queue. When the routine is executed it runs under user SYS. I need the procedure to run under my user (e.g. ICIS_SERVICES). How can I achieve this? Here is my code example: =====...
Categories: DBA Blogs

INACTIVE sessions keeping a handle on CPU

Tom Kyte - Wed, 2016-08-03 19:46
Hi! We had an issue recently where there was a spike in CPU, and this caused an issue for the end users. We got it sorted by killing off some sessions that were running that shouldn't have been. This fixed the issue, and we are working on the root caus...
Categories: DBA Blogs

Need some help developing sql analytical query to find consecutive rows changing between two values

Tom Kyte - Wed, 2016-08-03 19:46
Hello, I have master table having information for Trouble_Ticket_ID, Customrer_Circuit_ID,Location and details table B having the following information:Trouble_Ticket_ID,User_ID,Status, Newgroup. I need to calculate the no.of time the ticket h...
Categories: DBA Blogs

SQLcl to the rescue when the Database and APEX fail (with JSON limitations)

Dimitri Gielis - Wed, 2016-08-03 17:29
In the last two years I've been using JSON in almost every project I was involved in.
For example with APEX Office Print our plugin is sending JSON to the backend. This JSON you can actually see yourself in the Remote Debug screen in your AOP Dashboard.
Another example is the wearables project (IoT) I showed at KScope 16; the wearable sends data to a smartphone or tablet, which in its turn calls our backend (in ORDS), sending JSON across.

At the end of the day we want the data in the Oracle Database, so our APEX apps can work with that data.

Since Oracle DB 12c, JSON is supported straight from the database. I wrote a number of blog posts on how to read JSON from SQL within the database. Here's a quick demo of JSON in the database:

SQL> create table tbl_with_json (
  2    json_clob  clob, 
  3    constraint json_clob_chk check (json_clob is json)
  4  );

Table TBL_WITH_JSON created.

SQL> 
SQL> insert into tbl_with_json (json_clob) values ('{
  2      "items": [{
  3          "client_id": -1,
  4          "registration_date": "2016-07-29T07:46:09.941Z",
  5          "question": "My Question",
  6          "description": "My huge clob"
  7      }]
  8  }');

1 row inserted.

SQL> 
SQL> select a.json_clob.items.question as question, a.json_clob.items.description as description 
  2    from tbl_with_json a;

QUESTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
DESCRIPTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
My Question                                                                                                                                                           
My huge clob                                                                                                                                                          




Now the reason for this blog post: what if your JSON contains some very big text (>32K) in a single node, e.g. in the description field? 

If you want to follow along in your own test case, open the description record in SQL Developer, for example, and paste a large text (>32K) into the description node (so replace "My huge clob" with some other big text). Tip: for my test cases I typically use a Lorem Ipsum generator where I can specify the number of characters, for example 33,000 characters.
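In case you don't have a Lorem Ipsum generator at hand, a few lines of JavaScript (the language we'll end up using later in this post anyway) can build a suitable test string. This is just a quick sketch; any filler text works, and 33,000 is simply a value safely above the 32K limit:

```javascript
// Build a test string of at least 33,000 characters to paste into the
// "description" node (any filler text will do).
var sentence = "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. ";
var big = "";
while (big.length < 33000) {
  big += sentence;
}
// big now holds a string of 33,000+ characters
```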



How can we parse this JSON and store, for example, the content of that node in a CLOB field?

As I'm on 12c, this should be simple, right? The database supports reading JSON from SQL, so I first tried JSON_TABLE, but there you can only define VARCHAR2 or NUMBER as the data type, not CLOB, so I went with VARCHAR2.

Here's the result:

SQL> select jt.question, jt.description
  2    from tbl_with_json, 
  3         json_table(json_clob, '$.items[*]'
  4           columns (
  5             question     varchar2 path '$.question',
  6             description  varchar2 path '$.description'
  7           )
  8*        ) as jt;

QUESTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
DESCRIPTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
My Question                                                                                                                                                           
                                                                                                                                                                      


Oracle just returns null (nothing - blank) for the description!

But it's definitely not blank:



Next I tried the query from my initial example, but the result was the same:

SQL> select a.json_clob.items.question as question, a.json_clob.items.description as description 
  2    from tbl_with_json a;

QUESTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
DESCRIPTION
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
My Question                                                                                                                                                           
                                                                                                                                                                      


So the database returns a value when the node is smaller than 4K (or possibly 32K, depending on the VARCHAR2 size setting of your database) and returns null when it's over this limit.
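Note that the limit is measured in bytes, not characters. If you want to predict whether a given node will come back null, you can measure its UTF-8 byte length up front; here's a small Node-style sketch (the exceedsVarchar2 helper name is my own, and 32767/4000 are the extended/standard VARCHAR2 byte limits):

```javascript
// Returns true when a value's UTF-8 byte length exceeds the given
// VARCHAR2 limit (32767 bytes with extended data types, 4000 without).
function exceedsVarchar2(value, limitBytes) {
  // Buffer.byteLength counts bytes, not characters, so multi-byte
  // characters are accounted for correctly.
  return Buffer.byteLength(String(value), "utf8") > limitBytes;
}
```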

Hopefully Oracle Database 12.2 fixes this issue, but at the moment there's no native way to get to that data using the Oracle-supplied JSON functions.

Ok, what can we try next?...

Since Oracle Application Express 5, APEX comes with a very nice package to work with JSON: APEX_JSON. This package has been heaven for us, especially with AOP.
So I thought I'd try APEX_JSON.PARSE, store the result in a temporary JSON structure, and then read it with the get_clob_output method:

SQL> declare
  2    l_data clob;
  3    l_json apex_json.t_values;
  4    l_return clob;
  5  begin
  6    select json_clob
  7      into l_data
  8      from tbl_with_json;
  9    apex_json.parse(l_json, l_data) ;
 10    apex_json.initialize_clob_output(dbms_lob.call, true, 0) ;
 11    apex_json.open_object;
 12    apex_json.write(l_json, 'items[1].description') ;
 13    apex_json.close_object;
 14    l_return := apex_json.get_clob_output;
 15    apex_json.free_output;
 16  end;
 17  /

Error starting at line : 1 in command -
declare
  l_data clob;
  l_json apex_json.t_values;
  l_return clob;
begin
  select json_clob
    into l_data
    from tbl_with_json;
  apex_json.parse(l_json, l_data) ;
  apex_json.initialize_clob_output(dbms_lob.call, true, 0) ;
  apex_json.open_object;
  apex_json.write(l_json, 'items[1].description') ;
  apex_json.close_object;
  l_return := apex_json.get_clob_output;
  apex_json.free_output;
end;
Error report -
ORA-20987: Error at line 6, col 18: value exceeds 32767 bytes, starting at Lorem ipsum dolor sit amet, consectetuer adipiscin
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 928
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 993
ORA-06512: at line 9

But as you can see, there's a limit there as well. So APEX 5 doesn't return null; it returns an error instead. Hopefully a future version of APEX removes this limit ;)


When I work with data, I prefer to do it straight in the database, but now I was stuck. At moments like that you have to go for a walk, get some sleep and talk to others to get more ideas... My preferred development languages (in this order) are APEX, SQL, PL/SQL, JavaScript, Node.js, ... (and then all the others).

Then I remembered a blog post by Kris Rice explaining that SQLcl can run JavaScript too, because SQLcl includes Nashorn (a next-generation JavaScript engine for the JVM). So after looking at some SQLcl script examples, I wrote my own little SQLcl script that reads the clob into a variable called "content":

SQL> script
  2     var Types = Java.type("java.sql.Types")
  3     var BufferedReader = Java.type("java.io.BufferedReader")
  4     var InputStreamReader = Java.type("java.io.InputStreamReader")
  5     
  6     var GET_CLOB = "declare " + 
  7                    "   l_clob CLOB; " + 
  8                    " begin " + 
  9                    "   select json_clob " + 
 10                    "    into l_clob " + 
 11                    "    from tbl_with_json; " +
 12                    "   ? := l_clob;" + 
 13                    " end;"; 
 14  
 15     var cs = conn.prepareCall(GET_CLOB);
 16     cs.registerOutParameter(1, Types.CLOB);
 17     cs.execute();
 18     var clob = cs.getClob(1);
 19     cs.close();
 20  
 21     var r = new BufferedReader(new InputStreamReader(clob.getAsciiStream(), "UTF-8"))
 22     var str = null; 
 23     var content = "";
 24     while ((str = r.readLine()) != null) { content = content + str; }
 25     ctx.write(content);
 26  /
{ "items": [{ "client_id": -1, "registration_date": "2016-07-29T07:46:09.941Z", "question": "My Question", "description": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fring

So the above reads the content of the clob, which contains the JSON.
As we are in JavaScript, I thought we could parse this JSON and navigate to the description field. Once we have it, we can store it in another table or do whatever we want with it.
Cool if it worked, no? And it did! :)

So let's finish this example. First we create a table to store the description field (the very big text).

SQL> create table tbl_with_description (description clob);

Table TBL_WITH_DESCRIPTION created.


Here's the final script that will store the description node in another table:
- the ctx.write calls are there to produce debug output
- obj.items[0].description is how we get to the description node; we store that in a bind variable and execute an insert statement to save the description value:

SQL> script
  2  
  3  try {
  4     var Types = Java.type("java.sql.Types")
  5     var BufferedReader = Java.type("java.io.BufferedReader")
  6     var InputStreamReader = Java.type("java.io.InputStreamReader")
  7     
  8     var GET_CLOB = "declare " + 
  9                    "   l_clob CLOB; " + 
 10                    " begin " + 
 11                    "   select json_clob " + 
 12                    "    into l_clob " + 
 13                    "    from tbl_with_json; " +
 14                    "   ? := l_clob;" + 
 15                    " end;"; 
 16  
 17     var cs = conn.prepareCall(GET_CLOB);
 18     cs.registerOutParameter(1, Types.CLOB);
 19     cs.execute();
 20     var clob = cs.getClob(1);
 21     cs.close();
 22  
 23     var r = new BufferedReader(new InputStreamReader(clob.getAsciiStream(), "UTF-8"))
 24     var str = null; 
 25     var content = "";
 26     while ((str = r.readLine()) != null) { content = content + str; }
 27     //ctx.write(content);
 28  
 29     var obj = JSON.parse(content);
 30     ctx.write("Question: " + obj.items[0].question + "\n");
 31     ctx.write("Description: " + obj.items[0].description + "\n");
 32  
 33     var binds =  {};
 34     binds.description = obj.items[0].description;
 35  
 36     var ret = util.execute("insert into tbl_with_description (description) values (:description)", binds);
 37  
 38     if (ret) {
 39       ctx.write("Insert done!\n");
 40     } else {
 41       ctx.write("Error :(\n");
 42       var err = util.getLastException();      
 43       ctx.write("\nERROR:" + err + "\n");  
 44     }
 45  
 46  } catch(e){
 47      ctx.write(e +"\n")
 48      e.printStackTrace();
 49  }
 50  
 51  /
Question: My Question
Description: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis ... eu,
Insert done!


Testing:

SQL> select count(*) from tbl_with_description;

  COUNT(*)
----------
         1

SQL> select substr(description,1,50) from tbl_with_description;

SUBSTR(DESCRIPTION,1,50)                                                        
--------------------------------------------------------------------------------
Lorem ipsum dolor sit amet, consectetuer adipiscin                              

SQL> 

I was blown away by this... and I see a lot of potential in being able to run JavaScript against the database.

There's actually a way to load Nashorn into your database too, so you can run JavaScript, Node.js etc. straight from your database. Nashorn came with Java 8, but it should run on Java 7 too; since the default version of Java in the Oracle Database is 6, there are some extra steps to get it to work. Running JavaScript from the database is something on my R&D list, and I actually submitted an abstract to KScope17 where I will present my results on this topic (if it gets accepted!) :) 

So to recap this (longer) blog post:
1) JSON is being used a lot these days and having the possibility to work with JSON in the Oracle database is very nice, but as we have seen in the above example, it can't do everything yet. It has a real issue with large nodes.
2) Knowing other languages and thinking out-of-the-box might come in handy; I would even say that JavaScript becomes more and more important for an APEX developer.
3) SQLcl is a great tool, if you don't use it yet, I would definitely recommend looking into it. 
4) Oracle Nashorn opens up an entire new set of possibilities.

In the last paragraph of this blog post I want to thank Kris Rice for his help in understanding SQLcl scripting. Although there are many examples, it took me some time to get going, and I struggled to understand how to get to error messages, for example. Although the script is mostly JavaScript, having some Java knowledge makes it easier. Time to refresh that a bit; it has been 15 years since I did any real Java coding.

Hope this blog post will help you work with JSON and JavaScript within an Oracle context.
Categories: Development

DBMS_STATS - Gather statistics on tables with many columns - 12c update

Randolf Geist - Wed, 2016-08-03 12:33
This is just a short 12c update on my post about gathering statistics on tables with many columns from some time ago.

I'm currently investigating the "Incremental Statistics" feature in 12.1.0.2 for a client, which probably will be worth one or more other posts, but since we're still in the process of evaluating and installing various patches it's too early to write about that.

As part of the investigation I've noticed a significant change in behaviour in 12.1.0.2 compared to previous versions when it comes to gathering statistics on tables with many columns, hence this post here.

The key message of the original post was that DBMS_STATS needs potentially several passes when gathering statistics on tables with many columns, which meant a significant increase in overall work and resource consumption, exaggerated by the fact that tables with that many columns consist of multiple row pieces.

Along with other significant improvements in 12c (among others, new types of histograms and the capability to derive some of these histogram types from a single pass, with no need to run separate passes for each histogram), multiple passes are obviously no longer required for such tables - Oracle can now cope with up to and including 1,000 columns in a single pass.

Repeating the test case from the original post for a table with 1,000 columns / 10,000 blocks and using a default METHOD_OPT setting of "FOR ALL COLUMNS SIZE AUTO" results in the following single query executed by the DBMS_STATS call:

SQL ID: 98828wcrfyn0c Plan Hash: 1962185829

select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
xmlindex_sel_idx_tbl no_substrb_pad */to_char(count("COL968")),
substrb(dump(min("COL968"),16,0,64),1,240),substrb(dump(max("COL968"),16,0,
64),1,240),to_char(count("COL969")),substrb(dump(min("COL969"),16,0,64),1,
240),substrb(dump(max("COL969"),16,0,64),1,240),to_char(count("COL970")),
substrb(dump(min("COL970"),16,0,64),1,240),substrb(dump(max("COL970"),16,0,
64),1,240),to_char(count("COL971")),substrb(dump(min("COL971"),16,0,64),1,
240),substrb(dump(max("COL971"),16,0,64),1,240),to_char(count("COL972")),
.
.
.
from
"CBO_TEST"."MANY_X_COLS" t /* NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,
NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,
NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,
NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,
.
.
.
NDV,NIL,NIL,NDV,NIL,NIL*/


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      1.95       1.97          0      40011          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3      1.95       1.98          0      40011          0          1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 111 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
         1          1          1 SORT AGGREGATE (cr=40011 pr=0 pw=0 time=1969578 us)
      9995       9995       9995  OPTIMIZER STATISTICS GATHERING (cr=40011 pr=0 pw=0 time=751494 us cost=2717 size=240000 card=10000)
     10000      10000      10000   TABLE ACCESS FULL MANY_X_COLS (cr=40011 pr=0 pw=0 time=413062 us cost=2717 size=240000 card=10000)

As can be seen, the "APPROXIMATE NDV AGGREGATE" operation introduced in 11g for the new statistics gathering mode has been renamed in 12c to "OPTIMIZER STATISTICS GATHERING".

Apart from that, this is good news, as it minimizes the work required to gather statistics on such tables: it takes the minimum of logical / physical I/O to process the data. And as already mentioned, 12c is also capable of more, like generating synopses for incremental statistics and histograms from such a single pass.

Webcast: Oracle WebCenter Portal Product Roadmap Review

WebCenter Team - Wed, 2016-08-03 11:20

The IOUG WebCenter SIG is hosting a webcast on August 10th at 12:00pm Central time.

Featured Speaker: Vihang Pathak, Principal Product Manager, Oracle Corp.

This session will recap all the major enhancements in the WebCenter Portal product line for previous releases and will discuss possible candidate enhancements for the WebCenter Portal 12.2.1.3 release.

Register for the webcast here.

IOUG WebCenter Special Interest Group

Introducing a VCE White Paper. Consolidating SAP, SQL Server and Oracle Production/Test/Dev/OLTP and OLAP Into a Single XtremIO Array with VCE Converged Infrastructure.

Kevin Closson - Wed, 2016-08-03 10:32

This is just a short blog post to direct readers to a fantastic mixed-workload and heterogeneous database consolidation proof of concept. This VCE paper should not be missed. I assert that the VCE converged infrastructure platforms (most notably the Vblock 540) are the best off-the-shelf solution for provisioning XtremIO all-flash storage to large numbers of hosts, each processing vastly differing workloads (production, test/dev, OLTP, OLAP).

This paper is full of useful information. It explains the XtremIO 24:1 data reduction realized in the test. It also offers a number of configuration tips, such as controlling I/O on Linux hosts with cgroups and on VMware virtual hosts via VMware Storage I/O Control.

The following is an overview of the testing landscape proven in the paper:

  • A high frequency online transaction processing (OLTP) application with Oracle using the Silly Little Oracle Benchmark (SLOB) tool
  • A modern OLTP benchmark simulating a stock trading application representing a second OLTP workload for SQL Server
  • ERP hosted on SAP with an Oracle data store simulating a sell-from-stock business scenario
  • A decision support system (DSS) workload accessing an Oracle database
  • An online analytical processing (OLAP) workload accessing two SQL Server analysis and reporting databases
  • Ten development/test database copies for each of the Oracle and SQL Server OLTP and five development/test copies of the SAP/Oracle system (25 total copies)

The following graphic helps visualize the landscape:

Screen Shot 2016-08-03 at 7.59.16 AM

The following graphic shows an example of one of the test scenario I/O performance metrics discussed in the paper:

Screen Shot 2016-08-03 at 8.01.03 AM

I encourage you to click the following link to download the paper: VCE Solutions for Enterprise Mixed Workloads on Vblock System 540


Filed under: oracle

SUPERVALU Uses Oracle Cloud to Provide Expanded Service Offerings to its Customers

Oracle Press Releases - Wed, 2016-08-03 07:00
Press Release
SUPERVALU Uses Oracle Cloud to Provide Expanded Service Offerings to its Customers Oracle Cloud to enable SUPERVALU to deliver new capabilities, reduce costs, and provide more efficient service offerings to retailers

Eden Prairie, Minn. and Redwood Shores, Calif.—Aug 3, 2016

SUPERVALU INC. (NYSE: SVU), one of the largest grocery wholesalers and retailers in the United States, today announced it has selected the Oracle Cloud as its new technology platform that will enable the company to deliver more robust business management and data analytics capabilities to its customers. With Oracle Cloud, SUPERVALU will be able to provide an integrated portfolio of enterprise-grade cloud services to enhance the performance of its Human Resources and Finance functions.

“We have a terrific opportunity to deliver more professional services and back-end support to our existing wholesale customers, as well as leverage our scale and expertise to reach new customers,” said Randy Burdick, SUPERVALU’s Executive Vice President, Chief Information Officer. “The Oracle Cloud provides us with a more robust infrastructure and a comprehensive solution that we believe will help us drive increased efficiencies, speed decision-making, and enhance the overall customer experience. We’re excited to offer this solution to our customers as another example of how we’re using technology to further enhance their businesses for the future.”

Initially, SUPERVALU will implement Oracle ERP Cloud and Oracle HCM Cloud, part of a broad suite of modern Oracle Cloud applications that are integrated with social, mobile and analytic capabilities, resulting in improved process integration and more complete and impactful reporting. The Oracle Cloud provides an enhanced user experience via a simple, scalable and intuitive design that can be delivered to users on desktop, tablet and mobile applications.  SUPERVALU will use Oracle HCM Cloud’s suite of talent and workforce management offerings designed to provide an engaging and collaborative HR experience to find and retain quality employees.

“Oracle Cloud provides SUPERVALU with a complete set of enterprise-grade cloud applications, offering both breadth and depth of functionality,” said Rondy Ng, Senior Vice President, Applications Development, Oracle. “Our integrated ERP and HCM cloud solutions are uniquely placed to cater to today’s changing business environment and enable our customers to modernize back-office operations and empower their people.”

SUPERVALU will begin a phase-in of these new Oracle Cloud offerings across three key parts of its business: internal administration of its wholesale distribution and retail grocery businesses, and in services the company provides externally to its customers.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Jeff Swanson
SUPERVALU INC.
1.952.930.1645
jeffrey.swanson@supervalu.com
About SUPERVALU

SUPERVALU INC. is one of the largest grocery wholesalers and retailers in the U.S. with annual sales of approximately $18 billion. SUPERVALU serves customers across the United States through a network of 3,588 stores composed of 1,796 independent stores serviced primarily by the Company’s food distribution business; 1,360 Save-A-Lot stores, of which 897 are operated by licensee owners; and 200 traditional retail grocery stores (store counts as of February 27, 2016). Headquartered in Minnesota, SUPERVALU has approximately 40,000 employees. For more information about SUPERVALU visit www.supervalu.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Jeff Swanson

  • 1.952.930.1645

Orchestration 2.0 - creating a storage element

Pat Shuff - Wed, 2016-08-03 02:07
In this blog we will look at an Oracle Orchestration for a Bitnami WordPress instance. We provisioned the instance by going to the http://cloud.oracle.com/marketplace and provisioned a WordPress instance. The instance that we are going to install can be found at https://cloud.oracle.com/marketplace/en_US/listing/4980490?_afrLoop=9282045484046863&_afrWindowMode=0&_afrWindowId=87oc7qxu0_1 and is based on Oracle Linux 6.7. The minimum profile for this instance is an OC3 (1 OCPU, 7.5 GB RAM) and 60 GB of local disk.

When we click on the Get App button for this application, it takes us through the installation process for cloud compute services. The process kicked off by the Get App button downloads a bootable image from the Marketplace and makes it available as an image from which we can create new instances any time we want. The default size is 10 GB. We need to grow this installation to 60 GB to allow MySQL and the WordPress application to operate properly.

To create an instance we go to the compute console and create instance. We select private images to get to the WordPress bitnami instance that we downloaded to boot from. We enter the network security list, ssh keys, and name for the instance.

The default disk size is 10 GB. We will keep this and review the configuration before launching.

Once we click on Create, the cloud console creates three orchestration files to initialize the WordPress instance. The first file that is created defines the storage. The file is called bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage.json if we accept all of the defaults; in our example it would be called WordPress_4_5_3_storage.json. This file describes the storage that is needed for booting the operating system and has the following format:

{
  "relationships" : [ ],
  "account" : "/Compute-metcsgse00028/default",
  "description" : "",
  "schedule" : {
    "start_time" : "2016-07-21T19:40:35Z",
    "stop_time" : "2016-07-21T21:50:02Z"
  },
  "oplans" : [ {
    "obj_type" : "storage/volume",
    "ha_policy" : "",
    "label" : "bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage",
    "objects" : [ {
      "managed" : true,
      "snapshot_id" : null,
      "snapshot_account" : null,
      "machineimage_name" : "/Compute-metcsgse00028/marketplace01-user@oracleads.com/bitnami-wordpress-4.5.3-0-linux-oel-6.7-x86_64",
      "status_timestamp" : "2016-07-21T19:44:51Z",
      "imagelist" : "/Compute-metcsgse00028/marketplace01-user@oracleads.com/bitnami-wordpress-4.5.3-0-linux-oel-6.7-x86_64",
      "writecache" : false,
      "size" : "10737418240",
      "storage_pool" : "/compute-us2-z12/cheis01nas100-v1_multipath/storagepool/iscsi/latency_1",
      "shared" : false,
      "status" : "Online",
      "description" : "",
      "tags" : [ ],
      "quota" : null,
      "properties" : [ "/oracle/public/storage/default" ],
      "account" : "/Compute-metcsgse00028/default",
      "name" : "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage",
      "bootable" : true,
      "hypervisor" : null,
      "uri" : null,
      "imagelist_entry" : 1,
      "snapshot" : null
    } ]
  } ],
  "user" : "/Compute-metcsgse00028/cloud.admin",
  "name" : "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage"
}

Let's walk through this file and look at all the structures. First, let's break down the structure without any of the child components:
{
  "relationships" : [ ],
  "account" : "/Compute-metcsgse00028/default",
  "description" : "",
  "schedule" : { …  },
  "oplans" : [ … ],
  "user" : "/Compute-metcsgse00028/cloud.admin",
  "name" : "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage"
}

Note that there are seven elements that define this object. The first is relationships. In this file this is the basis of a configuration, and it does not have any dependencies on another object. The second is account. Account defines the owner of the object and also the security associated with it: we could grant access to a specific account in our identity domain, or make it accessible to all users through the default label. The third element is description. There is no description for this object; we could have added more information when we provisioned the disk. This is an informative field and is not critical for creation. The fourth field is schedule. The schedule defines when the object was created and logs the start and stop times for the object. The fifth field is oplans. Oplans defines the object itself; we have obscured the definition at this point and will dive into it next. The sixth field is user. The user is who created the object and who owns it. The seventh and final field is the name of the object. The name consists of the identity domain, the user that created it, and the name of the object. In this example the name is "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage". In our example it would be "/Compute-metcsgse00028/cloud.admin/WordPress_4_5_3_storage".
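As an aside, since these orchestration files are plain JSON, it's easy to sanity-check one before uploading it. The sketch below is purely illustrative (the checkStorageJson helper is my own, not part of any Oracle tooling): it parses a document and reports which of the seven top-level elements are missing.

```javascript
// Hypothetical validator: returns the names of any of the seven
// top-level orchestration elements that are absent from the document.
function checkStorageJson(text) {
  var doc = JSON.parse(text);
  var required = ["relationships", "account", "description",
                  "schedule", "oplans", "user", "name"];
  return required.filter(function (field) {
    return !(field in doc);
  });
}

// A document that is missing its schedule, user, and name fields:
var missing = checkStorageJson(JSON.stringify({
  "relationships": [],
  "account": "/Compute-metcsgse00028/default",
  "description": "",
  "oplans": []
}));
// missing now lists the absent fields: schedule, user, name
```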

If we dive into the oplans we note that the object is defined with

{
    "obj_type" : "storage/volume",
    "ha_policy" : "",
    "label" : "bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage",
    "objects" : [ … ]
}

The oplans element consists of four fields, which are defined in the documentation. All of these fields are required except ha_policy. The first parameter is obj_type, which can be one of:
  • Ip-reservation
  • Launchplan
  • Orchestration
  • Storage/volume
  • Secapplication
  • Seciplist
  • Seclist
  • Secrule

    We are defining this element as a storage/volume. We give it the name associated with the label parameter and the characteristics defined in the objects parameter. We have the option of defining the ha_policy as active or monitor. If we define it as active, the object is restarted if it is deleted or fails; active is only available if the obj_type is launchplan. We can set the ha_policy to monitor for an obj_type of launchplan, storage/volume, or orchestration; if the object fails, an error is raised and can be forwarded to a monitoring software package, but the object is not automatically restarted or recreated. For all other object types, the ha_policy must be set to none or left empty. For our example we would set the label to "WordPress_4_5_3_storage" rather than the default label generated by the bitnami installation.
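The ha_policy rules lend themselves to a small validation helper. The Python sketch below encodes only the rules stated in this post; it is not an official API check, and the function name is hypothetical:

```python
# Valid obj_type values for an oplan, per the list above.
OBJ_TYPES = {
    "ip-reservation", "launchplan", "orchestration", "storage/volume",
    "secapplication", "seciplist", "seclist", "secrule",
}

def validate_ha_policy(obj_type, ha_policy):
    """Return True if ha_policy is allowed for obj_type, per the rules above."""
    if obj_type not in OBJ_TYPES:
        raise ValueError("unknown obj_type: %s" % obj_type)
    if ha_policy == "active":
        # active (restart on deletion/failure) only applies to launch plans
        return obj_type == "launchplan"
    if ha_policy == "monitor":
        # monitor (raise an error, no restart) applies to these three types
        return obj_type in {"launchplan", "storage/volume", "orchestration"}
    # every other object type must leave the policy empty or set to none
    return ha_policy in ("", "none")

print(validate_ha_policy("storage/volume", "monitor"))  # True
print(validate_ha_policy("storage/volume", "active"))   # False
```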

    The objects field is defined in the documentation. We are going to dive into the storage volume object in this example. The fields that are required for storage are name, size, and properties; the optional fields are description, bootable, and tags. For our example we define the objects parameter as

    "objects" : [ {
          "managed" : true,
          "snapshot_id" : null,
          "snapshot_account" : null,
          "machineimage_name" : "/Compute-metcsgse00028/marketplace01-user@oracleads.com/bitnami-wordpress-4.5.3-0-linux-oel-6.7-x86_64",
          "status_timestamp" : "2016-07-21T19:44:51Z",
          "imagelist" : "/Compute-metcsgse00028/marketplace01-user@oracleads.com/bitnami-wordpress-4.5.3-0-linux-oel-6.7-x86_64",
          "writecache" : false,
          "size" : "10737418240",
          "storage_pool" : "/compute-us2-z12/cheis01nas100-v1_multipath/storagepool/iscsi/latency_1",
          "shared" : false,
          "status" : "Online",
          "description" : "",
          "tags" : [ ],
          "quota" : null,
          "properties" : [ "/oracle/public/storage/default" ],
          "account" : "/Compute-metcsgse00028/default",
          "name" : "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage",
          "bootable" : true,
          "hypervisor" : null,
          "uri" : null,
          "imagelist_entry" : 1,
          "snapshot" : null
        } ]
      } ]
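Before submitting an orchestration it can be handy to sanity-check a storage object against the required fields named above (name, size, and properties). A minimal Python sketch, not an official validator:

```python
# Required and optional fields for a storage/volume object, per the post above.
REQUIRED = ("name", "size", "properties")
OPTIONAL = ("description", "bootable", "tags")

def check_storage_object(obj):
    """Return a list of problems with a storage/volume object definition."""
    missing = [f for f in REQUIRED if f not in obj or obj[f] in (None, "", [])]
    return ["missing required field: %s" % f for f in missing]

volume = {
    "name": "/Compute-metcsgse00028/cloud.admin/WordPress_4_5_3_storage",
    "size": "10737418240",
    "properties": ["/oracle/public/storage/default"],
    "bootable": True,
}

print(check_storage_object(volume))  # []
# 10737418240 bytes is exactly 10 GB (10 * 1024**3)
print(int(volume["size"]) == 10 * 1024**3)  # True
```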
    
    The name of our object is "/Compute-metcsgse00028/cloud.admin/bitnami-wordpress-4.5.3-0-linux-_20160721144012_storage". Note that this consists of the identity domain, the username that created the storage, and the label for the storage. In our example it would be "/Compute-metcsgse00028/cloud.admin/WordPress_4_5_3_storage". We define the size as "10737418240", which is in bytes; this corresponds to a 10 GB disk for the operating system. The properties field is defined as [ "/oracle/public/storage/default" ], which designates default storage. We could have selected latency rather than default if we required low-latency, high-IOPS storage of the kind typically used for a database. The rest of the fields in this description are optional. The bootable tag defines whether this is a bootable image, and the status defines whether the storage is active or in standby mode. Note that the storage_pool shows that this storage element is an iSCSI logical unit number made available to the compute node rather than dedicated attached storage.

    All of these fields define what is required to create the storage that we are going to create to boot our operating system and run our WordPress application. We could just as easily have defined this as a 20 GB disk or 200 GB disk. It is important to note that you get 128 GB free with a compute instance and will have to pay $50/TB/month if you go over 128 GB of storage per instance. Up next, how to take this storage and associate it with an instance.
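The 128 GB free allowance per instance and the $50/TB/month overage rate quoted above can be turned into a quick cost estimate. This sketch assumes the overage is prorated per GB and that 1 TB = 1024 GB; neither assumption is stated in the post:

```python
# Figures quoted in the post: 128 GB free per compute instance,
# $50 per TB per month beyond that. Proration per GB is an assumption.
FREE_GB_PER_INSTANCE = 128
OVERAGE_USD_PER_TB_MONTH = 50.0

def monthly_storage_cost(total_gb):
    """Estimated monthly overage cost for one instance's storage, in USD."""
    overage_gb = max(0, total_gb - FREE_GB_PER_INSTANCE)
    return overage_gb / 1024 * OVERAGE_USD_PER_TB_MONTH

print(monthly_storage_cost(10))              # 0.0  -- the 10 GB boot disk is free
print(round(monthly_storage_cost(200), 2))   # 3.52 -- 72 GB over the allowance
```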

    Select count(*) on a billion records table.

    Tom Kyte - Wed, 2016-08-03 01:26
    Hello Tom, Thanks for all the great help. I am writing a simple query which is Select count(*) from wsh_exceptions. Currently this table contains 1,091,130,564 records. So because of this more number of records in this table, select count(*) ...
    Categories: DBA Blogs

    CROSS APPLY and OUTER APPLY in 12c

    Tom Kyte - Wed, 2016-08-03 01:26
    Hi, I have learned that we have CROSS APPLY and OUTER APPLY in 12c. However, I see results are same for CROSS APPLY and INNER JOIN, OUTER APPLY and LEFT / RIGHT OUTER JOIN. So when INNER JOIN and LEFT/RIGHT OUTER JOIN are ANSI Standard and yi...
    Categories: DBA Blogs

    CPU Waits

    Tom Kyte - Wed, 2016-08-03 01:26
    Hi, When i am trying to analyze my database performance through AWR reports, my findings are Wait Wait Time Total Avg(ms) %DB Time DB CPU 2,731,581.78 75.49 * System I/O db file parallel write 882,568.7...
    Categories: DBA Blogs

    Contraint/Check _ Prevent any value to be inserted more than twice in any table.

    Tom Kyte - Wed, 2016-08-03 01:26
    I have a table say xyz which contains a field say roll_number, now I want to implement a check so that any roll number is allowed only twice in xyz. OR Suppose I have a table xyz containing roll numbers, and another table abc containg roll numb...
    Categories: DBA Blogs

    Difference Between Two Dates With Time

    Tom Kyte - Wed, 2016-08-03 01:26
    Sir, There are two input dates with format 'DD/MM/YYYY HH:MI:SS' How can I get difference between these two dates like Input ---------------------------------------------- '15/11/2015 11:19:58' '14/10/2014 10:18:57' -----...
    Categories: DBA Blogs
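The date-difference question above has a compact answer in most languages. Here is a Python sketch using the sample inputs from the snippet (in Oracle SQL itself, subtracting two DATE values yields the difference as a number of days):

```python
from datetime import datetime

# Matches the 'DD/MM/YYYY HH:MI:SS' mask quoted in the question.
FMT = "%d/%m/%Y %H:%M:%S"

later = datetime.strptime("15/11/2015 11:19:58", FMT)
earlier = datetime.strptime("14/10/2014 10:18:57", FMT)

delta = later - earlier
days = delta.days
hours, rem = divmod(delta.seconds, 3600)
minutes, seconds = divmod(rem, 60)
print("%d days %02d:%02d:%02d" % (days, hours, minutes, seconds))
# → 397 days 01:01:01
```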

    Is it feasible to use Dynamic SQL for a statement with "into :num" clause?

    Tom Kyte - Wed, 2016-08-03 01:26
    We have the following statement executed in the database by the outside-of-the-database application: "SELECT COUNT(*) INTO :num FROM TABLE1" I'd like to be able to execute this statatement at will via Dynamic SQL to preserve the same sql_id. ...
    Categories: DBA Blogs

    Creating a view using dynamic pivot table

    Tom Kyte - Wed, 2016-08-03 01:26
    Hi Tom I apologize if this has been posted before. I am trying to build a web page that will provide information from AWR tables and I would like to compare two months data in a chart. select * from ( select to_char(end_time,'dd') Day ...
    Categories: DBA Blogs

    ORA-12537: TNS:connection closed

    VitalSoftTech - Wed, 2016-08-03 01:18
    When trying to establish a sqlplus connection I get the "ORA-12537: TNS:connection closed" error messgae. How do I resolve this?
    Categories: DBA Blogs

    Oracle Partner Community - WebLogic & Developers Newsletter July 2016

    The Fusion Middleware flagship on-premises products WebLogic 12.2.1.1, SOA Suite 12.2.1.1 and BPM Suite 12.2.1.1 are available for download! Find out more news from the WebLogic and Developers EMEA...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    Fifth Recommended Patch Collection for EBS 12.1.3 Now Available

    Steven Chan - Tue, 2016-08-02 16:55
    We are pleased to announce the availability of the fifth consolidated patch collection for Oracle E-Business Suite Release 12.1.3.  This patchset combines important patches for Release 12.1.3 into a single integrated and regression-tested patchset:

    [Screenshot: EBS 12.1.3 RPC 5 patch download]

    Reasons to Apply RPC5

    Oracle E-Business Suite Release 12.1.3 has been available for a while, and over its lifetime has been significantly enhanced in numerous ways. These enhancements and fixes are provided as individual patches, product family Release Update Packs (RUP), and product-specific Recommended Patch Collections (RPC).

    For example, there are Recommended Patch Collections for Financials products such as General Ledger and Payables. On the technology side, there are patchsets such as the OA Framework 12.1.3.1 Consolidated Update Patch.

    Identifying and applying all the individual patches, RUPs, and RPCs you need can be time-consuming. Complicating matters further, patchsets across multiple product families are generally not tested together.

    Patches included in Oracle E-Business Suite Release 12.1.3+ RPC5 are also available individually, and may have conditions and prerequisites relevant to their application. RPC5 includes all of the recommended patches along with their dependencies, so applying it in a single downtime patching window saves time and effort compared with applying each recommended patch individually.

    What's Included in RPC5?

    The EBS 12.1.3+ RPC5 patchset includes the latest recommended patches and their dependencies for the following Oracle E-Business Suite products and product families:

    • Applications Technology (ATG)
    • Oracle Customer Relationship Management (CRM)
    • Oracle Financials (FIN)
    • Oracle Human Resource Management System (HRMS)
    • Oracle Procurement (PRC)
    • Oracle Projects (PJ)
    • Oracle Supply Chain Management (SCM)
    • Oracle Value Chain Planning (VCP)
    • Oracle Business Intelligence (BIS)

    Is it necessary to apply this RPC if I only use a subset of Oracle E-Business Suite products?

    Applying this RPC is recommended even if you are using only a subset of Oracle E-Business Suite products. The patching utility 'adpatch' determines which files need to be applied based on each product's installation status.

    I have already applied a few Family Packs included in this RPC. What happens when I apply this RPC?

    Applying this RPC on top of already applied individual Family Packs will not harm your system because the patching utility 'adpatch' skips the files already applied on the system. You can apply the latest available RPC on your system.

    A patch I need isn't included in this RPC. Why not?

    This RPC combines recommended patches that were released after Release 12.1.3 into a single, integrated and regression-tested patch to help ensure a successful installation. If you don't find the patch in the RPC, it is likely that the patch was released after the patch list for the specific RPC was frozen or the patch was not marked "recommended" at the time the list was generated.

    Should I only install the latest RPC? Do I have to install all prior RPCs? Should I install later individual product family RPCs?

    To keep your system up-to-date with the most current Oracle E-Business Suite recommended patches, you should only apply the latest available RPC (which supersedes any prior release suitewide RPC) and thereafter either install individual product-level recommended patches as required or install the next suitewide level RPC when it becomes available.

    Categories: APPS Blogs
