Feed aggregator

Virtual Technology Summit - Spotlight on Operating Systems, Virtualization Technologies and Hardware

OTN TechBlog - Thu, 2015-08-20 14:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Operating Systems, Virtualization Technologies and Hardware Spotlight: Implementing Your Cloud - Most IT organizations have roadmaps for cloud infrastructure. Most vendors have some sort of story as to how they can get you to the cloud. Oracle specifically has committed itself to the idea that you can run your applications identically in our public cloud and your private cloud. The question is: How? In this track we'll roll up our sleeves and show you how to implement your clouds using Oracle hardware, software, and best practices. There are four sessions in the Systems track:

  • Best Practices Building Efficient and Secure Cloud Infrastructure - Learn how to create virtual machines (VMs), deploy VMs using templates, rapidly migrate those VMs to the Cloud, and deploy Oracle Applications & Databases in minutes on a flexible, secure, Private Cloud Infrastructure. Additionally, experience Oracle's Enterprise Cloud Infrastructure with Oracle Enterprise Manager Cloud Control to automatically provision both Operating Systems and Oracle Databases in a DBaaS model.

  • What's New in Solaris 11.3 - Oracle Solaris 11 is a complete and secure cloud platform. With best-of-breed technologies for compute, networking and storage, learn how Oracle Solaris can help transform your IT operations and simplify the move to the cloud. In this session we will cover some of the latest innovations engineered into Oracle Solaris 11.3 to manage a secure, integrated, large-scale cloud environment.

  • Optimizing NAS Storage for Secure Cloud Infrastructures - The rapid expansion of secure and reliable cloud capabilities is fundamentally changing IT operations. Over time, an increasing percentage of your data will reside off premises in a public or hybrid cloud. You won't just need fast and efficient storage to accommodate ever-increasing information growth; you'll need highly secure storage to ensure your critical data is well protected, independent of where it resides. This presentation covers the unique characteristics of Oracle's ZFS Storage Appliance and its cache-centric hybrid architecture, ideally suited for cloud applications, providing fast, efficient and secure data storage for public, private and hybrid cloud infrastructure, so you can migrate toward the cloud with confidence.

  • Automate your Oracle Solaris 11 and Linux Deployments with Puppet - Puppet is a popular open source configuration management tool used by many organizations to automate the setup and configuration of servers and virtual machines. Solaris 11.2 includes native support for Puppet and extends the resources that can be managed to Solaris-specific things like zones and ZFS. This presentation will give system administrators who are new to Puppet an introduction and a way to get started with automating the configuration of Oracle Linux and Oracle Solaris systems, discuss how Puppet integrates with version control and other projects, and look at the Solaris-specific resource types.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

Popularity of Tea, Coffee, Beer and Wine Visualization

Nilesh Jethwa - Thu, 2015-08-20 07:16

If the number of searches is an indicator of a drink's popularity, which of Tea, Coffee, Beer and Wine wins the contest?

Which drink is gaining popularity and which is losing?

Read more at: http://www.infocaptor.com/dashboard/popularity-of-tea-coffee-beer-and-wine-visualization

Expert Oracle Application Express (second edition) available

Dimitri Gielis - Thu, 2015-08-20 06:03
During my holiday Apress released the second edition of Expert Oracle Application Express.
In the second edition we not only updated the content to APEX 5.0, but also added some new chapters from new authors. In total you get 14 chapters from 14 different authors.
I believe it's a very nice book with great content and it's for a good cause. Roel describes it very nicely in his blog post "the goal is to raise as much money as we can for the funds that support the relatives of two of the greatest Oracle APEX Development Team members who passed away a few years ago: Carl Backstrom and Scott Spadafore."


You can get it from the Apress website or from Amazon. Happy reading :)

Categories: Development

Virtual Technology Summit – Spotlight on Java

OTN TechBlog - Wed, 2015-08-19 14:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Java Spotlight: Cloud, IoT and Java 8 - Java is an integral part of any cutting-edge IT project. In this VTS, you will get a deep understanding of Cloud-enabled JavaScript stored procedures with Java 8 Nashorn, the Java 8 Date and Time API, and applications connecting devices with the cloud. There are three sessions in the Java track:

  • Connecting Devices to the Cloud - Taking healthcare as an example, this session will demonstrate using a mobile phone and a smart watch in combination with a Java-based gateway, iBeacons and other sensors to monitor the activity of elderly people. With the help of an IoT cloud service this data can be analyzed to detect situations that might be critical (illness, bone fracture, etc.). If such a case is detected, the cloud service can trigger enterprise applications. With this approach it might be possible to connect such a system to existing healthcare applications. This session will give you an idea of how you can combine existing technologies to do something useful and help elderly people in case of an emergency.
  • Java SE 8 - Date and Time - The Date and Time API introduced in Java 8 provides a comprehensive and standardized package for date and time implementation. In this session you will learn the best ways to apply this new functionality in your projects.
  • Cloud enabled JavaScript Stored Procedures with Java 8 Nashorn - JavaScript is one of the most popular programming languages and the natural choice for processing JSON documents. To process millions of JSON documents, would you rather perform the processing in place in the RDBMS, returning only the result sets to the invoker, or ship the data to a middle-tier engine? Stored procedures allow processing in the database, thereby avoiding data shipping. However, to be portable across tiers and databases, JavaScript stored procedures need a standard database access API; Java 8 furnishes the Nashorn JavaScript engine, which allows JDBC calls in JavaScript. For Cloud deployment, JavaScript stored procedures may be invoked through RESTful Web Services, turning these into Cloud data services.
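As a minimal, self-contained illustration of the point in the last session description - that JavaScript maps JSON documents straight onto native objects with no mapping layer - here is a sketch that filters and aggregates a batch of JSON documents in plain JavaScript (the document shape and field names are invented for the example; no Nashorn or database is involved):

```javascript
// Filter a batch of JSON order documents and sum a field, entirely in JavaScript.
// The documents and field names here are made up for illustration.
const docs = [
  '{"product": "widget", "qty": 12}',
  '{"product": "gadget", "qty": 3}',
  '{"product": "widget", "qty": 7}'
];

// JSON.parse turns each document into a native JavaScript object,
// so filtering and aggregation are just ordinary array operations.
const widgetQty = docs
  .map(s => JSON.parse(s))
  .filter(d => d.product === 'widget')
  .reduce((sum, d) => sum + d.qty, 0);

console.log(widgetQty); // 19
```

In a Nashorn-based stored procedure the same document-processing logic would run inside the database, next to the data, with only the aggregate result shipped back to the invoker.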
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community - Rewards & Recognition FAQ.

Shrink/Grow Exadata diskgroups

Syed Jaffar - Wed, 2015-08-19 02:42
One of the important tasks that I foresee after an initial Exadata deployment, mostly prior to the DB going into production, is balancing/resizing the Exadata diskgroups (DATA & RECO). Generally, the space is distributed as 80(DATA)/20(RECO) or 40(DATA)/60(RECO), depending on the database backup option you choose while deploying. In one of our Exadata setups, we didn't need such a huge RECO size, hence we shrunk the RECO size and increased the DATA diskgroup size. I am pretty sure many of you have done, or want to do, the same. However, shrinking/rebalancing the space is not like a normal ASM resize operation; on Exadata it needs some special consideration and tasks. The following Oracle Support notes have good explanations and examples for achieving the task.

Example of dropping RECO diskgroup and adding the space to DATA diskgroup (Doc ID 1905972.1)
NOTE:1465230.1 - Resizing Grid Disks in Exadata: Example of Recreating RECO Grid Disks in a Rolling Manner
How to increase ASM disks size in Exadata with free space in cell disks (Doc ID 1684112.1)
Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)



Virtual Technology Summit - Spotlight on Database

OTN TechBlog - Tue, 2015-08-18 14:00

Register now for OTN's new Virtual Technology Summit - September 16, 2015. Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

Database Spotlight: Develop, Deploy and Manage Database Applications in the Oracle Cloud - Oracle has delivered the most comprehensive and powerful platform for deploying Cloud-based applications and services. From Infrastructure as a Service to Platform as a Service, Oracle delivers a fully integrated cloud - platform and applications. This track provides an in-depth look at Oracle Database Cloud Services and the enabling technologies for developing, deploying and managing applications in the Oracle Cloud. There are three sessions in the database track:

  • Using Oracle SQL Developer and Oracle REST Data Services to enable the Oracle Cloud - This session presents the latest capabilities in Oracle's popular SQL Developer and ORDS tools for database application development and deployment in the cloud. See how to clone and migrate data between Oracle cloud and on-premise implementations and other powerful techniques for developing applications for the cloud, in the cloud.
  • Developing APEX 5.0 Mobile Applications in the Oracle Cloud - This session will walk through the capabilities for rapidly building and deploying responsive web applications using APEX 5.0 showing Oracle Developer Cloud Services in action.
  • How to Deploy and Monitor Cloud Database Applications Using Oracle Enterprise Manager 12c Cloud Control - This session provides an overview of Oracle Database Cloud Services and the management capabilities of Oracle Enterprise Manager 12c Cloud Control. See how to manage Oracle cloud deployment, provisioning, security and application access and monitoring.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

Convert CSV file to Apache Parquet... with Drill

Tugdual Grall - Tue, 2015-08-18 09:44
Read this article on my new blog. A very common use case when working with Hadoop is to store and query simple files (CSV, TSV, ...), then, to get better performance and more efficient storage, convert these files into a more efficient format, for example Apache Parquet. Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Apache Parquet has the following ...

It’s all about the Cloud!

OTN TechBlog - Mon, 2015-08-17 14:00

Register now for OTN's next Virtual Technology Summit:

It's all about the Cloud! Hear from Oracle ACEs, Java Champions and Oracle Product Experts, as they share their insights and expertise through Hands-on-Labs, highly technical presentations and demos that enable you to master the skills you need to meet today's IT challenges. Chat live with folks and ask your questions as you attend sessions.

This interactive, online event offers four technical tracks, each with a unique focus on specific tools, technologies, best practices and tips:

  • Java: Java 8 in Action - Java 8 has been out for over a year now. But do you really know and use Java 8 to its full potential? In this Virtual Technology Summit, learn about Java SE cloud applications, Cloud-enabled JavaScript stored procedures with Java 8 Nashorn and the Java 8 Date and Time API.
  • Operating Systems, Virtualization Technologies and Hardware: Implementing Your Cloud - Most IT organizations have roadmaps for cloud infrastructure. Most vendors have some sort of story as to how they can get you to the cloud. Oracle specifically has committed itself to the idea that you can run your applications identically in our public cloud and your private cloud. The question is: How? In this track we'll roll up our sleeves and show you how to implement your clouds using Oracle hardware, software, and best practices.
  • Database: Develop, Deploy and Manage Database Applications in the Oracle Cloud - Oracle has delivered the most comprehensive and powerful platform for deploying Cloud-based applications and services. From Infrastructure as a Service to Platform as a Service, Oracle delivers a fully integrated cloud - platform and applications. This track provides an in-depth look at Oracle Database Cloud Services and the enabling technologies for developing, deploying and managing applications in the Oracle Cloud.
  • Middleware: Middleware in the Cloud: PaaS Gets Real - The middleware track in the Fall 2015 edition of the OTN Virtual Technology Summit puts the spotlight on Oracle's Mobile Cloud Service (MCS), Process Cloud Service (PCS), and Java Cloud Service (JCS), three of the more than two dozen new services available on the Oracle Cloud Platform. In each of the three deep-dive sessions a recognized expert from the OTN community walks you through a technical how-to demonstrating how you can use these PaaS services, and compares each to its on-premise counterpart. PaaS services loom large in the future for developers and architects, so if you're developing enterprise mobile applications, or working with Oracle BPM or WebLogic, you'll want to make sure these #OTNVTS sessions are on your calendar.
Register today!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are already a member of the Oracle Technology Network Community, you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

How to install node-oracledb on Windows

Christopher Jones - Mon, 2015-08-17 03:19

Bill Christo, one of our valued community members, has created a great YouTube video showing how to install node-oracledb on Windows.

The official installation manual is also handy. See Node-oracledb Installation on Windows.

Update: also see Bill's article on Installing node-oracledb on Microsoft Windows on OTN.

Node-oracledb goes 1.0: The Node.js add-on for Oracle Database

Christopher Jones - Mon, 2015-08-17 02:12
Announcement

Today Oracle released node-oracledb 1.0, the Node.js add-on to enable high performance Oracle Database applications.

Node-oracledb is available from npmjs.com and GitHub.

Each month or so since our first code bundle was pushed to GitHub earlier this year, we have released a node-oracledb update with new functionality. The adoption has been exciting, with important applications already in production. This is our eighth release of node-oracledb and promises to be our best received so far.

The node-oracledb 1.0 add-on for Node.js supports standard and advanced features:

Oracle enhances, maintains and supports node-oracledb via open source channels (i.e. GitHub), similar to Oracle Database drivers for other open source languages. The add-on is under the Apache 2.0 license.

Where to get node-oracledb

The Oracle Technology Network Node.js Developer Center has all the links and information you need to start using node-oracledb.

To jump start, follow these instructions to install node-oracledb.

Changes since the previous release

The major changes in node-oracledb 1.0 since the previous release are:

  • The Stream interface for CLOB and BLOB types was implemented, adding support for LOB queries, inserts, and PL/SQL LOB bind variables. As well as being needed for working with many legacy schemas, having LOB support lets application developers use Oracle Database 12.1.0.2's JSON data type without running into the length limitation of VARCHAR2 storage.

    Customers have been contacting me what seems like every day, asking when LOB support would be available and pleading for early access. Here it is, and it looks great. We'll continue to run load tests, benchmark it, and enhance it.

    To see how to use LOBs with node-oracledb, check out the node-oracledb Lob documentation and LOB examples.

    General information about Oracle Database JSON support can be found in the documentation or on the JSON team blog.

  • Added Oracledb.fetchAsString and a new execute() property fetchInfo to allow numbers, dates, and ROWIDs to be fetched as strings. These features, available at the application level (for dates and numbers), and per-statement level (for dates, numbers and ROWIDs), can help overcome JavaScript limitations of representation and conversion.
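The JavaScript limitation mentioned in that item is easy to demonstrate without a database: JavaScript has a single number type (an IEEE-754 double), so integers above 2^53 and many decimal fractions cannot be represented exactly, which is why fetching Oracle NUMBER columns as strings can be necessary:

```javascript
// Two reasons to fetch Oracle NUMBERs as strings rather than JS numbers:

// 1. Integers above Number.MAX_SAFE_INTEGER (2^53 - 1) silently lose precision.
const big = 9007199254740993;          // one more than 2^53
console.log(big === 9007199254740992); // true - the literal was rounded

// 2. Decimal fractions are stored in binary, so exact decimal arithmetic fails.
console.log(0.1 + 0.2 === 0.3);        // false

// Fetched as a string, the exact database value survives unchanged:
const exact = '9007199254740993';
console.log(exact);                     // "9007199254740993"
```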

  • Added support for binding DATE, TIMESTAMP, and TIMESTAMP WITH LOCAL TIME ZONE as DATE in DML RETURNING (aka RETURNING INTO) clauses. You can also bind these types as STRING.

  • The internal Oracle client character set is now always set to AL32UTF8. There's no longer a need to set it externally via NLS_LANG. A related bug with multibyte data reported by users was fixed by correcting the allocation of some internal buffers. Overall the NLS experience is much more consistent.

  • The database credentials for the test suite and examples can now be set via environment variables, a small change to help testing in automatically provisioned environments. Our test suite already has great coverage numbers, and will continue to be enhanced in future releases.
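Reading credentials from environment variables in Node.js is a one-liner per setting. A sketch of the pattern (the variable names and fallback values here are assumptions for illustration, not necessarily the ones the node-oracledb test suite uses):

```javascript
// Build a connection config from environment variables, with fallbacks.
// Variable names and defaults are illustrative only.
const dbConfig = {
  user:          process.env.NODE_ORACLEDB_USER             || 'hr',
  password:      process.env.NODE_ORACLEDB_PASSWORD         || 'welcome',
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || 'localhost/orcl'
};

console.log(dbConfig.connectString);
```

A provisioning script can then export the variables before running the tests, with no source edits needed per environment.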

  • Bug fixes to node-oracledb. These are listed in the CHANGELOG.

What next?

Being an open source project in a dynamically changing environment, our statement of direction has been a brief, flexible goal: We are actively working on supporting Oracle Database features, and on functionality requests from users involved in the project. Our priority list is re-evaluated for each point release.

So now that we have version 1.0, what next? This is just the start. There are plenty of important and interesting tasks in front of us. We will begin with a review of the project, from our development processes and the driver functionality right through to distribution. This review will determine our next tasks. Hearing from users is crucial for prioritization, so don't hesitate to comment on GitHub.

Node.js is undergoing a surge of change at the moment, with the io.js re-merger and the formation of the Node.js Foundation. As the merged Node.js code base stabilizes and the Foundation's LTS plans solidify, we will be able to be more formal about node-oracledb's schedule. We will work with Node.js and with partners to bring you the best experience. (On a technical note, the V2 release of the compatibility layer NAN was made in the last few days, too late for us to incorporate into node-oracledb 1.0. So support for the latest, bleeding-edge io.js will come in a future node-oracledb version.)

Let me wrap up this announcement by thanking the growing node-oracledb community, particularly those who have contributed to node-oracledb with code, suggestions and discussions.

Parallel Projection

Randolf Geist - Sun, 2015-08-16 09:09
A recent case at a client reminded me of something that isn't really new but is not so well known - by default, Oracle performs evaluation at the latest possible point in the execution plan.

So if you happen to have expressions in the projection of a simple SQL statement that runs in parallel, it might be counter-intuitive that by default Oracle won't evaluate the projection in the Parallel Slaves but in the Query Coordinator - even if it were technically possible - because the latest possible point is the SELECT operation with ID = 0 of the plan, which is always performed by the Query Coordinator.

Of course, if you make use of expressions that can't be evaluated in parallel or aren't implemented for parallel evaluation, then there is no other choice than doing this in the Query Coordinator.

The specific case in question was a generic export functionality that allowed exporting report results to some CSV or Excel like format, and some of these reports had a lot of rows and complex - in that case CPU intensive - expressions in their projection clause.

When looking at the run time profile of such an export query it became obvious that although it was a (very simple) parallel plan, all of the time was spent in the Query Coordinator, effectively turning this at runtime into a serial execution.

This effect can be reproduced very easily:

create table t_1
compress
as
select /*+ use_nl(a b) */
       rownum as id
     , rpad('x', 100) as filler
from
       (select /*+ cardinality(1e5) */ * from dual
        connect by level <= 1e5) a
     , (select /*+ cardinality(20) */ * from dual
        connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel cache;

-- Run some CPU intensive expressions in the projection
-- of a simple parallel Full Table Scan
set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
;

-- The plan is clearly parallel
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 192M| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 4 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------

-- But the runtime profile looks more serial
-- although the Parallel Slaves get used to run the Full Table Scan
-- All time spent in the operation ID = 0
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| ReadB | ReadReq | Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | 3 | 136 | 120 | #################### | 1:sqlplus.exe(120)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | ################################ | @@@@@@@@@@@@@@@@@@@ ( 98%) | ON CPU(120) |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | | | 119 | 1 | 1 | # | 1:sqlplus.exe(1)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | | ( .8%) | ON CPU(1) |
| 2 | 1 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2000K | | | 66 | 11 | 2 | ## | 2:P00B(1)[508K],P00A(1)[490K],P008(0)[505K],P009(0)[497K],sqlplus.exe(0)[0] | | (1.6%) | PX qref latch(2) |
| 3 | 2 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
|* 4 | 3 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 23M | 74 | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Fortunately there is a simple and straightforward way to make use of the Parallel Slaves for evaluating projection expressions that can be evaluated in parallel - simply add a suitable NO_MERGE hint for the query block whose projection you want evaluated in the Parallel Slaves.

If you don't want the unmerged view to have side effects on the overall plan shape, you can always wrap the original query in an outer SELECT and prevent merging of the now inner query block. There seems to be a rule that the projection of a view always gets evaluated at the VIEW operator, and if we check the execution plan we can see that the VIEW operator is marked parallel:

set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select /*+ no_merge(x) */ * from (
select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
) x
;

-- View operator is marked parallel
-- This is where the projection clause of the VIEW will be evaluated
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 11G| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
| 4 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
---------------------------------------------------------------------------------------------------------------

-- Runtime profile now shows effective usage of Parallel Slaves
-- for doing the CPU intensive work
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH| Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | | | 0:sqlplus.exe(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | | | |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | 17 | 63 | 10 | # ## # #### | 1:sqlplus.exe(10)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | #### | * (5.6%) | resmgr:cpu quantum(10) |
| 2 | 1 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 2000K | 5 | 61 | 10 | ## # ## ## ## # | 3:P002(5)[544K],P001(4)[487K],P000(1)[535K],P003(0)[434K],sqlplus.exe(0)[0] | # | (5.6%) | ON CPU(7),resmgr:cpu quantum(3) |
| 3 | 2 | VIEW | | 4 | 2000K | 2 | 82 | 69 | #################### | 4:P003(42)[434K],P001(35)[487K],P000(26)[535K],P002(22)[544K],sqlplus.exe(0)[0] | ############ | @@@@@@@@@@@@@@@@@@@ ( 70%) | ON CPU(125) |
| 4 | 3 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P002(0)[544K],P000(0)[535K],P001(0)[487K],P003(0)[434K],sqlplus.exe(0)[0] | | | |
|* 5 | 4 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 3 | 78 | 29 | ###### ####### # ### | 4:P000(11)[535K],P002(8)[544K],P001(8)[487K],P003(7)[434K],sqlplus.exe(0)[0] | ### | ***** ( 19%) | resmgr:cpu quantum(30),ON CPU(4) |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
At runtime the duration of the query now gets reduced significantly and we can see the Parallel Slaves getting used when the VIEW operator gets evaluated. Although the overall CPU time used is similar to the previous example, the duration of the query execution is shorter since this CPU time is now spent in parallel in the slaves instead of in the Query Coordinator.

Summary
By default Oracle performs evaluation at the latest possible point of the execution plan. Sometimes you can improve runtime by actively influencing where the projection will be evaluated - by preventing view merging and introducing a VIEW operator that will be used to evaluate the projection clause.

The optimizer so far doesn't seem to incorporate such possibilities in its evaluation of possible plan shapes, so this is something you need to do manually, up to and including Oracle 12c (version 12.1.0.2 as of the time of writing).

IntelliJ IDEA 14.1.4 adds Spring Initializr

Pas Apicella - Fri, 2015-08-14 21:57
Just upgraded to IntelliJ IDEA 14.1.4 and found that the Spring Initializr web page for quickly creating Spring Boot applications has been added to the New Project dialog. The web site I normally use to bootstrap new Spring Boot applications, as follows, is now part of IntelliJ IDEA, which is great.

http://start.spring.io/

Some screen shots of this.





Categories: Fusion Middleware

Limit length of listagg

Mike Moore - Fri, 2015-08-14 12:23
SQL> select student_name, course_id from studentx order by student_name

STUDENT_NAME COURSE_ID
------------ ---------
Chris Jones  A102     
Chris Jones  C102     
Chris Jones  C102     
Chris Jones  A102     
Chris Jones  A103     
Chris Jones  A103     
Joe Rogers   B103     
Joe Rogers   A222     
Joe Rogers   A222     
Kathy Smith  B102     
Kathy Smith  A102     
Kathy Smith  A103     
Kathy Smith  B102     
Kathy Smith  A103     
Kathy Smith  A102     
Mark Robert  B103     

16 rows selected.
SQL> WITH x AS
        (SELECT student_name,
                course_id,
                ROW_NUMBER () OVER (PARTITION BY student_name ORDER BY 1) AS grouprownum
           FROM studentx)
  SELECT student_name,
         LISTAGG (CASE WHEN grouprownum < 5 THEN course_id ELSE NULL END, ',')
            WITHIN GROUP (ORDER BY student_name)
            courses
    FROM x
GROUP BY student_name

STUDENT_NAME
------------
COURSES                                                                         
--------------------------------------------------------------------------------
Chris Jones 
A102,A102,C102,C102                                                             
                                                                                
Joe Rogers  
A222,A222,B103                                                                  
                                                                                
Kathy Smith 
A102,A103,B102,B102                                                             
                                                                                
Mark Robert 
B103                                                                            
                                                                                

4 rows selected.
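A related note: the ROW_NUMBER trick above limits the number of aggregated items rather than the string length. On Oracle 12.2 or later (well after this post), LISTAGG can instead truncate its result directly with the ON OVERFLOW clause; a sketch against the same studentx table:

```sql
-- Oracle 12.2+ only: instead of raising ORA-01489 when the aggregated
-- string exceeds the VARCHAR2 limit, truncate it and append '...' plus
-- the count of omitted values.
SELECT student_name,
       LISTAGG (course_id, ',' ON OVERFLOW TRUNCATE '...' WITH COUNT)
          WITHIN GROUP (ORDER BY course_id) AS courses
  FROM studentx
GROUP BY student_name;
```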

OAM PS3 State-of-the-art

Frank van Bortel - Fri, 2015-08-14 07:25
An attempt to run OAM 11g Release 2 PS3 on Oracle Linux 6.7, WLS 12c, RDBMS 12c. Install Linux: pretty straightforward. Used Oracle Linux 6.7, as 7 is not certified. Create a 200MB /boot and an LVM for /, both ext4. Install just the server. Deselect *all* options, just the X system and X legacy support (the OUI needs it). Some 566 packages will get installed. Make sure it boots and the network starts. Frank

WebLogic Server 12.1.3 Developer Zip - Update 3 Posted

Steve Button - Thu, 2015-08-13 18:48
An update has just been posted on OTN for the WebLogic Server 12.1.3 Developer Zip distribution.

WebLogic Server 12.1.3 Developer Zip Update 3 is built with the fixes from the WebLogic Server 12.1.3.0.4 Patch Set Update, providing developers with access to the latest set of fixes available in the corresponding production release.

See the download page for access to the update:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/wls1213_dev_update3.zip

The Update 3 README provides details of what has been included:

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/README_WIN_UP3.txt


Oracle Priority Support Infogram for 13-AUG-2015

Oracle Infogram - Thu, 2015-08-13 11:00

RDBMS


Exalogic


Data Warehousing

System Statistics About DOP Downgrades, from The Data Warehouse Insider.

Solaris


Stateful Packet Inspection, from the Solaris Firewall blog.

Authentication


RDBMS Optimizer


Java

Clash Of Slashes ( / versus \ ), from Brewing tests with CAFE BABE.

Two good postings from The Java Source:

Hyperion

Several patch set announcements from Business Analytics - Proactive Support:

EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

How To Use The SQL Developer Export Connections With Passwords Function

Complete IT Professional - Thu, 2015-08-13 06:00

Oracle's SQL Developer tool lets you export a list of the connections you have created. This is great for saving your team time. Learn how you can do that in this article.

Why Export Connections?

First of all, why would you want to export your connections?

There are a few reasons I can think of.

  1. Keep a separate file containing your connections in case your computer crashes
  2. Save time setting up a new computer
  3. Share common connections with the rest of the team so they can easily import and use them

Let’s take a look at how to export connections in SQL Developer.

 

SQL Developer Export Connections Process

I’m going to assume you have at least one connection created in SQL Developer.

If you don’t, you can read my guide or watch my video on how to set up a connection in SQL Developer.

In the image here, I have three.

connection 01

Now, to find the SQL Developer Export Connections menu option, right click on the Connections item in the tree, and select Export Connections…

connection 02

The Export Connections wizard will appear.

connections 03

Select the connections you want to export by clicking the checkbox next to each item. You can select all items by clicking the top-level Connections item, which is what I have done.

Then, click Next.

You’ll then be asked for an output file.

connections 04

You can enter the full path name for the file (location and filename), or click on Browse.

connections 05

Find the location where you want to save your connections file, and enter a filename. Connections are saved as an XML file, and I’ll show you an example later in this article.

Click Save.

Now, click Next.

You’ll be asked if you want to encrypt passwords or remove passwords.

connections 06

This is an important step, as you don’t want passwords stored as free text in your exported file. You have two options:

  • Encrypt all passwords with a key. You enter an encryption key, which you’ll need to enter again in the Verify Key box.
  • Remove all passwords from the exported connections. You’ll need to re-enter them when you go to use the connections again.

I personally prefer the encryption option, as it saves time. If you’re OK with this, select this option, enter a key in both boxes, and click Next.

The Summary is shown. Click Finish.

connections 07

Your connections file is created, which is an XML file listing all of your connections.
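For illustration only, the exported file looks roughly like the following (element names and structure are approximate and vary between SQL Developer versions; the connection name and host are made up, and the password entry is either encrypted or absent depending on the option you chose):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Approximate sketch of an exported connections file -->
<References>
  <Reference name="HR_LOCAL" className="oracle.jdeveloper.db.adapter.DatabaseProvider">
    <RefAddresses>
      <StringRefAddr addrType="hostname"><Contents>localhost</Contents></StringRefAddr>
      <StringRefAddr addrType="port"><Contents>1521</Contents></StringRefAddr>
      <StringRefAddr addrType="user"><Contents>hr</Contents></StringRefAddr>
      <StringRefAddr addrType="password"><Contents>(encrypted value)</Contents></StringRefAddr>
    </RefAddresses>
  </Reference>
</References>
```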

connections 08

So, that’s how you use the SQL Developer Export Connections function.

Lastly, if you enjoy the information and career advice I’ve been providing, sign up to my newsletter below to stay up-to-date on my articles. You’ll also receive a fantastic bonus. Thanks!

Categories: Development

ODTUG APEX Gaming Competition 2015

Joel Kallman - Thu, 2015-08-13 03:11
If you're not aware, there is an APEX Gaming Competition which is already underway, and which is sponsored by the Oracle Development Tools User Group (ODTUG).  For those who don't know what ODTUG is, it is an independent user group and community of professionals, with a primary focus on the tools, products, and frameworks to build solutions and applications with the Oracle technology stack.  Although ODTUG is based in the USA, they have members (thousands of them) around the globe.

The purpose of the APEX Gaming Competition is simply to show off what you can do with APEX, and instead of crafting a business solution or transactional application, the goal here is a bit more whimsical and fun.  The solution can be desktop or mobile or both.  Personally, if I had the time, I'd like to write a blackjack simulator and try and improve upon the basic strategy.  I'm not sure that could be classified as a "game", but it would enable me to go to Las Vegas and clean house!

If you're looking to make a name for yourself in the Oracle community, one way to do it is through ODTUG.  And if you're looking to make a name for yourself in the APEX community, one way to stand out is through the APEX Gaming Competition.  Just ask Robert Schaefer from Köln, Germany.  Robert won the APEX Theming Competition in 2014, and now everyone in the APEX Community knows who Robert is!  I've actually had the good fortune of meeting Robert in person - twice!

Yesterday I listened to the APEX Talkshow podcast with Jürgen Schuster and Shakeeb Rahman (Jürgen is a luminary in the APEX community and Shakeeb is on the Oracle APEX development team, he is the creator of the Universal Theme).  And in this podcast, I was reminded how Shakeeb's first introduction to Oracle was...by winning a competition, when he was a student!  You simply never know what the future holds.  So - whether you're a student or a professional, whether you're in Ireland or the Ivory Coast, this is an opportunity for you to shine in front of this wonderful global APEX Community.  Submissions close in 2 months, so hurry!  Go to http://competition.odtug.com

Cloudera certifies InfoCaptor for Big Data analytics on Hadoop

Nilesh Jethwa - Wed, 2015-08-12 14:12

Rudrasoft, the software company that specializes in data analytics dashboard solutions, announced today that it has released an updated version of its popular InfoCaptor software, which includes integration with Cloudera Enterprise. The integration takes advantage of Impala and Apache Hive for analytics.

“Our clients are increasingly looking to adopt Hadoop for their data storage and analytics requirements and their common concern is the lack of an economical web-based platform that works with their traditional data warehouses, RDBMS and with Cloudera Enterprise,”

The Cloudera-certified InfoCaptor adds native Impala functionality within its Visualizer, so users can leverage date/time functions for date-hierarchy visualizations and time-series plots, and use all of the advanced hierarchical visualizations natively on Cloudera Enterprise.

“Impala is the fastest SQL engine on Hadoop and InfoCaptor can render millions of data points into beautiful visualizations in just a blink of an eye,” said Nilesh Jethwa [founder]. “This is a great promise for the big data world and affordable analytics with sub-second response time, finally CEOs and CIOs across industries can truly dream of cultivating a data driven culture and make it a reality.”

“Cloudera welcomes InfoCaptor as a certified partner for data analytics and visualization. InfoCaptor delivers self-service BI and analytics to data analysts and business users in enterprise organizations, enabling more users to mine and search for data that uncovers valuable business insights and maximizes value from an enterprise data hub,” said Tim Stevens, vice president of Business and Corporate Development at Cloudera.

InfoCaptor is Enterprise Business Analytics and Dashboard software meant for:

  • Data Discovery
  • Visualizations
  • Adhoc Reports
  • Dashboards

InfoCaptor brings the power of d3js.org visualizations and the simplicity of Microsoft Excel and puts it in the hands of a non-technical user. This same user can build Circle Pack, Chord, Cluster and Treemap/Sunburst visualizations on top of Cloudera Enterprise using simple drag and drop operations.
InfoCaptor can connect to data from virtually any source, including Microsoft Excel, Microsoft Access, and SQL databases such as Oracle, SQL Server, MySQL, SQLite, PostgreSQL, IBM DB2, and now Impala and Hive. It supports both JDBC and ODBC protocols.

InfoCaptor also serves as a powerful visualization software and it includes over 30 vector-based map Visualizations, close to 40 types of chart visualizations, over 100 flowchart icons and other HTML widgets. InfoCaptor also provides a free style dashboard editor that allows quick dashboard mockups and prototyping. With this ability users can place widgets directly anywhere on the page and use flowchart style icons and connectors for annotation and storytelling.

Users can download the application and install it within their firewall.
Alternatively, a cloud offering is available at https://my.infocaptor.com.

 

InfoCaptor is very modestly priced Analytics and Visualization software:
  • Personal Dashboard License can be purchased for $149/year
  • Server license starts at $599/year
  • Cloud based subscription starts at $29/user/month

Visit http://www.infocaptor.com or email bigdata(at)infocaptor(dot)com for Demo and Price list

Oracle SOA/BPM 12c: Propagation of Flow Instance Title and Instance Abortion

Jan Kettenis - Wed, 2015-08-12 13:23
Recently I wrote this posting regarding an improvement for setting the title of a flow instance in Oracle BPEL and BPMN 12c. In this posting I will discuss two related improvements that come with SOA/BPM Suite 12c: flow instance abortion is automatically propagated from one instance to the other, as is the flow instance title. Or, more precisely, for every child instance the initiating instance is shown together with its name.

Since 12c the notion of a composite instance is superseded by that of a flow instance, which refers to the complete chain of calls starting from one main instance to any other composite, and further. Every flow has a unique flowId, which is automatically propagated from one instance to the other.

Propagation of Flow Instance Title
This propagation does not only apply to the flowId, but also to the flowInstanceTitle, meaning that if you set the flowInstanceTitle on the main instance, all called composites automatically get the same title.
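As a sketch of what that looks like in practice (based on the approach in the earlier posting; the XPath function name and namespace prefix are assumptions here, so verify them against your environment), the title only has to be set once, in the main BPEL process:

```xml
<!-- Assumed sketch: set the flowInstanceTitle once in the main process;
     12c then propagates it to every child instance automatically.
     The function name, prefix, and variable names are assumptions. -->
<assign name="SetFlowInstanceTitle">
  <copy>
    <from>ora:setFlowInstanceTitle(concat('Order ', $inputVariable.payload/ns1:orderId))</from>
    <to>$titleVariable</to>
  </copy>
</assign>
```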

So if the flowInstanceTitle is set on the main instance:


Then you will automatically see it for every child instance as well:


Trust but verify is my motto, so I tried it for a couple of combinations of composite types calling each other, including:
  • BPM calling BPEL calling another BPEL
  • BPM initiating another composite (with a Mediator and BPEL) via an Event
  • Mediator calling BPEL

Flow Instance Abortion
When you abort the instance of the parent, then all child instances are aborted as well.

In the following flow trace you see a main BPM process that kicks off:
  1. A (fire&forget) BPEL process
  2. Throws an Event that is picked up by a Mediator
  3. Calls another BPM process
  4. Schedules a human task

In turn, the BPEL process in step 1 kicks off another BPEL process (request/response). Finally, the BPM process in step 3 also has a human task:


Once the instance of the main process is aborted, all child instances are automatically aborted as well, including all Human Tasks and composites that are started indirectly.


The flip side of the coin is that you will not be able to abort any individual child instance. When you go to a child composite, select a particular child instance, and abort it, the whole flow is aborted. That is different from how it worked in 11g, and I can imagine this will not always meet your requirements.

Another thing I find strange is that the Mediator that is started by means of an event is aborted even when the consistency level is set to 'guaranteed' (which means that event delivery happens in a local instead of a global transaction). Even though an instance is aborted, you may have a requirement to process that event.

But all in all, it is a lot easier to get rid of a chain of process instances than it was with 11g!

Pages

Subscribe to Oracle FAQ aggregator