
Feed aggregator

A Taste of FinTech: Bitterballen and Banking in the Cloud with Profource

Usable Apps - Wed, 2016-04-20 07:05

Financial technology (#FinTech) innovation and the future of banking are hot topics. If you wondered for one tiny moment how the importance of financial applications and the cloud fit into the Oracle Cloud User Experience #PaaS4SaaS enablement that Oracle Applications User Experience (OAUX) offers Oracle Partners, well, here's one rocking example from EMEA!

Recently, we (OAUX) held a hands-on enablement event at Oracle Nederland in Utrecht to work alongside one of our leading EMEA partners, Profource B.V., to design and build a simplified UI Banking Cloud solution using the Cloud UX Rapid Development Kit (RDK) for Release 10. This event was the culmination of upfront design and development exploration done collaboratively between the two teams online.

Profource and OAUX Teams in Oracle Nederland

Part of the Profource team with the OAUX enablers: (L-R): Julian Orr, Lancy Silveira, Ronald van Herpen, Martijn Rijpkema, Pam Koertshuis, and Ultan Ó Broin

Held over 2.5 days, the event kept the Microsoft PowerPoint Karaoke to a minimum as design and development teams stormed and formed around the solution requirements and worked fast to iterate designs and build a modern banking cloud solution that was then deployed as a service to the Profource cloud. A great success!

Banking Cloud Simplified UI Launch Page Wireframe

Part of the Banking Cloud simplified UI launch experience wireframe. Wireframing the solution using the RDK tools, garnering agreement, and transferring the design to the development team made for rapid, agile innovation and iteration, right through to deployment.

Banking Cloud Statements Wireframe

Simply that simplified UI again: This time for an Oracle ERP Cloud Release 10 solution. This is a wireframe view of part of the user experience flow, built using the RDK's Oracle ADF page templates and declarative components, the Oracle Alta UI design system, and the agile Learn-Design-Build approach of the RDK's guidance.

You can read more about the Banking Cloud event on Profource consultant Pam Koertshuis's (@pkoertshuis) blog: Simplified UI for PAAS 4 SAAS solution.

Profource consultant, Hakan Biroglu (@hakanbiroglu), said about the event:

"We [the Profource team] have learned a lot. Not just on a technical level, on how to reuse your RDK, but also on an architectural and design level. Your workshop forced us to rethink our UX solution and to evaluate every item on every page in every flow, 'Why is this needed? What information does it provide me? What does it trigger me to do?'" 


If you're heading to OBUG's APPSCONNECTed16 event in Arnhem, check out the sessions about the Profource Banking Cloud and their other Release 10 cloud solutions (HCM, PPM, and Resource Management) and about how you can use the RDK and OAUX enablement to do the same for your business.

Many thanks to the Profource team and to Oracle Nederland.  

If you are an Oracle Partner who wants to get ahead in the Oracle Cloud, you can contact us through the usual OAUX channels or your Oracle PartnerNetwork contacts.

Oracle Usable Apps Cards and Sketches

Oracle Applications Cloud User Experience: Enablement for partners from design to deployment. It starts with a sketch . . . .

Oh, the bitterballen?

Since you asked...

Bitterballen

Bitterballen: An Oracle Nederland culinary delight that somehow always features as part of OAUX enablement events in Utrecht. 

Column Groups

Jonathan Lewis - Wed, 2016-04-20 02:07

Patrick Jolliffe alerted the Oracle-L list to a problem that appears when you combine fixed length character columns (i.e. char() or nchar())  with column group statistics. The underlying cause of the problem is the “blank padding” semantics that Oracle uses by default to compare varchar2 with char, so I’ll start with a little demo of that. First some sample data:


rem     Script:         col_group_char_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2016

execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(chr(trunc(dbms_random.value(1,6))+64) as char(1))  c1,
        cast(chr(trunc(dbms_random.value(1,6))+64) as char(2))  c2,
        cast('X' as varchar2(2))                                v2
from
        generator       v1
where
        rownum <= 5 * 5 * 10
;

insert into t1(c1, c2, v2)
select  'X', 'X', 'X'
from    t1
;

update t1 set v2 = c2;
commit;


The little demos I’m going to report here don’t use all the data in this table – there are several other tests in the script that I won’t be reporting – so I’ll just point out that there are 500 rows in the table, half of them have ‘X’ in all three columns, and half of them have a uniform distribution of the letters ‘A’ to ‘E’ in every column.

  • Column c1 is declared as char(1) – so it will hold the data exactly as it was inserted by the script.
  • Column c2 is declared as char(2) – so even though the script apparently inserts a character string of length 1, this will be padded with a space to two characters before being stored.
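A minimal stand-alone check of that blank-padding rule (an aside, not part of the original demo script): when a char value is compared with a shorter text literal, the literal is blank-padded before the comparison, so both expressions below report 'EQUAL'.

select
        case when cast('X' as char(2)) = 'X'  then 'EQUAL' else 'NOT EQUAL' end unpadded_literal,
        case when cast('X' as char(2)) = 'X ' then 'EQUAL' else 'NOT EQUAL' end padded_literal
from    dual
;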

Now we can create some stats – in particular a frequency histogram on the c2 column – and check the cardinality estimates for a couple of queries:

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 254'
        );
end;
/

set autotrace traceonly explain

prompt  ==================
prompt  c2 without padding
prompt  ==================

select  *
from    t1
where   c2 = 'X'
;

prompt  ================
prompt  c2 with padding
prompt  ================

select  *
from    t1
where   c2 = 'X '
;

set autotrace off

The first query compares c2 with the single character ‘X’, the second compares it with the two-character string ‘X ‘. But since the comparison is with a char(2) column the optimizer pads the first constant with spaces, and both queries end up predicting the same cardinality:


==================
c2 without padding
==================

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   250 |  2000 |    17   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   250 |  2000 |    17   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='X')

================
c2 with padding
================

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   250 |  2000 |    17   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   250 |  2000 |    17   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='X ')


Note that both queries predict the 250 rows where (we know) c2 = ‘X ‘; even though the predicate sections suggest the queries are looking for different data sets. This IS the expected behaviour.

Now let’s make things more complex – we’ll add the predicate “and c1 = ‘X'” to both queries but we’ll create a column group with histogram on (c1, c2) before checking the plans. Again we expect both versions of the new query to predict the same volume of data and (in fact) to produce a perfect prediction because we have so few rows and so few distinct combinations that we should get a perfect frequency histogram:


begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns (c1, c2) size 254'
        );
end;
/

prompt  ========================
prompt  (c1, c2) without padding
prompt  ========================

select  *
from    t1
where   c1 = 'X' and c2 = 'X'
;

prompt  =====================
prompt  (c1, c2) with padding
prompt  =====================

select  *
from    t1
where   c1 = 'X' and c2 = 'X '
;

And here are the execution plans:

========================
(c1, c2) without padding
========================

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    16 |    17   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    16 |    17   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C1"='X' AND "C2"='X')

=====================
(c1, c2) with padding
=====================

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   250 |  2000 |    17   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   250 |  2000 |    17   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C1"='X' AND "C2"='X ')


If we run the query where the literal is padded with spaces to the correct length (2nd query) then the prediction is correct. But if we haven’t padded the literal the prediction is wrong; the estimate is the one the optimizer would have used for “value not found in histogram”.

I think what’s happening is that the optimizer doesn’t “remember” that the literal is being compared with a char() when making the call to sys_op_combined_hash() that it uses for calculating column group stats so it doesn’t pad the column with spaces before calling the function and, as a consequence, the hashed value isn’t the one it should be using.
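To see the effect directly (again, not part of the original demo; sys_op_combined_hash() is an undocumented internal function, so treat this purely as an illustration), you can compare the hash of the unpadded literal pair with the hash of the padded pair as it is actually stored; the two results differ, which is consistent with the lookup against the column group histogram missing:

select
        sys_op_combined_hash('X', 'X')  hash_unpadded,
        sys_op_combined_hash('X', 'X ') hash_padded
from    dual
;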

I’ve run this test on 11.2.0.4 and 12.1.0.2 – the effects are the same on both versions.

Bottom Line:

Be careful about how you use char() data types in your code, and be especially careful if you think you're going to be creating column group stats involving char() columns – and then remember that 12c may generate column group stats automatically for you. If you use char() columns you will have to ensure that predicates using literal values have those values padded to the full declared length if you want the best possible chance of getting the correct execution plans.
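A follow-up check that isn't part of the test case above: you can list the column groups (extended statistics) on a table – including any that 12c has created automatically – so that you know where char() columns and column group stats coincide. The query assumes the demo table above:

select
        extension_name, extension
from    user_stat_extensions
where   table_name = 'T1'
;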

 


Storage in Amazon S3

Pat Shuff - Wed, 2016-04-20 01:07
To be honest, I am going out on a limb here. I know just enough about Amazon S3 to be dangerous. Most of the reference material that I used came from the Amazon web site or Safari Books, including the S3 Essentials book. The key use cases according to the S3 Essentials book are file hosting, storing data on mobile based applications, static web hosting, video hosting, and data backup. We will look at a couple of these configurations and how to deploy and use them.

With Oracle Storage Cloud Services we had the concept of a container. This container had characteristics like spinning disk, archive, and other features related to ownership and security. Amazon S3 has a similar concept but they call this container a bucket. A bucket can contain nested folders and has properties associated with it. If we look at the AWS Console, we see six types of storage and content delivery services:

  • S3 - block storage in the cloud
  • Cloud Front - content delivery network
  • Elastic File System - fully managed file system for EC2
  • Glacier - tape storage in the cloud
  • Snowball - large scale data transport to get data into and out of S3
  • Storage Gateway - an appliance to reduce latency for S3 storage
We will be focusing on S3 and Glacier. Snowball is a mechanism to transport large amounts of data to and from the Amazon data center. The Storage Gateway is an appliance in a data center to reduce latency and provide quicker access to data stored in S3. We will dive a little deeper into S3 and the Storage Gateway, but not CloudFront or the Elastic File System, in this blog.

We first start with the AWS console and click on the S3 console. We can create a new bucket by clicking on Create Bucket.

When we create a new S3 bucket, we can name it and define which data center we want the storage to be allocated into. We have to be careful when we create a bucket name. The namespace is shared with all users. If we want to create a common name, it will probably be used by someone else and we will see an error in creating the name.

If we look at the properties associated with this storage we can see that we have a variety of options to configure.

  • Permissions
  • Static Web Hosting
  • Logging
  • Events
  • Versioning
  • Lifecycle
  • Cross-Region Replication
  • Tags
  • Requester Pays

Let's go through each of these individually. With Permissions, you have the ability to control who can see, modify, delete, and download the contents of the bucket. Bucket policies can get relatively complex and have a variety of conditions and restrictions applied to them. You can find out more at Detailing Advanced Policies. This feature allows you to restrict who can read content by IP address, access keys, or usernames.

Static web hosting allows you to create a web site based on the files in a container. If you have an index.html, it becomes the basis for accessing all of the other files in this directory. This is both good and bad because you get the basic functionality of a web server but you don't get the configuration and access logs. It has some uses but is limited in how it works. It does make static web page presentation easy because you no longer need an EC2 instance, operating system, or application to host the web site.

Logging allows you to view how, who, and from where files were accessed. You can generate logs to look at access patterns and access locations.

Versioning allows you to keep past copies of files. If a file is edited and changed, previous versions and deltas are tracked. This is a good feature but does cause storage consumption to grow because you never delete older versions of a file but keep the deltas for a fixed amount of history.

Lifecycle allows you to automatically archive files from spinning disk to tape after a fixed amount of time and history of access. If no one has accessed a file in months, it can be written to Glacier for long term lower cost archive.

Cross-Region Replication allows you to replicate blocks between data centers automatically. This allows for high availability in the event that one data center fails or storage at one location is having significant problems.

Tags and Requester Pays allow for charge-back features so that the people who consume resources pay for the download and storage. The person creating the bucket is not charged for usage but has a mechanism to transfer the charges to the person reading the data.

Reading and writing to our newly created bucket requires a user interface or use of the Amazon REST API to transfer files. Amazon does provide a user interface to upload and edit the properties of the files and directories. We recommend using another graphical tool like CloudBerry, or the command line utilities, because this interface is a bit limiting.

This blog entry is significantly different from the one yesterday. Yesterday we started with pricing then got technical. Today we dove straight into the technical and ignored pricing. Let's dive into pricing. The cost of S3 storage is $30/TB/month plus outbound charges. I suggest using the S3 price list and the S3 price calculator to figure pricing. Attached are screen shots of pricing for 120 TB of storage using the calculator and screen shots of the price list.

One thing that we talked about with the Oracle Storage Cloud and have not talked about here is an on-premise virtual machine to reduce latency. Amazon offers this with the AWS Storage Gateway. The key differences between the two products are 1) the AWS Gateway uses iSCSI on the client side to provide storage access to the data center and 2) it costs $125/month/gateway. It solves the same latency problem but does it slightly differently. Unfortunately, we are not going to install and configure this virtual instance and play with it because it requires 8 virtual CPUs, which is more than my laptop will offer.

In summary, this is an initial review of S3 storage with Amazon AWS. We did not dive deep into Glacier or the Storage Gateway. We did not review elastic block services (EBS) because these are typically attached to EC2 instances. It is important to note that the focus of S3 is a little different from that of Oracle Storage Cloud Services, but the two are very similar. Files and directories can be stored in containers and access can be controlled. S3 extends services to provide things like video streaming, static web site hosting, and migrating data to and from tape in the cloud. You can use S3 for backup archives and generic block storage and access it via the REST API or AWS API calls. Products like CloudBerry Explorer and S3 Explorer exist to help translate the human interface to S3 storage calls. The cost for S3 is roughly $30/TB/month with additional charges for outbound data on a per GB basis. Archive storage is roughly $7/TB/month with additional charges for data retrieval and outbound data on a per GB basis. The intent of this blog is not to say that one service is better than the other but to provide resources to help you make your own decisions and decide what works best for your situation and corporation.

Designing PL/SQL Programs: Series home page

Andrew Clarke - Wed, 2016-04-20 00:57
Designing PL/SQL Programs is a succession of articles which I publish in a nonlinear fashion. Eventually it will evolve into a coherent series. In the meantime this page serves as a map and navigation aid. I will add articles to it as and when I publish them.
Introduction
  • Designing PL/SQL Programs
  • It's all about the interface
Principles and Patterns
  • Introducing the SOLID principles
  • Introducing the RCCASS principles
  • Three more principles
  • The Dependency Inversion Principle: a practical example
  • Working with the Interface Segregation Principle
Software Architecture
  • The importance of cohesion
Interface design
Tools and Techniques

The importance of cohesion

Andrew Clarke - Wed, 2016-04-20 00:56
"Come on, come on, let's stick together" - Bryan Ferry

There's more to PL/SQL programs than packages, but most of our code will live in packages. The PL/SQL Reference offers the following benefits of organising our code into packages:

Modularity - we encapsulate logically related components into an easy to understand structure.

Easier Application Design - we can start with the interface in the package specification and code the implementation later.

Hidden Implementation Details - the package body is private so we can prevent application users having direct access to certain functionality.

Added Functionality - we can share the state of Package public variables and cursors for the life of a session.

Better Performance - Oracle Database loads the whole package into memory the first time you invoke a package subprogram, which makes subsequent invocations of any other subprogram quicker. Also packages prevent cascading dependencies and unnecessary recompilation.

Grants - we can grant permission on a single package instead of a whole bunch of objects.

However, we can only realise these benefits if the packaged components belong together: in other words, if our package is cohesive.  

The ever reliable Wikipedia defines cohesion like this: "the degree to which the elements of a module belong together"; in other words, it's a measure of the strength of the relationship between components. It's common to think of cohesion as a binary state - either a package is cohesive or it isn't - but actually it's a spectrum. (Perhaps computer science should use "cohesiveness", which is more expressive, but cohesion it is.)
Cohesion
Cohesion owes its origin as a Comp Sci term to Stevens, Myers, and Constantine. Back in the Seventies they used the terms "module" and "processing elements", but we're discussing PL/SQL so let's use Package and Procedure instead. They defined seven levels of cohesion, with each level being better - more usefully cohesive - than its predecessor.
Coincidental
The package comprises an arbitrary selection of procedures and functions which are not related in any way. This obviously seems like a daft thing to do, but most packages with "Utility" in their name fall into this category.
Logical
The package contains procedures which all belong to the same logical class of functions. For instance, we might have a package to collect all the procedures which act as endpoints for REST Data Services.
Temporal
The package consists of procedures which are executed at the same system event. So we might have a package of procedures executed when a user logs on - authentication, auditing, session initialisation - and a similar package for tidying up when the user logs off. Other than the triggering event the packaged functions are unrelated to each other.
Procedural
The package consists of procedures which are executed as part of the same business event. For instance, in an auction application there are a set of actions to follow whenever a bid is made: compare to asking price, evaluate against existing maximum bid, update the lot's status, update the bidder's history, send an email to the bidder, send an email to the user who's been outbid, etc.
Communicational
The package contains procedures which share common inputs or outputs. For example a payroll package may have procedures to calculate base salary, overtime, sick pay, commission, and bonuses, and produce the overall remuneration for an employee.
Sequential
The package comprises procedures which are executed as a chain, so that the output of one procedure becomes the input for another procedure. A classic example of this is an ETL package with procedures for loading data into a staging area, validating and transforming the data, and then loading records into the target table(s). (A sketch of such a package specification follows this list.)
Functional
The package comprises procedures which are focused on a single task. Not only are all the procedures strongly related to each other but they are fitted to user roles too. So procedures for power users are in a separate package from procedures for normal users. The Oracle built-in packages for Advanced Queuing are a good model of Functional cohesion.
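To make the Sequential grade concrete, here is a minimal sketch of a package specification for the ETL example above (the names are purely illustrative, not from any real codebase):

create or replace package staging_etl as
    -- Step 1: load the raw feed into the staging table.
    procedure load_staging (p_file_name in varchar2);

    -- Step 2: validate and transform the staged rows produced by step 1.
    procedure validate_and_transform;

    -- Step 3: load the cleansed rows from staging into the target table(s).
    procedure load_target;

    -- Runs the three steps as a chain, each step consuming the output of the previous one.
    procedure run_load (p_file_name in varchar2);
end staging_etl;
/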
How cohesive is cohesive enough?
The grades of cohesion, with Coincidental as the worst and Functional as the best, are guidelines. Not every package needs to have Functional cohesion. In a software architecture we will have modules at different levels. The higher modules will tend to be composed of calls to lower level modules. The low level modules are the concrete implementations and they should aspire to Sequential or Functional cohesion.

The higher level modules can be organised around other levels of cohesion. For instance we might want to build packages around user roles - Sales, Production, HR, IT - because Procedural cohesion makes it easier for the UI teams to develop screens, especially if they need to skin them for various different technologies (desktop, web, mobile). Likewise we wouldn't want to have Temporally cohesive packages with concrete code for managing user logon or logoff. But there is value in organising a package which bundles up all the low level calls into a single abstract call for use in schema level AFTER LOGON triggers.

Cohesion is not an easily evaluated condition. We need cohesion with a purpose, a reason to stick those procedures together. It's not enough to say "this package is cohesive". We must take into consideration how cohesive the package needs to be: how will it be used? What are its relationships with the other packages?

Applying design principles such as Single Responsibility, Common Reuse, Common Closure and Interface Segregation can help us to build cohesive packages. Getting the balance right requires an understanding of the purpose of the package and its place within the overall software architecture.  

Part of the Designing PL/SQL Programs series

Pop Quiz

Michael Feldstein - Wed, 2016-04-20 00:38

By Michael Feldstein

Which CEO has recently said or done all of the following:

  • Suggested to an audience of VCs and ed tech entrepreneurs at the GSV conference that the importance of big data in education has been overstated
  • Told that same audience that the biggest gains from adaptive learning come when it is wrapped in good pedagogy delivered by good teachers
  • Asked former CIOs from Harvard and MIT, both of whom are senior company employees, to develop collaborations with the academic learning science community
  • Accurately described Benjamin Bloom’s two-sigma research, with special attention to the implications for the bottom half of the bell curve
  • When asked a question by an audience member about an IMS technical interoperability standard in development, correctly described both the goals of the standard and its value to educators in plain English

Answer: David Levin of McGraw Hill.

Yes yes, those are just words. But I have gotten a good look at some of what their ed tech product and data science groups have been up to lately, and I have spoken to Levin at length on a few occasions (and grilled him at length on two of them).

My advice: Pay attention to this company. They are not screwing around.

The post Pop Quiz appeared first on e-Literate.

Jonathan Lewis

Bobby Durrett's DBA Blog - Tue, 2016-04-19 17:09

I am finally getting around to finishing my four-part blog series on people who have had the most influence on my Oracle performance tuning work. The previous three people were Craig Shallahamer, Don Burleson, and Cary Millsap. The last person is Jonathan Lewis. These four people, listed and blogged about in chronological order, had the most influence on my understanding of how to do Oracle database performance tuning. There are many other great people out there and I am sure that other DBAs would produce their own, different, list of people who influenced them. But this list reflects my journey through my Oracle database career and the issues that I ran into and the experiences that I had. I ran into Jonathan Lewis' work only after years of struggling with query tuning and getting advice from others. I ran into his material right around the time that I was beginning to learn about how the Oracle optimizer worked and some of its limits. Jonathan was a critical next step in my understanding of how Oracle's optimizer worked and why it sometimes failed to pick the most efficient way to run a query.

Jonathan has produced many helpful tuning resources including his blog, his participation in online forums, and his talks at user group conferences, but the first and most profound way he taught me about Oracle performance tuning was through his query tuning book Cost-Based Oracle Fundamentals. It's $30 on Amazon and that is an incredibly small amount of money to pay compared to the value of the material inside the book. I had spent many hours over several years trying to understand why the Oracle optimizer sometimes chooses the wrong way to run a query. In many cases the fast way to run something was clear to me and the optimizer's choices left me stumped. The book helped me better understand how the Oracle optimizer chooses what it thinks is the best execution plan. Jonathan's book describes the different parts of a plan – join types, access methods, etc. – and how the optimizer assigns a cost to the different pieces of a plan. The optimizer chooses the plan with the least cost, but if some mistake causes the optimizer to calculate an unrealistic cost then it might choose a poor plan. Understanding why the optimizer would choose a slow plan helped me understand how to resolve performance issues or prevent them from happening, a very valuable skill.

There is a lot more I could say about what I got from Jonathan Lewis' book, including just observing how he operated. Jonathan filled his book with examples which show the concepts that he was teaching. I think that I have emulated the kind of test-script building that you see throughout his book and on his blog and community forums. I think I have emulated not only Jonathan's approach but the approaches of all four of the people who I have spotlighted in this series. Each has provided me with profoundly helpful technical information that has helped me in my career. But they have also provided me with a pattern of what an Oracle performance tuning practitioner looks like. What kind of things do they do? To this point in my career I have found the Oracle performance tuning part of my job to be the most challenging and interesting and probably the most valuable to my employers. Jonathan Lewis and the three others in this four-part series have been instrumental in propelling me along this path and I am very appreciative.

Bobby

Categories: DBA Blogs

April 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-04-19 14:02

Oracle today released the April 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (MOS Note 2126904.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuapr2016v3-2985753.html

My Oracle Support Note 2126904.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2126904.1 (MOS account required).

USING SELECT 'X' in query/subqueries.

Learn DB Concepts with me... - Tue, 2016-04-19 13:26


--------------------------------------------------------
--  DDL for Table TAB1
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB1"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB1 (ID,NAME) values (1,'AAA');
Insert into ATEST.TAB1 (ID,NAME) values (2,'BBB');
Insert into ATEST.TAB1 (ID,NAME) values (3,'EEE');
Insert into ATEST.TAB1 (ID,NAME) values (4,'FFF');


--------------------------------------------------------
--  DDL for Table TAB2
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB2"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB2 (ID,NAME) values (1,'CCC');
Insert into ATEST.TAB2 (ID,NAME) values (2,'DDD');


Get records that exist in TAB1 and not in TAB2 using select 'X' :


select * from TAB1 f where not exists (select 'X' from TAB2 where id=f.id);
ID    NAME
--    ---- 
4    FFF
3    EEE

In the above query we get all the records from TAB1 whose IDs don't match any ID in TAB2.
Hence we do not get the records with IDs 1 & 2, as they also exist in TAB2.
This is just like using "select * from TAB1 f where not exists (select ID from TAB2 where id=f.id);"


Get records that exist in TAB1 and in TAB2 using select 'X' :


select * from TAB1 f where exists (select 'X' from TAB2 where id=f.id);

ID    NAME
--    ---- 
1    AAA
2    BBB

In the above query we get all the records from TAB1 that exist with the same ID in TAB2.
Hence we get only the records with IDs 1 & 2, as they exist in both tables.
This is just like using "select * from TAB1 f where exists (select ID from TAB2 where id=f.id);"
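Since the select list of an EXISTS / NOT EXISTS subquery is never returned to the caller, any placeholder expression works; for illustration, these variants are equivalent to the queries above:

select * from TAB1 f where exists (select 1    from TAB2 where id=f.id);
select * from TAB1 f where exists (select null from TAB2 where id=f.id);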
Categories: DBA Blogs

XA Transactions with SOASuite JMS Adapter

Darwin IT - Tue, 2016-04-19 12:39
JMS is perfect for setting transaction boundaries, and in OSB it is pretty clear how JMS transactions are handled. However, in SOASuite using the JMS adapter the SOA Infrastructure handles your JMS transactions by default, and messages are removed from the queue right away because the gets are auto-acknowledged. If something fails, you would expect the messages to be rolled back to the JMS queue and eventually moved to the error queue. But, again by default, not with the SOASuite JMS Adapter. In that case the BPEL process, for instance, fails and gets into a recovery state, to be handled in the 'Error Hospital' in Enterprise Manager. But I want JMS to handle it! (Says the little boy...)

So how do we accomplish that? Today I got the chance to figure that out.

Start with a JMS setup with a JMS Server, a Module and a Queue, with an Error Queue configured as the error destination of the first queue. On the first queue set a redelivery limit of 3 and a redelivery delay of, for instance, 60000 ms (or something like that). I'm not going into that here.
Create also a Connection Factory in the JMS Module with a proper jndi, something like 'jms/myApplicationCF'.

In the JMS adapter on SOASuite there are several OutboundConnectionFactories already pre-configured. It is quite convenient to use the one with JNDI 'eis/wls/Queue'. But if you look into that, you'll see that it uses the default WebLogic JMS connection factory 'weblogic.jms.XAConnectionFactory'. Not much wrong with that, but you can't configure it for your own particular situation. Moreover, it is configured with 'AcknowledgeMode' = 'AUTO_ACKNOWLEDGE'. As you can read in the docs there are three values for the AcknowledgeMode:
  • DUPS_OK_ACKNOWLEDGE, for consumers that are not concerned about duplicate messages
  • AUTO_ACKNOWLEDGE, in which the session automatically acknowledges the receipt of a message
  • CLIENT_ACKNOWLEDGE, in which the client acknowledges the message by calling the message's acknowledge method
So create a new outbound connection factory, with a JNDI like 'eis/jms/MyApp'. 
Now, apparently we don't want 'AUTO_ACKNOWLEDGE', because that would cause the message to be acknowledged on the get. So you could roll back until 'Saint Juttemis' (as we say in our family), but the message won't go back on the queue. Dups aren't OK with me, so I'll choose 'CLIENT_ACKNOWLEDGE' here. Then there's another option: 'IsTransacted'; I want that one set to 'true'. Then in ConnectionFactoryLocation you put the JNDI of your JMS connection factory, in my example 'jms/myApplicationCF'.

So you'll get something like:
 
On the tab Transaction, validate that the transaction support is set to a XA Transaction:

Having done that, you can update/redeploy your JMS Adapter with the changed plan. I figure that how to do that is straightforward, especially when you've already done it with DB Adapters.

I created two SOA Projects (actually I adapted those created by a co-worker). The first one is TestPutJMS:

The project is straightforward, with a WSDL referring to an XSD with two fields:







The bpel is then as follows:

It assigns the request to the input variable of the invoke of the JMSPut. The JMS_Put is a JMS adapter configuration, referring to the JNDI 'eis/jms/myApp' defined in the JMS Adapter.

After that there's an if on the action field, where in the case of a certain value a fault is thrown, to validate if the Put is rolled back.

In my case it's more interesting to look at the Get part. That project is as follows:

In this case there's a mediator wired to the get adapter config, also referring to the 'eis/jms/myApp' JNDI. The mediator routes to the BPEL process. The transaction handling of a mediator is simple and straightforward:
  • If there's a transaction it will subscribe to that,
  • if there isn't, a new transaction is created.
The JMS Adapter creates a new XA transaction. On the JMS adapter on WLS we configured that no auto-acknowledge should occur, and that we want a transaction. Thus, this is the transaction that is reused by the Mediator. But how about the BPEL? The BPEL is asynchronous, request-only, since it has no way to reply with a response, other than putting it on a response queue.
By default you would have the property 'bpel.config.oneWayDeliveryPolicy' set to 'async.persist', but that would mean that a new thread is started. Setting it to 'sync' causes the thread that is started by the adapter to be reused. I also want to subscribe to the already running transaction of the JMS Adapter as it is passed through by the mediator; setting the property 'bpel.config.transaction' to 'required' will take care of that. Summarized, I set the following properties on the BPEL:
  • bpel.config.transaction: required => subscribe to already opened transaction
  • bpel.config.oneWayDeliveryPolicy: sync => reuse existing running thread


The process looks like:

 
Here I have an if with a conditional throw of an exception as well. Based on the value of the action element I can have it throw a custom exception, which will cause the BPEL to fail and the transaction to be rolled back.
With a redelivery limit of 3, I'll get three retries, so in total 4 tries of the BPEL process. After that, the message is moved to the JMS Error Queue.

A nice article on JMS transactions from the A-Team can be found here. However, the setup above leaves the redelivery handling to JMS. So, in 12cR2 at least, I find that the properties of the JMS Queue apparently take precedence over the settings I did on the TestJMSGet service in the composite:

I hope this article clears things up regarding the JMS Adapter configuration for transactions.

Timeout Values for Enterprise Manager Components

Arun Bavera - Tue, 2016-04-19 12:35
Requirements: To have the dashboard display continuously without logging in again.

There are browser plugins which refresh the current page automatically at a specified frequency. Try this approach first.
I searched “browser plugin to refresh page”



The EM components which affects timeout are:
http://docs.oracle.com/cd/E63000_01/EMADM/appdx_timeout_settings.htm#EMADM15477
Component: Apache timeout
Description: Number of seconds that an Apache session is kept active. If the Apache timeout is set beyond the operating system TCP timeout, it will cause unpredictable results. The operating system timeout is set to 2 hours by default.
Timeout Value: 5 mins by default
Command: Run the following command:
$ omsvfy show tcp
Parameters Incoming Value
--------------------
tcp_keepalive_time 7200
tcp_keepalive_intvl 75
tcp_fin_timeout 60
--------------------

Component: OMS timeout or Login timeout
Description: This is the oracle.sysman.eml.maxInactiveTime parameter that can be set per OMS. To prevent unauthorized access to Cloud Control, Enterprise Manager will automatically log you out of Cloud Control when there is no activity for a predefined period of time, for example if you leave your browser open and leave your office. This default behavior prevents unauthorized users from using your Enterprise Manager administrator account. If you make changes to the login timeout value, be sure to consider the security implications of leaving your session open for other than the default timeout period. Note: The default timeout value does not apply when you restart the Web server or the OMS. In both of those cases, you will be asked to log in to the Cloud Control Console, regardless of the default timeout value.
Timeout Value: 45 min by default
Command: Run the following command:
emctl set property -name oracle.sysman.eml.maxInactiveTime -value time_in_minutes -module emoms
Then, restart the OMS for the value to take effect.

Component: ADF timeout
Description: This is controlled by the variable oracle.adf.view.rich.poll.timeout. The variable applies to pages that have auto poll; ADF pages may be enabled with automatic poll. After a page does not receive any keyboard or mouse event for the duration of the oracle.adf.view.rich.poll.timeout variable, the poll stops. From that point on, the page participates in the standard server-side session timeout.
Timeout Value: 10 min
Command: None

At my previous project we used these values as per client requirements:
--We are setting to 2700 ---> 45minX60= 2700 Seconds
emctl set property -name oracle.sysman.eml.maxInactiveTime -value 2700
cd /u01/app/mw_12cR3/oms/sysman/archives/emgc/deployments/GCDomain/emgc.ear/em.war/WEB-INF  
<!-- Workaround for Bug 18141467- CONFIGURE PERFORMANCE HOME PAGE TIMEOUT
You need to specify the value in milliseconds. Setting to 40 Minutes on OCT-23-2014:
-->
<context-param>
<param-name>oracle.adf.view.rich.poll.TIMEOUT</param-name>
<param-value>2400000</param-value>
</context-param>
………………..
…………………
</web-app>
Also make sure the settings at the Load Balancer are proper if one is being used.
Note:
  • In case of multiple OMSs, this must be done on all OMSs.
  • If the OMSs are behind a SLB, and if the SLB name is used in the URL provided to login to the EM Console, check the Whitepaper:
    Enterprise Manager 12c Cloud Control - Configuring OMS High Availability with F5 BIG - IP Local Traffic Manager
    Topic Create the Persistence Profiles - Change the values related to the Console
    The value must be provided there in seconds (So 3600 for a timeout of 60 minutes).
Refer:
12c OEM: Console Session Expires with "Because of inactivity, your session has timed out and is no longer active. Click OK to reload the page" (Doc ID 1544516.1)
How to change EM 12c Performance Home page default timeout (Doc ID 1644004.1)


Categories: Development

nVision Performance Tuning: Coalescing Tree Leaves

David Kurtz - Tue, 2016-04-19 10:09
I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
However, a consequence of the Tree Performance Access Method "suppress join; use literal values" is that the resulting SQL in nVision will have a criterion for every leaf on each of the selected nodes on that tree.
nVision Tree Performance Options

There will be an equality condition for each single value leaf. I normally set Selector Options to Ranges of values (BETWEEN), so I get a between condition for each ranged leaf. Behind the scenes, Oracle rewrites between as a pair of inequalities, so there is no difference, but the SQL generated by nVision is slightly shorter.
The following is typical of nVision SQL with these performance options set.
SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) 
FROM
PS_LEDGER A WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND
A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND
A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=

) AND (
A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR
A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR
A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR
A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR
A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR
A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR
A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR
A.DEPTID BETWEEN 'A135' AND 'A138'

A consequence of all the criteria is that Oracle can take a long time to parse the SQL statement. It may only be a few seconds in the case of a single SQL statement, but an nVision report book can consist of thousands of SQL statements.
To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace. SQL Parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and can vary widely.
Event Name                          | % Time | Seconds     | Calls     | Time per Call (Avg / Min / Max)
FETCH calls [CPU]                   |  48.2% | 3,699.8440s |    16,068 | 0.2303s / 0.0000s / 178.0640s
db file sequential read             |  22.5% | 1,728.2101s | 4,413,352 | 0.0004s / 0.0002s / 0.1294s
SQL*Net message from client [idle]  |   8.0% |   617.7042s |       926 | 0.6671s / 0.0002s / 61.3147s
PARSE calls [CPU]                   |   7.9% |   605.9340s |     5,383 | 0.1126s / 0.0000s / 11.0500s
…
Total                               | 100.0% | 7,681.4428s |
Reducing the number of criteria in the SQL will reduce the parse time, but that is determined by the way the tree leaves are defined.
The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf where possible. It performs two checks before merging adjacent leaves on the same tree node:
  • There is not an intermediate value on the detail field defined in the tree structure record. So if the detail field was DEPT_TBL.DEPTID, the script checks that there are no values of DEPTID on PS_DEPT_TBL that are not currently selected by existing leaves that would be included in the merged leaf.
  • There is not another leaf on another node on the tree that would intersect with the merged leaf.
Instructions
leafcoal.sql was written as an anonymous PL/SQL block, so there is nothing to install. It should be run in SQL*Plus connected as the PeopleSoft owner ID (usually SYSADM). It is expected that there are some adjustments to the script that the user may need to make. As delivered, it runs in a test mode that does not update the database but reports on what it would do. Change k_testmode to FALSE to make it update the database.
k_testmode     CONSTANT BOOLEAN := TRUE; /*set this false to perform update*/
The level of output emitted depends on the variable l_debug_level:
l_debug_level  INTEGER := 4;
  • 1. end of processing message 
  • 2. start of processing for tree 
  • 3. number of leaves in tree and number of leaves coalesced 
  • 4. details of leaves being compressed 
  • 5. start and end of each procedure 
  • 6. parameters passed to functions 
  • 7. number of rows updated/deleted during coalesce 
  • 8. dynamic SQL statement 
The script reports on the work it has done. It does not commit its updates. That is left for the user to either commit or rollback.

.(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
.(4)634 nodes, 2636 leaves
.(4)1358 leaves coalesced (52%)

(1)Commit changes or rollback
The query at the end of the script determines which trees will be processed and may need to be changed as required. For example, you might choose to coalesce the leaves on

  • specific trees,
  • most recent effective dated trees,
  • trees with literal values performance option

  FOR i IN (
SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
FROM pstreedefn d
, pstreestrct s
, psrecfielddb f
WHERE d.tree_strct_id = s.tree_strct_id
AND s.node_fieldname = 'TREE_NODE'
-- AND d.TREE_ACC_METHOD = 'L' --literal values
AND s.dtl_recname = f.recname
AND s.dtl_fieldname = f.fieldname
-- AND tree_name = 'XXX_ACCOUNT'
) LOOP
Conclusion
The number of leaves coalesced depends entirely on how the trees have been built. At one customer it produced a reduction of over 50%; at another it was only 10%. The reduction in the number of leaves does produce a corresponding reduction in time spent on SQL parse during nVision reports.
©David Kurtz

Data is Everything, and Everything is Data

Pythian Group - Tue, 2016-04-19 09:33
Exploring the phenomenon of “datafication”

In the year 2000, only a quarter of the world’s stored information was digital; the rest was on paper, film, and other analog media. Today, less than two percent of all stored information is nondigital. (1)

This is largely the result of “datafication”, a process that turns all aspects of life—preferences, opinions, telephone calls and sensor-driven information—into data.

Datafication is the driving force behind Big Data. It’s also causing a threefold shift in how we look for meaning in the information available to us: away from traditional sampling approaches, toward greater tolerance of messy, unstructured data, and into the search for correlations rather than absolute, singular causes to explain trends and events. These changes are already having major impacts in every area of our lives—from scientific research to business and finance, to healthcare, education and transportation.

Watch this video of Pythian President and CEO Paul Vallée, as he talks about datafication and generating revenue.

From sampling to knowing

Representative sampling is based on the idea that, within a certain margin of error, we can make inferences about a total population from a small, randomized subset. This works well for simple questions like, "Which of our customers generate the most revenue?" but lacks the detail to effectively answer queries like, "Which customers are most profitable?" or, "Which customers are considering leaving us for another vendor?"

Inexpensive computer memory, powerful processors and sophisticated algorithms now allow us to analyze vast amounts of data rather than small samples. Using Big Data in this way has the considerable advantage of predictive capability—it can identify patterns and trends that aren’t detectable in a small sample, giving an unprecedented view of future behavior.

From clean to messy

What’s new about Big Data isn’t just that there’s lots of it. Because it comes from many different sources in many different formats, it’s not tidy like traditional datasets. Tolerating some inaccuracy may require data analysts to shift their outlooks a little, but when you’re trying to answer big, complex questions, the gain in data scope is a good trade-off against using smaller amounts of very exact data. Here’s an example.

In 2009, Google showed it’s possible to predict locations of seasonal outbreaks of the flu using nothing more than archived records of Google searches. The sheer size of the data set (think a billion searches a day in the U.S. alone) more than compensated for its messiness. After running nearly half a billion calculations against the data, Google identified 45 terms—words such as “headache” and “runny nose”—that had a strong correlation with the CDC’s data on flu outbreaks.

From cause to correlation

The Google example points to a third change brought about by datafication and Big Data: abandoning the search for certainty. Instead of looking for causes, innovative data users are looking for correlations. For example, automotive and aviation engineers are collecting and analyzing massive quantities of information on engines that have failed, looking for patterns that will help them predict when other engines might be at risk of failing in the future. They’re not seeking a single cause for a single event; they’re mapping correlations between huge numbers of events to recognize patterns that can be put to practical, preventative use.

The correlation approach has been used to spot infections in premature babies before overt symptoms appear and to predict everything from manhole cover failures to consumer purchasing habits.

Big Data insights require Big Thinking

Harnessing the powerful, often unpredictable, insights available from Big Data requires three things: as complete a dataset as possible, people with the skills required to collect, manage and analyze that data, and people who know how to ask unexpected, even visionary questions. It’s not just a matter of the right technologies—it’s about a fundamental shift in how we relate to data and what can be done with it.
Sources
1. https://www.foreignaffairs.com/articles/2013-04-03/rise-big-data

Categories: DBA Blogs

Partition Storage -- 1 : Default Partition Sizes in 12c

Hemant K Chitale - Tue, 2016-04-19 09:17
11g 11.2.0.2 introduced a change whereby the default Initial Extent of a Table Partition was 8MB.  However, this did not apply to Index Partitions which could still start with 64KB extents in an AutoAllocate Tablespace.

12cR1 now introduces a parameter to enable large Initial Extent for Index Partitions as well.
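As an aside (my own check, not part of the demo below): the index-partition behaviour is controlled by the parameter shown later in this post, and I'm assuming the 8MB table-partition default from 11.2.0.2 is controlled by the analogous "_partition_large_extents" parameter. Both are undocumented underscore parameters and the query needs a SYSDBA connection, so treat this purely as informational:

select  i.ksppinm  parameter_name,
        v.ksppstvl current_value
from    x$ksppi  i,
        x$ksppcv v
where   i.indx = v.indx
and     i.ksppinm in ('_partition_large_extents', '_index_partition_large_extents')
order by i.ksppinm;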

SQL> connect / as sysdba                          
Connected.
SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show parameter deferred

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table my_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range(id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> create index my_part_tbl_ndx on my_part_tbl(id_column) local;

Index created.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

no rows selected

SQL> 
SQL> insert into my_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into my_part_tbl values (151,'One Hundred Fifty One');

1 row created.

SQL> insert into my_part_tbl values (251, 'Two Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64

6 rows selected.

SQL>


I can enable large Index Partition Extent with a parameter (which can be set with ALTER SESSION)


SQL> alter session set "_index_partition_large_extents"=TRUE;

Session altered.

SQL> insert into my_part_tbl values (351,'Three Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 8192

8 rows selected.

SQL>


However, I can rebuild the Index Partition Extent as well :

SQL> alter index my_part_tbl_ndx rebuild partition p_400 storage (initial 64K);

Index altered.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


In the next post, we'll see more Extents for the Partitions.
Categories: DBA Blogs

SQL Server Dates, Dates and More Dates

Pythian Group - Tue, 2016-04-19 08:44

 

Working with SQL Server date functions can be frustrating. The purpose of this blog is to share some date statements I use regularly, especially when building Business Intelligence and Data Warehouse solutions.

I hope you find them useful. If you have any questions, or any more useful statements in relation to dates in SQL Server, please feel free to leave them in the comments below.

----Today
SELECT GETDATE() 'Today'
----Yesterday
SELECT DATEADD(d , -1 , GETDATE()) 'Yesterday'
----First Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 0) 'First Day of Current Week'
----Last Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 6) 'Last Day of Current Week'
----First Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 0) 'First Day of Last Week'
----Last Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 6) 'Last Day of Last Week'
----First Day of Current Month
SELECT DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0) 'First Day of Current Month'
----Last Day of Current Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Month'
----First Day of Last Month
SELECT DATEADD(mm , -1 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0)) 'First Day of Last Month'
----Last Day of Last Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0))) 'Last Day of Last Month'
----First Day of Current Year
SELECT DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0) 'First Day of Current Year'
----Last Day of Current Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Year'
----First Day of Last Year
SELECT DATEADD(yy , -1 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0)) 'First Day of Last Year'
----Last Day of Last Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0))) 'Last Day of Last Year'
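As a usage sketch, these expressions drop straight into a WHERE clause. The example below counts rows created last month using a half-open date range; the table and column names (dbo.Orders, OrderDate), the database name, and the sqlcmd connection details are hypothetical, so adjust them to your environment.

sqlcmd -S localhost -d SalesDB -E -Q "
SELECT COUNT(*) AS OrdersLastMonth
FROM dbo.Orders
WHERE OrderDate >= DATEADD(mm , -1 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0))  -- First Day of Last Month
  AND OrderDate <  DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0);                    -- First Day of Current Month
"

A half-open range (greater than or equal to the first day of last month, strictly less than the first day of the current month) avoids the last-millisecond arithmetic and behaves the same for DATETIME, DATETIME2, and DATE columns.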
Categories: DBA Blogs

VirtualBox 5.0.18

Tim Hall - Tue, 2016-04-19 08:22

VirtualBox 5.0.18 has been released.

The downloads and changelog are in the usual places.

So far I’ve only installed it on Windows 7, but I’ll no doubt be doing an install on OS X El Crapitan and Oracle Linux tonight.

Internals of Querying the Concurrent Requests’ Queue – Revisited for R12.2

Pythian Group - Tue, 2016-04-19 08:01

Once upon a time I wrote about the Internal Workflow of an E-Business Suite Concurrent Manager Process. Many things have changed since that blog post, the most obvious change being the release of Oracle e-Business Suite R12.2. I decided to check whether the way the concurrent manager queues are processed by the concurrent manager processes is still the same. My main goal was to see whether the manager processes still make no attempt to coordinate the distribution of requests among themselves.

This is how I did the testing:

  • I used the VM templates provided by Oracle to build my R12.2.4 test environment. By the way, I didn't expect that the process of getting the environment up would be so simple! Downloading the media files from edelivery.oracle.com was the most time-consuming step; once that was done, it took me just 1 hour to uncompress everything, import the Virtual Assembly file, and bring up the R12.2.4 environment on my laptop.
  • 3 Standard managers are defined by default
  • Sleep seconds were left as is = 30 seconds
  • Cache size was increased from 1 to 5.
  • Identified the 3 DB processes that belong to the Standard managers:
    select sid, serial# from v$session where module='e:FND:cp:STANDARD'
  • I enabled tracing with binds and waits for each of them like this:
    exec dbms_monitor.session_trace_enable(sid,serial#,true,true);
  • Once that was done, I submitted one concurrent program – "Active Users" – and waited for it to complete.
  • I disabled the tracing:
    exec dbms_monitor.session_trace_disable(sid,serial#);
  • Collected the trace files (a consolidated sketch of the tracing steps is shown right after this list).
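For reference, here is a minimal sketch of how the tracing steps above could be scripted in one pass. It assumes SQL*Plus access with privileges to run DBMS_MONITOR (the connect string is a placeholder); the module filter and the trace calls come straight from the list above, while the wrapper itself is my own illustration.

sqlplus -s / as sysdba <<'EOF'
-- Enable tracing with binds and waits for every Standard manager DB process
BEGIN
  FOR s IN (SELECT sid, serial# AS serialno
              FROM v$session
             WHERE module = 'e:FND:cp:STANDARD') LOOP
    dbms_monitor.session_trace_enable(s.sid, s.serialno, TRUE, TRUE);
  END LOOP;
END;
/
EOF
# ...submit the "Active Users" request, wait for it to complete, then run the
# same loop with dbms_monitor.session_trace_disable(s.sid, s.serialno)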

I found 2 of the trace files to be very interesting. To keep things simple, manager process "A" will be the one that executed the concurrent request, and process "B" will be the one that didn't.

Before the “Active Users” Request Was Submitted

No other requests were running at the time I did the testing, so I could clearly observe how both managers A and B queried the FND_CONCURRENT_REQUESTS table. Both of the trace files displayed the same method of picking requests up from the queue. Note that I'm showing only the lines relevant to the main query, and I have formatted the query text to make it more readable:

PARSING IN CURSOR #139643743645920 len=1149 dep=0 uid=100 oct=3 lid=100 tim=1460211399835915 hv=3722997734 ad='d275f750' sqlid='cd23u4zfyhvz6'
SELECT R.Rowid
FROM Fnd_Concurrent_Requests R
WHERE R.Hold_Flag                             = 'N'
AND R.Status_Code                             = 'I'
AND R.Requested_Start_Date                   <= Sysdate
AND (R.Node_Name1                            IS NULL
OR (R.Node_Name1                             IS NOT NULL
AND FND_DCP.target_node_mgr_chk(R.request_id) = 1))
AND (R.Edition_Name                          IS NULL
OR R.Edition_Name                            <= sys_context('userenv', 'current_edition_name'))
AND EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Programs P
  WHERE P.Enabled_Flag         = 'Y'
  AND R.Program_Application_Id = P.Application_Id
  AND R.Concurrent_Program_Id  = P.Concurrent_Program_Id
  AND EXISTS
    (SELECT NULL
    FROM Fnd_Oracle_Userid O
    WHERE R.Oracle_Id = O.Oracle_Id
    AND EXISTS
      (SELECT NULL
      FROM Fnd_Conflicts_Domain C
      WHERE P.Run_Alone_Flag = C.RunAlone_Flag
      AND R.CD_Id            = C.CD_Id
      )
    )
  AND (P.Execution_Method_Code                          != 'S'
  OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757)))
  )
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628)))
ORDER BY NVL(R.priority, 999999999),
  R.Priority_Request_ID,
  R.Request_ID
END OF STMT
EXEC #139643743645920:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399835910
FETCH #139643743645920:c=0,e=546,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399836507
WAIT #139643743645920: nam='SQL*Net message to client' ela= 3 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211399836572

*** 2016-04-09 10:17:09.837
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000367 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429836965
...
EXEC #139643743645920:c=0,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429838767
FETCH #139643743645920:c=0,e=689,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429839587
WAIT #139643743645920: nam='SQL*Net message to client' ela= 4 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429839652

*** 2016-04-09 10:17:39.840
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000325 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211459840003
...

It’s important to observe that:

  • All manager’s processes still compete for the same requests. If the query is executed at the same time, the same list of concurrent requests will be retrieved by all processes.
  • The constants literals used in lines 30-32 mean that the query for checking the queue is still built when the concurrent manager process starts up. These constants are mainly used to implement the specializations rules in the query.
  • Only rowid for the pending requests’ rows in FND_CONCURRENT_REQUESTS are fetched.
  • The sleep time is clearly visible on lines 41,42 and 48,49
After the “Active Users” Request Was Submitted – Starting the Concurrent Request

Manager process A was the first to pick up the submitted request, which can be observed from the "r=1" (1 row fetched) in the FETCH call for the query we just reviewed:

FETCH #139643743645920:c=0,e=437,p=0,cr=113,cu=0,mis=0,r=1,dep=0,og=1,plh=3984653669,tim=1460211519844640

Immediately after this, manager process A locked the row in the FND_CONCURRENT_REQUESTS table; this is how the request got assigned to this process. Notice the similar WHERE predicates used in this query; they are required to make sure that the request has not been picked up by another manager process in the meantime. The main thing here, however, is that the request row is accessed by the "rowid" retrieved earlier (row 45, where the value of the bind variable ":reqname" is "AAAjnSAA/AAAyn1AAH" in this case). Locking of the row is done by the "FOR UPDATE OF R.status_code NoWait" clause on line 49:

PARSING IN CURSOR #139643743640368 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519864113 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
SELECT R.Conc_Login_Id,
  R.Request_Id,
  ... excluded other 156 columns for brevity...
FROM fnd_concurrent_requests R,
  fnd_concurrent_programs P,
  fnd_application A,
  fnd_user U,
  fnd_oracle_userid O,
  fnd_conflicts_domain C,
  fnd_concurrent_queues Q,
  fnd_application A2,
  fnd_executables E,
  fnd_conc_request_arguments X
WHERE R.Status_code             = 'I'
AND (R.Edition_Name            IS NULL
OR R.Edition_Name              <= sys_context('userenv', 'current_edition_name'))
AND R.Request_ID                = X.Request_ID(+)
AND R.Program_Application_Id    = P.Application_Id(+)
AND R.Concurrent_Program_Id     = P.Concurrent_Program_Id(+)
AND R.Program_Application_Id    = A.Application_Id(+)
AND P.Executable_Application_Id = E.Application_Id(+)
AND P.Executable_Id             = E.Executable_Id(+)
AND P.Executable_Application_Id = A2.Application_Id(+)
AND R.Requested_By              = U.User_Id(+)
AND R.Cd_Id                     = C.Cd_Id(+)
AND R.Oracle_Id                 = O.Oracle_Id(+)
AND Q.Application_Id            = :q_applid
AND Q.Concurrent_Queue_Id       = :queue_id
AND (P.Enabled_Flag            IS NULL
OR P.Enabled_Flag               = 'Y')
AND R.Hold_Flag                 = 'N'
AND R.Requested_Start_Date     <= Sysdate
AND ( R.Enforce_Seriality_Flag  = 'N'
OR ( C.RunAlone_Flag            = P.Run_Alone_Flag
AND (P.Run_Alone_Flag           = 'N'
OR NOT EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Requests Sr
  WHERE Sr.Status_Code         IN ('R', 'T')
  AND Sr.Enforce_Seriality_Flag = 'Y'
  AND Sr.CD_id                  = C.CD_Id
  ))))
AND Q.Running_Processes                                     <= Q.Max_Processes
AND R.Rowid                                                  = :reqname
AND ((P.Execution_Method_Code                               != 'S'
OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID)       IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757))))
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628))) 
FOR UPDATE OF R.status_code NoWait

The behavior of manager process B was a little more interesting. It too managed to fetch the same rowid from the FND_CONCURRENT_REQUESTS table, belonging to the submitted "Active Users" request. However, when it tried to lock the row in FND_CONCURRENT_REQUESTS (by using exactly the same query), this happened:

PARSING IN CURSOR #139690311998256 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519900924 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
...
BINDS #139690311998256:
...
Bind#2
  oacdty=01 mxl=32(18) mxlc=00 mal=00 scl=00 pre=00
  oacflg=20 fl2=1000001 frm=01 csi=873 siz=0 off=64
  kxsbbbfp=7f0c2f713f20  bln=32  avl=18  flg=01
  value="AAAjnSAA/AAAyn1AAH"
EXEC #139690311998256:c=1000,e=1525,p=0,cr=25,cu=1,mis=0,r=0,dep=0,og=1,plh=4044729389,tim=1460211519902727
ERROR #139690311998256:err=54 tim=1460211519902750

The query failed with “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired”.
This is how access to pending concurrent requests is serialized to make sure only one of the manager processes can run each request. And, I think, relying on the well-tuned and highly efficient locking mechanism of Oracle Database is a very smart idea.

Conclusions
  • Coordination between manager processes still does not happen when distributing requests: the managers all query the queue the same way and then compete between themselves to be the first to lock the requests' entries in the table. The process that gets the lock gets to execute the concurrent request.
  • The cache size variable couldn't be observed in the trace files, but as far as I remember from my previous research the process would only fetch "cache size" rowids using the first query in this post. This could be tested by submitting a larger volume of requests simultaneously.
  • The "sleep seconds" setting kicks in only when the manager process didn't fetch any rowids from the queue. After all the cached requests are attempted/executed by the manager process, the queue is checked again immediately, without waiting for the "sleep seconds" (not explained in detail in this post, but it is revealed in the trace files).
  • The SQL used to query FND_CONCURRENT_REQUESTS and to lock the row is very similar to pre-R12.2 releases of e-Business Suite (another sign that the process hasn't changed, though one change I do see is the addition of WHERE clause predicates for checking the Editions).
Categories: DBA Blogs

State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition

Michael Feldstein - Tue, 2016-04-19 05:00

By Phil Hill

This is the eighth year I have shared the LMS market share graphic, commonly known as the squid graphic, for (mostly) US higher education. The original idea remains – to give a picture of the LMS market in one page, highlighting the story of the market over time. The key to the graphic is that the width of each band represents the percentage of institutions using a particular LMS as their primary system.

This year marks a significant change based on our upcoming LMS subscription service. We are working with LISTedTECH to provide market data and visualizations. This data source provides historical and current measures of institutional adoptions, allowing new insights into how the market has worked and into current trends. The current graphic gets all of its data from LISTedTECH. Previous versions of the graphic used an anchoring technique, combining data from different sources in different years, with interpolation where data was unavailable; now, every year's data is based on this single data source.

This graphic has been in the public domain for years, however, and we think it best to keep it that way. We hope that the new service will provide valuable insight for subscribers while also improving what we continue to share here on the e-Literate blog.

Since we have data over time now and not just snapshots, we have picked the end of each year for that data. For this reason, the data goes through the end of 2015. We have 2016 data but chose not to share partial-year results in an effort to avoid confusion.

[Image: LMS_MarketShare_20160316 – LMS market share graphic for US and Canadian higher education, through the end of 2015]

A few items to note:

  • As noted in previous years, the fastest-growing LMS is Canvas. There is no other solution close in terms of matching the Canvas growth.
  • Blackboard continues to lose market share, although the vast majority of that reduction over the past two years has been from customers leaving ANGEL. Blackboard Learn lost only a handful of clients in the past year.
  • Although it does not take effect until the end of 2017, Pearson has already announced LearningStudio's end-of-life.
  • With the new data set, the rapid rise and market strength of WebCT become much more apparent than in previous graphics.
  • There is a growing line for "Other", capturing the growth of those systems with fewer than 50 active implementations as primary systems; systems like Jenzabar, Edvance360, LoudCloud Systems, WebStudy, Schoology, and CampusCruiser.
  • While we continue to show Canvas in the Open Source area, we have noted a more precise description as an Open Core model.

For a better description of the upcoming LMS subscription service, read this post and / or sign up for more information here.

The post State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition appeared first on e-Literate.

COLLABORATE16: Bots & Virtual Reality

Oracle AppsLab - Tue, 2016-04-19 02:00

Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies focused on two trending topics in tech: bots and virtual reality!

Bot Focus Group

Ready for the focus group fun to begin. #oaux #CLV16 pic.twitter.com/sG0amqMGJx

— The AppsLab (@theappslab) April 12, 2016

Our timing for the bot study was perfect! The morning we were to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform: Microsoft, Telegram, and Slack have their own platforms too.

The goal of our focus group was to generate ideas on useful bots in the workplace. This can range from the concierge bot that Facebook has to workflow bots that Slack has. To generate as many ideas as we could, without groupthink, we had everyone silently write down their ideas using the “I WANT [PAT] TO…SO I CAN…” Tower of Want framework I stumbled upon at the GDC16 conference last March.

Not only do you distill the participants' motivations, intents, and needs, but you also acquire soft goals to guide the bot's development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once "quickly filled with superficial and offensive material."

The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere along the way, the goal of QUALITY highly engaged articles was lost to highly engaged articles at the expense of QUALITY. Intention is everything.

“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”

Soft goals are in place to steer a bot away from unintended actions.

After the ideas were generated and shared, we had them place their bot tasks on a pain/frequency chart: How painful is this task for you to do? and How frequently do you need to do this task?

Focus group preparation underway #oaux #CLV16 pic.twitter.com/74tccV5kLZ

— The AppsLab (@theappslab) April 12, 2016

Then it was time for the business origami! Business Origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job and (c) what the desired output is. We modified the Business Origami activity with the inputs and outputs that a Resource Flow activity demands.

Before our customers created their own flows based on their best bot task idea, we did a group warm-up. The flow below illustrates scheduling and booking meeting rooms. Everyone was involved as they talked about the myriad ways that could trigger the act of scheduling a meeting, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.

Business origami taking shape #oaux #CLV16 pic.twitter.com/PJARBrZGka

— The AppsLab (@theappslab) April 12, 2016

Virtual Reality Guerrilla Test

For 3 days, Ben and I ran a guerrilla study to get customers' and partners' thoughts on VR and where they might find it useful in their work/industry.

Getting ready for some #VR #oaux research #Collaborate16 w @iheartthannie and Ben pic.twitter.com/qfVjs9QadE

— The AppsLab (@theappslab) April 12, 2016

Our customers experienced virtual reality through the Samsung Gear VR. It relies on our Samsung Note 5 to deliver the immersive experience.

$99 Samsung Gear VR: the consumer version of the Oculus-powered head-mounted display (HMD).

Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately 5 minutes. We had 2 experiences to show them: an underwater adventure with the blue whale in the Arctic Ocean (theBlu) and the heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).

Everyone really wanted to reach out and touch the sea animals. Two reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves?

Storage in the Oracle Cloud

Pat Shuff - Tue, 2016-04-19 01:07
This week we are going to focus on storage. Storage is a slippery slope and a difficult conversation to have. Are we talking about a file synchronization service like dropbox.com, google.com/docs, or box.com? Are we talking about raw block storage or long term archive storage? There are many services available from many vendors. We are going to focus on block storage in the cloud that can be used for files if desired or for backups of databases and virtual machines. Some of the cloud vendors have specialized storage offerings, like Azure Tables that provide NoSQL-type storage, or Amazon S3, which allows you to run a website without a web server. Today we will look at the Oracle IaaS Storage set of products. This is different from the Oracle PaaS Documents option, which is more of a Google Docs-like solution. The IaaS Storage is a block of storage that you pay for either on a metered-usage or non-metered-usage basis.

Notice from the cloud.oracle.com web page that we click on Infrastructure and follow the Storage route. We see that we get the raw block storage or the archive storage as options. We also have the option of an on-site cache front end that reduces latency and offers an NFS front end to the users, providing more of a document management strategy than a raw block option.

Before we dive a little deeper into the options and differences between the storage appliance, spinning disk, and spinning tape in the cloud, we need to have a discussion about pricing and usage models. If you click on the Pricing tab at the top of the screen you see the screens below.

Metered pricing consists of three parts: 1) how much storage are you going to start with, 2) how much storage are you going to grow to, and 3) how much are you going to read back? Metering is difficult to guesstimate and unfortunately it has a significant cost associated with being wrong. Many long term customers of AWS S3 understand this and have gotten sticker shock when the first bill comes in. The basic cost for outbound transfer is measured on a per GB basis. The more that you read across the internet, the more you pay. You can circumvent this by reading into a compute server in the Oracle cloud and not have to pay the outbound transfer. If, for example, you are backing up video surveillance data and uploading 24 hours of video at the end of the day, you can read the 24 hour bundle into a compute server and extract the 10-15 minutes that you are interested in, paying the outbound charges on compute for the smaller video file.

Non-metered pricing consists of one part: how much storage are you going to use over the year. Oracle does not charge for the amount of data transferred in-bound or out-bound with this storage. You can read and write as much as you want and there is no charge for data transfer across the internet. In the previous example you could read the 24 hours of video from the cloud storage, throw away 90% of it from a server in your data center, and not incur any charges for the volume of transfer.

Given that pricing is difficult to calculate, we created our own spreadsheet to estimate pricing as well as part numbers that should be ordered when consuming Oracle cloud resources. The images below show the cost of 120 TB of archive storage, metered block storage, and non-metered block storage.

Note that the data transfer price is non-trivial. Reading the data back from the cloud can get significantly more expensive than the cost of the storage itself. A good rule of thumb is that the cost of spinning disk in the cloud should not exceed $30/TB/month or roughly $400/TB/year. If you look at the cost of a NetApp or EMC storage system, you are looking at a $3K-$4K/TB purchase price with 10% annual maintenance ($300-$400 per TB per year). If you are currently running out of storage and your NFS filer is filling up, you can purchase cloud resources for a few months and see if it works. It won't cost you anything more than paying support, and you can grow your cloud storage as needed rather than buying 3 years ahead as you would with a filer in your data center. The key issues with cloud storage are latency and access times. Access to a filer in your data center is typically 10 ms, whereas access time to cloud storage is typically 80+ ms. All cloud storage vendors have on-site appliance solutions that act as cache front ends to address this latency problem. Oracle has one that talks NFS. Amazon has one that talks iSCSI. Microsoft has one that talks SMB. There truly is no single vendor with a generic solution that addresses all problems.
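To make the rule of thumb concrete, here is a rough back-of-the-envelope comparison for 120 TB over three years, using the figures quoted above (non-metered cloud at roughly $400/TB/year versus a $3,500/TB filer with 10% annual maintenance). The exact numbers will vary by vendor and discount, so treat this as a sketch rather than a quote.

# Rough 3-year cost comparison for 120 TB, using the figures above (all amounts in USD)
TB=120
CLOUD_PER_TB_YEAR=400      # non-metered cloud storage, roughly $400/TB/year
FILER_PER_TB=3500          # NetApp/EMC-class filer, purchase price per TB
MAINT_PCT=10               # annual maintenance as a percentage of purchase price

cloud_3yr=$(( TB * CLOUD_PER_TB_YEAR * 3 ))
filer_3yr=$(( TB * FILER_PER_TB + 3 * TB * FILER_PER_TB * MAINT_PCT / 100 ))

echo "Cloud, 3 years: \$$cloud_3yr"   # 120 * 400 * 3 = 144,000
echo "Filer, 3 years: \$$filer_3yr"   # 420,000 + 126,000 = 546,000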

Enough with the business side of storage. Unfortunately, storage is a commodity, so the key conversation is economics, reliability, and security. We have already addressed economics. When it comes to reliability, the three cloud vendors address data replication and availability in different ways. Oracle triple mirrors the data and provides public-private key encryption of all data uploaded to the cloud. Data can be mirrored to another data center in the same geography but cannot be mirrored across an ocean. This selection is done post configuration and is tied to your account as a storage configuration.

Now to the ugly part of block storage. Traditionally, block storage has been addressed through an operating system as a logical unit or an aggregation of blocks on a disk drive. Terms like tracks and sectors bleed into the conversation. With cloud storage, none of that is part of the discussion. Storage in the cloud is storage. It is accessed through an interface called a REST api. The data can be created, read, updated, and deleted using HTTP calls. All of this is documented in the Oracle Documents - Cloud Storage web site.

The first step is to authenticate to the cloud site with an instance name, username, and password. What is passed back is an authentication token. Fortunately, there are a ton of tools to help read and write HTTP requests that are specifically tuned to help create headers and JSON-structured data packets for REST api interfaces. The screen below shows the Postman interface available through Chrome. A similar one exists for Firefox called RESTClient. Unfortunately, there is no extension for Internet Explorer.

The first step is to get an auth header by typing in the username and password into the Basic Authentication screen.

Once we are authorized, we connect to the service by going to https://storage.us2.oraclecloud.com/v1/Storage-(identity domain) where identity domain is the cloud provider account that we have been assigned. In our example we are connecting to metcsgse00029 as our identity domain and logging in as the user cloud.admin. We can see what "containers" are available by sending a GET call, or create a new container by sending a PUT call with the new container name at the end of our URL. I use the word container because the top level of storage consists of different areas. These areas are not directories. They are not file systems. They are containers that hold special properties. We can create a container that is standard storage, which represents spinning disk in the cloud, or we can create a container that is archive storage, which represents a tape unit in the cloud. This is done by sending the X-Storage-Class header. If there is no header, the default is block storage and spinning disk. If the X-Storage-Class is assigned to Archive, it is tape in the cloud. Some examples of creating a container are shown below. We can do this via Postman inside Chrome or from the command line.

From the command line this would look like

export OUID=cloud.admin
export OPASS=mypassword
export ODOMAIN=metcsgse00029
curl -is -X GET -H "X-Storage-User:Storage-$ODOMAIN:$OUID" \
     -H "X-Storage-Pass:$OPASS" \
     https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

This should return an HTTP 200 OK response with an embedded header of X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706. Note that the value of the X-Auth-Token header is what we will pass into all other requests. This token will change with each request and is good for 30 minutes from first execution. Once the authentication is finished, we change the request type from a GET to a PUT and append the container name to the end. The screen above shows how to do this with Postman. The results should look like the screen below. We can do this from the command line as shown below as well.

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area
In this example we create a new container called new_area from the command line. We can verify this by reviewing the cloud storage, changing the PUT to a GET.

curl -is -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN
Both of these methods allow us to see the storage that we created. I personally do not like this interface. It is not intended to be human consumable. Uploading and downloading a file is difficult at best. A user interface that supports dragging and dropping files is desirable. This is where Dropbox and Google Docs shine. They allow you to drag and drop as well as synchronize directories to cloud storage. The Oracle Storage Cloud is not intended to be this solution. It is designed so that you can drop a new library into your RMAN backup and back up straight from your database to the cloud. You can point your CommVault or Legato backup software to a cloud instance and replicate your data to the cloud. If you want a human readable interface you need to purchase something like Cloudberry Explorer from Cloudberry. This gives you a Windows Explorer-like interface and allows you to drag and drop files, create containers and directories, and schedule archives or backups as desired.
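Even without a GUI, a single file can be pushed into or pulled from a container with the same REST api calls we used above. Here is a minimal sketch that reuses the auth token and the new_area container from the earlier examples; the object name backup_2016_04_18.dmp is made up for illustration.

# Upload a local file into the new_area container (curl -T implies a PUT, i.e. object create)
curl -is -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -T backup_2016_04_18.dmp \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area/backup_2016_04_18.dmp

# Download the same object to a local file (object read = GET)
curl -s -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -o restored_copy.dmp \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area/backup_2016_04_18.dmp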

Note that the way you create a block storage container vs. an archive container is a simple menu selection. Retrieving archive storage is a little more complex because the tape unit must stage the file from tape to disk and notify you that the restoration has completed; we will defer that discussion to a later blog.
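For the REST api route, creating an archive container should be a one-header variation of the container creation we did earlier, using the X-Storage-Class header described above. Treat this as a sketch rather than a tested recipe; the container name video_archive is made up for illustration.

# Create an archive (tape-backed) container instead of a standard (spinning disk) one
curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -H "X-Storage-Class: Archive" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/video_archive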

Copying files is little more than dragging and dropping a file between sections of a window in Cloudberry.

For completeness, I have included the command line screen shots so that you can see the request/response of a command line interaction.

It is important to remember our objective. We can use the cloud block storage as a repository for things like databases and as a holding point for our backups. When we configure a database in the cloud, we back up and restore from this storage. This is configured in the database provisioning screen. Storage-metcsgse00029/backup is the location for RMAN backups and restores. The backup container is created through the REST api or the Cloudberry interface. We can also attach to the cloud storage through the cloud storage appliance software, which runs inside a virtual machine and listens for NFS requests and translates them into REST api calls. A small disk is attached to the virtual machine and it acts as a cache front end to the cloud storage. As files are written via NFS they are copied to the cloud storage. As the cache fills up, file contents are dropped from local storage and the metadata pointing to where the files are located is updated, relocating the storage to the cloud rather than the cache disk. If a file is retrieved via NFS, the file is read from cache or retrieved from the cloud and inserted into the cache as it is written to the client that requested it.
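As a sketch of what the RMAN side of this looks like, the database backup module is typically wired in by pointing an SBT channel at the cloud backup library and a configuration file that holds the endpoint and credentials. The library path and the OPC_PFILE location below are placeholder assumptions for illustration, not values taken from this environment.

# Hypothetical RMAN configuration for backing up straight to the cloud storage container
rman target / <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config/opc_backup.cfg)';
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
BACKUP DEVICE TYPE sbt DATABASE;
EOF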

In summary, we covered the economics behind why you would select cloud storage over on-site storage. We talked about how to access the storage from a browser-based interface, a web-based interface, or the command line. We talked about improving latency and security. Overall, cloud-based storage is something that everyone is familiar with. Products like Facebook, Picasa, or Instagram do nothing more than store photos in cloud storage for you to retrieve when you want. You pay for these services through advertisements injected into the web page. Corporations are turning more and more towards cloud storage as a way to consume long-term storage at a much lower price. The Oracle Storage Cloud service is the first of three that we will evaluate this week.