# Feed aggregator

### Control Charts

Chen Shapira - Sun, 2013-10-13 22:43

Last week, while working on a customer engagement, I learned a new method of quantifying the behavior of time-series data. The method is called a “Control Chart”, and credit goes to Josh Wills, our director of data science, for pointing it out. I thought I'd share it with my readers as it's easy to understand, easy to implement, flexible and very useful in many situations.

The problem is ages old – you collect measurements over time and want to know when your measurements indicate abnormal behavior. “Abnormal” is not well defined, and that's on purpose – we want the method to be flexible enough to match whatever you define as an issue.

For example, let's say Facebook is interested in tracking the usage trend for each user, catching those whose use is decreasing.

There are a few steps to the Control Chart method:

1. Collect all relevant data points. In our case, number of minutes of Facebook use per day for each user.
2. Calculate a baseline – this can be the average use for each user or the average use for similar demographics, or even an adaptive average of the type calculated by Oracle Enterprise Manager, to take into account decreased Facebook use over the weekend.
3. Calculate “zones” one, two and three standard deviations around the baseline.

Those zones can be used to define rules for normal and abnormal behavior of the system. These rules are what make the method valuable.
Examples of rules that define abnormal behavior:

1. Any point 3 standard deviations above the baseline. This indicates an extreme sudden increase.
2. 7 consecutive measurements more than one standard deviation above the baseline. This indicates a sustained increase.
3. 9 consecutive measurements, each higher than the previous one. This indicates a steady upward trend.
4. 6 consecutive measurements, each more than two standard deviations away from the baseline and each on the opposite side of the baseline from the previous measurement. This indicates instability of the system.
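The zones and rules above are simple enough to sketch in a few lines. Here is a minimal Python illustration (my own sketch, not any monitoring product's API) that derives the baseline and zones from a reference window of measurements and checks two of the rules:

```python
from statistics import mean, stdev

def zones(history):
    """Baseline and standard deviation from a reference window of measurements."""
    baseline = mean(history)
    sigma = stdev(history)
    return baseline, sigma

def check_rules(values, baseline, sigma):
    """Apply rules 1 and 3 from the list above; return the names of any that fire."""
    flags = []
    # Rule 1: any point more than 3 standard deviations above the baseline.
    if any(v > baseline + 3 * sigma for v in values):
        flags.append("spike")
    # Rule 3: 9 consecutive measurements, each higher than the previous one.
    run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur > prev else 1
        if run >= 9:
            flags.append("upward trend")
            break
    return flags
```

Computing the baseline and sigma from a separate reference window, rather than from the values being checked, keeps an extreme spike from inflating its own zones.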

There are even sets of standard rules used in various industries – best practices of sorts. The Western Electric rules and the Nelson rules are particularly well known.

Note how flexible the method is – you can use any combination of rules that highlights the abnormalities you are interested in.
Also note that while the traditional use indeed involves charts, the values and rules are very easy to calculate programmatically – visualization can be useful, but it is not mandatory.
If you measure CPU utilization on a few servers, visualizing the chart and actually seeing the server behavior can be useful. If you are Facebook and monitor user behavior, visualizing a time series for every one of millions of users is hopeless. Calculating baselines, standard deviations and rules for each user, however, is trivial.

Also note how this problem is “embarrassingly parallel”. To calculate behavior for each user, you only need to look at the data for that particular user. A parallel, share-nothing platform like Hadoop can scale the calculation indefinitely, simply by throwing more servers at the problem. The only limit is the time it takes to calculate the rules for a single user.
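A toy illustration of the share-nothing property (the flagging rule here is deliberately simplistic and hypothetical): each user's flag is computed from that user's data alone, so the same function could run as an independent map task per user on Hadoop.

```python
def is_spiking(minutes_per_day):
    # The flag depends only on this one user's own measurements,
    # which is what lets the work shard cleanly across machines.
    baseline = sum(minutes_per_day) / len(minutes_per_day)
    return max(minutes_per_day) > 2 * baseline

usage = {"alice": [30, 32, 31, 200], "bob": [5, 6, 5, 4]}
flags = {user: is_spiking(m) for user, m in usage.items()}
```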

Naturally, I didn't dive into some of the complexities of using Control Charts, such as how to select a good baseline, how to calculate the standard deviation (or whether to use another statistic to define the zones), and how many measurements should be examined before a behavior signals a trend. If you think this tool could be useful for you, I encourage you to investigate more deeply.

Categories: DBA Blogs

### 'Medalling' in Humour

Gary Myers - Sun, 2013-10-13 03:31
Yesterday I competed, and came third, in the Humorous Speech contest of the Hawkesbury Division of Toastmasters. I'm pretty chuffed with that.

Partly, I'll blame Yuri for this. He recommended Toastmasters as a way of getting some extra public speaking skills. Luckily for me, there's a group that meets one lunchtime a week at work, which makes attendance relatively easy. Much easier than trying to fit in an evening on top of my kids' busy social and sporting calendars. I started going just before Christmas and became a paid up member in January this year.

Next, I'll blame Neil Sequeira, who prompted me to enter the club contest a month ago on the basis that attendees regularly laughed at my speeches... in a nice way. I won that, and then it dawned on me that I'd be going into the Area level contest. Our club was hosting that event, so I had a slight 'home ground' advantage, but I was still in front of a bunch of people I didn't know, most of whom have been honing their speaking skills for YEARS.

I won that, which meant going up to the Division level contest last night. That was in a church hall, unfamiliar territory for me. We were speaking from an elevated stage, and with a headset microphone. Getting into the big leagues.

I was a bit ruffled because my trip there was delayed by my phone unaccountably deciding it couldn't find any GPS signal, and refusing to tell me where I was or how to get where I was supposed to be. My destination was the other side of Middle Harbour, so my regular tactic of pointing the car vaguely in the right direction and hoping for the best was foiled by its inability to fly across the water. Resorting to my trusty and dusty Gregory's Street Directory, I made the 40 minute journey in a mere 80 minutes.

My speech starts with the other Gary Myers, multi-time champion of Summernats (which I've mentioned before in my blog) and land speed record holder. Oh, and candidate for one of the NSW Senate spots for Federal parliament. He didn't win, but a candidate for the same party did get a spot for Victoria. I suspect this came as somewhat of a surprise to him, as the Senate voting system is complex and highly unpredictable. An unemployed ex-sawmill worker who didn't own a suit and has kangaroo poo chucking as a hobby will be an interesting addition to the chamber.

I was more than happy with my third place finish in the contest. The winner, who also took the Table Topics contest top prize, was excellent. And second place went to Dave the Happy Singer, who took the mick out of homoeopathy and similar rubbish, so I won't criticise that. I get a small trophy and a certificate. And an update to my LinkedIn profile.

And, for posterity, my certificates:

### Oracle Big Data Meetup - 09-OCT-2013

Andrew Clarke - Sat, 2013-10-12 12:19
The Oracle guys running the Big Data 4 the Enterprise Meetup are always apologetic about marketing. The novelty is quite amusing. They do this because most Big Data Meetups are full of brash young people from small start-ups who use cool open source software. They choose cool open source software partly because they're self-styled hackers who like being able to play with their software any way they choose. But mainly it is because the budgetary constraints of being a start-up mean they have to choose between a Clerkenwell office and Aeron chairs, or enterprise software licenses, and that's no choice at all.

But an Oracle Big Data meetup has a different constituency. We come from an enterprise background, we've all been using Oracle software for a long time and we know what to expect from an Oracle event. We're prepared to tolerate a certain amount of Oracle marketing because we want to hear the Oracle take on things, and we come prepared with our shields up. Apart from anything else, the Meetup sponsor is always cut some slack, in exchange for the beer'n'pizza.

Besides, the Oracle Big Data Appliance is quite an easy sell, certainly compared to the rest of the engineered systems. The Exa stack largely comprises machines which replace existing servers, whereas Big Data is a new requirement. Most Oracle shops probably don't have a pool of Linux/Java/network hackers on hand to cobble together a parallel cluster of machines and configure them to run Hadoop. A pre-configured Exadoop appliance with Oracle's imprimatur is just what those organisations need. The thing is, it seems a bit cheeky to charge a six figure sum for a box with a bunch of free software on it. No matter how good the box is. Particularly when it can be so hard to make the business case for a Big Data initiative.

Stephen Sheldon's presentation on Big Data Analytics As A Service addressed exactly this point. He works for Detica. They have stood up an Oracle BDA instance which they rent out for a couple of months to organisations who want to try a Big Data initiative. Detica provide a pool of data scientists and geeks to help out with the processing and analytics. At the end of the exercise the customer has a proven case showing whether Big Data can give them sufficient valuable insights into their business. This strikes me as a highly neat idea, one which other companies will wish they had thought of first.

Ian Sharp (one of the apologetic Oracle guys) presented on Oracle's Advanced Analytics. The big idea here is R embedded in the database. This gives data scientists access to orders of magnitude more data than they're used to having on their desktop R instances. Quants working in FS organisations will most likely have an accident when they realise just how great an idea this is. Unfortunately, Oracle R Enterprise is part of the Advanced Analytics option, so probably only the big FS companies will go for it. But the Oracle R distro is still quite neat, and free.

Mark Sampson from Cloudera rounded off the evening with a talk on a new offering, Cloudera Search. This basically provides a mechanism for building a Google / Amazon style search facility over a Hadoop cluster. The magic here is that Apache Solr is integrated into the Hadoop architecture instead of as a separate cluster, plus a UI building tool. I spent five years on a project which basically did this with an Oracle RDBMS, hand-rolled ETL and XML generators and lots of Java code plumbing an external search engine into the front-end. It was a great system, loved by its users and well worth the effort at the time. But I expect we could do the whole thing again in a couple of months with this tool set. Which is good news for the next wave of developers.

Some people regard attending technical meetups as a bit odd. I mean, giving up your free time to listen to a bunch of presentations on work matters? But if you find this stuff interesting you can't help yourself. And if you work with Oracle tech and are interested in data then this meetup is definitely worth a couple of hours of your free time.

### Share & Enjoy : Using a JDeveloper Project as an MDS Store

Antony Reynolds - Sat, 2013-10-12 01:24
Share & Enjoy : Sharing Resources through MDS

One of my favorite radio shows was The Hitchhiker's Guide to the Galaxy by the sadly departed Douglas Adams.  One of the characters, Marvin the Paranoid Android, was created by the Sirius Cybernetics Corporation, whose corporate song was entitled Share and Enjoy!  Just like using the products of the Sirius Cybernetics Corporation, reusing resources through MDS is not fun, but at least it is useful and avoids some problems in SOA deployments.  So in this blog post I am going to show you how to re-use SOA resources stored in MDS, using JDeveloper as a development tool.

The Plan

We would like to have some SOA resources such as WSDLs, XSDs, Schematron files, DVMs etc. stored in a shared location.  This gives us the following benefits:

• Single source of truth for artifacts
• Remove cross composite dependencies which can cause deployment and startup problems
• Easier to find and reuse resources if stored in a single location

So we will store a WSDL and XSD in MDS, using a JDeveloper project to maintain the shared artifact and using File based MDS to access it from development and Database based MDS to access it from runtime.  We will create the shared resources in a JDeveloper project and deploy them to MDS.  We will then deploy a project that exposes a service based on the WSDL.  Finally we will deploy a client project to the previous project that uses the same MDS resources.

Creating Shared Resources in a JDeveloper Project

First let's create a JDeveloper project and put our shared resources into that project.  To do this:

1. In a JDeveloper Application create a New Generic Project (File->New->All Technologies->General->Generic Project)
2. In that project create a New Folder called apps (File->New->All Technologies->General->Folder) – It must be called apps for local File MDS to work correctly.
3. In the project properties delete the existing Java Source Paths (Project Properties->Project Source Paths->Java Source Paths->Remove)
4. In the project properties add a new Java Source Path pointing to the just-created apps directory (Project Properties->Project Source Paths->Java Source Paths->Add)

Having created the project we can now put our resources into that project, either copying them from other projects or creating them from scratch.

Create a SOA Bundle to Deploy to a SOA Instance

Having created our resources we now want to package them up for deployment to a SOA instance.  To do this we take the following steps.

1. Create a new JAR deployment profile (Project Properties->Deployment->New->Jar File)
2. In JAR Options uncheck the Include Manifest File
3. In File Groups->Project Output->Contributors uncheck all existing contributors and check the Project Source Path
4. Create a new SOA Bundle deployment profile (Application Properties->Deployment->New->SOA Bundle)
5. In Dependencies select the project jar file from the previous steps.
6. On Application Properties->Deployment unselect all options.

The bundle can now be deployed to the server by selecting Deploy from the Application Menu.

Create a Database Based MDS Connection in JDeveloper

Having deployed our shared resources, it would be good to check they are where we expect them to be, so let's create a Database Based MDS Connection in JDeveloper to let us browse the deployed resources.

1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
2. Make the Connection Type DB Based MDS and choose the database Connection and partition.  The username of the connection will be the <PREFIX>_mds user and the MDS partition will be soa-infra.

Browse the repository to make sure that your resources deployed correctly under the apps folder.  Note that you can also use this browser to look at deployed composites.  You may find it interesting to look at the /deployed-composites/deployed-composites.xml file, which lists all deployed composites.

Create a File Based MDS Connection in JDeveloper

We can now create a File based MDS connection to the project we just created.  A file based MDS connection allows us to work offline without a database or SOA server.  We will create a file based MDS that actually references the project we created earlier.

1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
2. Make the Connection Type File Based MDS and choose the MDS Root Folder to be the location of the JDeveloper project previously created (not the source directory, the top level project directory).

We can browse the file based MDS using the IDE Connections Window in JDeveloper.  This lets us check that we can see the contents of the repository.

Using File Based MDS

Now that we have MDS set up both in the database and locally in the file system we can try using some resources in a composite.  To use a WSDL from the file based repository:

1. Insert a new Web Service Reference or Service onto your composite.xml.
2. Browse the Resource Palette for the WSDL in the File Based MDS connection and import it.
3. Do not copy the resource into the project.
4. If you are creating a reference, don’t worry about the warning message, that can be fixed later.  Just say Yes you do want to continue and create the reference.

Note that when you import a resource from an MDS connection it automatically adds a reference to that MDS into the application's adf-config.xml.  SOA applications do not deploy their adf-config.xml; they use it purely to help resolve oramds protocol references in SOA composites at design time.  At runtime the soa-infra application's adf-config.xml is used to help resolve oramds protocol references.

The reason we point the file based MDS at the project directory, rather than the apps directory underneath, is that when we deploy SOA resources to MDS as a SOA bundle the resources are all placed under the apps MDS namespace.  To make sure that our file based MDS includes an apps namespace we have to rename the src directory to apps and then make sure that our file based MDS points to the directory above the new source directory.

Patching Up References

When we use an abstract WSDL as a service, the SOA infrastructure automatically adds binding and service information at run time.  An abstract WSDL used as a reference needs to have binding and service information added in order to compile successfully.  By default the imported MDS reference for an abstract WSDL will look like this:

<reference name="Service3"
           ui:wsdlLocation="oramds:/apps/shared/WriteFileProcess.wsdl">
  <interface.wsdl interface="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.interface(WriteFileProcess)"/>
  <binding.ws port="" location=""/>
</reference>

Note that the port and location properties of the binding are empty.  We need to replace the location with a runtime WSDL location that includes binding information, this can be obtained by getting the WSDL URL from the soa-infra application or from EM.  Be sure to remove any MDS instance strings from the URL.

The port information is a little more complicated.  The first part of the string should be the target namespace of the service, usually the same as the first part of the interface attribute of the interface.wsdl element.  This is followed by #wsdl.endpoint and then, in parentheses, the service name from the runtime WSDL and the port name from the WSDL, separated by a /.  The format should look like this:

{Service Namespace}#wsdl.endpoint({Service Name}/{Port Name})

So if we have a WSDL like this:

<wsdl:definitions
  …
  targetNamespace="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess"
  >
  …
  <wsdl:service name="writefileprocess_client_ep">
    <wsdl:port name="WriteFileProcess_pt"
               binding="client:WriteFileProcessBinding">
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>

Then we get a binding.ws port like this:

http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)

Note that you don’t have to set actual values until deployment time.  The following binding information will allow the composite to compile in JDeveloper, although it will not run in the runtime:

<binding.ws port="dummy#wsdl.endpoint(dummy/dummy)" location=""/>

The binding information can be changed in the configuration plan.  Deferring it like this means that you must have a configuration plan in order to invoke the reference, which reduces the risk of deploying composites with references pointing at the wrong environment.

Summary

In this blog post I have shown how to store resources in MDS so that they can be shared between composites.  The resources can be created in a JDeveloper project that doubles as an MDS file repository.  The MDS resources can be reused in composites.  If using an abstract WSDL from MDS I have also shown how to fix up the binding information so that at runtime the correct endpoint can be invoked.  Maybe it is more fun than dealing with the Sirius Cybernetics Corporation!

### Once again APEX helps ORACLE TEAM USA win the America's Cup

David Peake - Fri, 2013-10-11 16:17

Found some interesting pieces on how ORACLE TEAM USA used Application Express to help them win the 34th America's Cup in San Francisco.
Of course these and other great links are available on the Oracle Application Express OTN page.

### Mobile App Designer

Tim Dexter - Thu, 2013-10-10 14:45

Back in August a new Oracle mobile solution jumped out of the gate, the Mobile App Designer (MAD). I seem to have been on the road every week for the last, goodness knows how many weeks. I have finally found some time this week to get down and work with it. It's pretty cool and, above all, it's capable of providing a mobile-platform-independent reporting solution.

But you already have a mobile application! Yep, and I think they both sit quite comfortably together. The Oracle BI Mobile Application is available from the App Store for Apple users. It's a great app – build reports, dashboards and BIP reports for your browser-based users, and your Apple app users can take advantage of them immediately.

MAD takes the next step forward. Maybe you don't use, or cannot use, Apple mobile devices? Maybe you need to build something more specific around a business area that provides users with a richer experience, beyond what Answers and Dashboards can offer. However, you do not want to have to rely on the tech folks to build the mobile application – that's just piling more work on them. You also want to be platform agnostic; you might have a mix of mobile platforms. MAD can help.

For those of you that have already used the Online Template layout editor with BI Publisher, you already know how to build a mobile application. The MAD interface is essentially the online template builder user interface, tweaked for a mobile destination, i.e. a phone or tablet.

You build your data model as you would normally, including the newer direct data model built on a subject area from OBIEE.

Then start to create the 'pages' of your application and the content to sit on those pages. All the normal stuff – tables, pivot tables, charts, images – plus accordions, filters and repeating objects. On top of that is the ability to extend the visual objects that are available to users. Maps (Google or Oracle), D3 visuals, Gantt charts, org charts – if you can either write the code or leverage an existing JavaScript library, MAD has the extension framework to support it.

You can build and test in a browser and then deploy to your own BI App Store. Users, on their mobile devices, can then subscribe to an application. They can open and interact with your app using their phone or tablet's interactive features just as they would with a native application.  As you update your app and add new features the changes will be picked up the next time your users open the application.

Interested? Want to know more? The Oracle MAD home page has a ton of content including tutorials, etc. We are planning to dig into MAD in forthcoming posts. The geek in me wanted to be able to build plugins using D3 and other visuals. I have been working with Leslie on some of the documentation and we'll be sharing some of that 'plugin' doc and how-tos in the coming weeks.

Categories: BI & Warehousing

### Multi-Colored SQL

Scott Spendolini - Thu, 2013-10-10 07:14
My OCD was delighted this morning when I came across a new feature of SQL Developer: Connection Color.  Brace yourselves, as this feature ain't much, but could have a huge impact on reducing accidental commands in the wrong schema or server.

To use it, simply create or edit a database connection, and set the Connection Color to whichever color you choose:

Once set, any and all windows associated with that connection will be outlined in that color.  That's it!  I already gleefully went through my connection list and associated different colors with different types of connections.  For example, our development schemas got green:

While our production schemas got red:

Now, no matter what type of object I am editing, I will instantly know which schema it belongs to based on the color of the border.  Simple AND effective!

### ★ Native JSON Support in Oracle Database 12c

Eddie Awad - Thu, 2013-10-10 07:00

If you want JSON support in your Oracle database today you can use PL/JSON, an open source JSON parser written in PL/SQL.

However, as Marco reported from Oracle OpenWorld, native JSON support may be an upcoming new feature in Oracle Database 12c.

This new feature allows the storage of JSON documents in table columns with existing data types like VARCHAR2, CLOB, RAW, BLOB and BFILE.

A new check constraint makes sure only valid JSON is inserted.

For example: CHECK (column IS JSON).

New built-in operators allow you to work with stored JSON documents. For example, JSON_VALUE enables you to query JSON data and return the result as a SQL value. Other operators include JSON_QUERY, JSON_EXISTS and JSON_TABLE.
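As an analogy only (not the Oracle implementation), the two ideas – a validity check like the IS JSON constraint and a scalar extraction like JSON_VALUE – can be mimicked in a few lines of Python with the standard json module:

```python
import json

def is_json(text):
    """Rough analogue of the IS JSON check constraint: accept only well-formed JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False

def json_value(text, *path):
    """Loose analogue of JSON_VALUE: extract one value by key path."""
    node = json.loads(text)
    for key in path:
        node = node[key]
    return node

doc = '{"customer": {"name": "Acme", "orders": 3}}'
```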

Cool stuff!

### Multiple SOA Developers Using a Single Install

Antony Reynolds - Wed, 2013-10-09 16:37
Running Multiple SOA Developers from a Single Install

A question just came up about how to run multiple developers from a single software install.  The objective is to have a single software installation on a shared server and then provide different OS users with the ability to create their own domains.  This is not a supported configuration but it is attractive for a development environment.

Out of the Box

Before we do anything special let's review the basic installation.

• Oracle WebLogic Server 10.3.6 installed using oracle user in a Middleware Home
• Oracle SOA Suite 11.1.1.7 installed using oracle user
• Software installed with group oinstall
• Developer users dev1, dev2 etc
• Each developer user is a member of oinstall group and has access to the Middleware Home.
Customizations

To get this to work I did the following customizations:

• In the Middleware Home make all user readable files/directories group readable and make all user executable files/directories group executable.
• find $MW_HOME -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
• find $MW_HOME -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args
Domain Creation

When creating a domain for a developer note the following:

• Each developer will need their own FMW repository, perhaps prefixed by their username, e.g. dev1, dev2 etc.
• Each developer needs to use a unique port number for all WebLogic channels
• Any use of Coherence should use Well Known Addresses to avoid cross talk between developer clusters (note SOA and OSB both use Coherence!)
• If using Node Manager each developer will need their own instance, using their own configuration.
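One way to keep the port numbers unique is to hand each developer a disjoint band of ports. A hypothetical sketch (7001 is the usual WebLogic default admin port; the stride and channel names here are my own assumptions, not a supported convention):

```python
def dev_ports(dev_index, base=7001, stride=100):
    """Give developer N a disjoint band of ports starting at base + N * stride."""
    start = base + dev_index * stride
    # Channel names are illustrative; add one entry per WebLogic channel in use.
    return {"admin": start, "soa_server": start + 1, "node_manager": start + 2}
```

So dev1 gets 7001-7003, dev2 gets 7101-7103, and so on – no two developers' channels can collide.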

### Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

Antony Reynolds - Tue, 2013-10-08 17:14
Book: Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A hands On Tutorial.  A free download of Chapter 3 is available to help you get a feel for the style of the book.

A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on what is the best way to do things in certain circumstances.

I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

The chapter on Preparing to Go-Live should be read closely before the go live date; at the very least pay attention to the “Purging data” section.

Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

The editing seems a little loose – the language is strange in places and there are references to colors in black and white diagrams – but the content is solid and helpful to anyone tasked with implementing Oracle B2B.

Eddie Awad - Tue, 2013-10-08 07:00

Note that some presenters chose not to make their sessions available.

Via the Oracle OpenWorld Blog.

### Convert SQLServer to Oracle using files - Part 4

Barry McGillin - Mon, 2013-10-07 21:30
This, the last part of a four-part tutorial, covers the movement of data using files generated by Oracle SQL Developer.  In Part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In Part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent. In Part 3 we generated DDL and ran it against an Oracle database.
Now let's look at the data move scripts that we generated in an earlier part.  We need to zip up the files and copy them to the SQL Server machine to run.  The images below show the files moved to our SQL Server machine.  We go into the main directory under data move and run the batch file MicrosoftSQLServer_data.bat.  This batch file takes a number of parameters.

This script then unloads the data from the database for the databases selected earlier.  We can see the .dat files in the image above.  Now we just need to transfer the data to the Oracle database machine for loading.  We can go back out to the main datamove directory, zip up the entire directory including the scripts, and then ftp that to the database machine.
The files need to be unzipped on that machine; cd down through the main directory until you find a file called oracle_loader.sh.

We can run the script as below.  The output shows exactly what running the oracle_loader.sh script produces on the data we have taken from SQL Server.

[oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]\$ sh ./oracle_loader.sh orcl blog blog
/scratch/datamove/2013-10-08_00-05-16/Northwind /scratch/datamove/2013-10-08_00-05-16
/scratch/datamove/2013-10-08_00-05-16/Northwind/dbo_Northwind /scratch/datamove/2013-10-08_00-05-16/Northwind

SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 7 18:58:42 2013

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Table altered.
Table altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:43 2013

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8
Commit point reached - logical record count 9

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Commit point reached - logical record count 49

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Commit point reached - logical record count 53

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Commit point reached - logical record count 64
Commit point reached - logical record count 77

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Commit point reached - logical record count 4

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 768
Commit point reached - logical record count 832
Commit point reached - logical record count 896
Commit point reached - logical record count 960
Commit point reached - logical record count 1024
Commit point reached - logical record count 1088
Commit point reached - logical record count 1152
Commit point reached - logical record count 1216
Commit point reached - logical record count 1280
Commit point reached - logical record count 1344
Commit point reached - logical record count 1408
Commit point reached - logical record count 1472
Commit point reached - logical record count 1536
Commit point reached - logical record count 1600
Commit point reached - logical record count 1664
Commit point reached - logical record count 1728
Commit point reached - logical record count 1792
Commit point reached - logical record count 1856
Commit point reached - logical record count 1920
Commit point reached - logical record count 1984
Commit point reached - logical record count 2048
Commit point reached - logical record count 2112
Commit point reached - logical record count 2155

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 768
Commit point reached - logical record count 830

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8
Commit point reached - logical record count 9
Commit point reached - logical record count 10
Commit point reached - logical record count 11
Commit point reached - logical record count 12
Commit point reached - logical record count 13
Commit point reached - logical record count 14
Commit point reached - logical record count 15
Commit point reached - logical record count 16
Commit point reached - logical record count 17
Commit point reached - logical record count 18
Commit point reached - logical record count 19
Commit point reached - logical record count 20
Commit point reached - logical record count 21
Commit point reached - logical record count 22
Commit point reached - logical record count 23
Commit point reached - logical record count 24
Commit point reached - logical record count 25
Commit point reached - logical record count 26
Commit point reached - logical record count 27
Commit point reached - logical record count 28
Commit point reached - logical record count 29

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Commit point reached - logical record count 3

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Commit point reached - logical record count 64
Commit point reached - logical record count 91

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
/scratch/datamove/2013-10-08_00-05-16/Northwind
/scratch/datamove/2013-10-08_00-05-16
[oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$

We can now take a look at some of the data in the Oracle database by opening the dbo_Northwind connection we made earlier and browsing its tables.

And that's it. In part one, we generated capture scripts from SQL Developer to unload metadata from SQL Server. In part two, we loaded the metadata and converted it into its Oracle equivalent. In part three, we generated the DDL and ran it, creating the new Oracle users. In part four, we unloaded the data, copied it to the Oracle machine, and loaded it using the scripts we generated from Oracle SQL Developer.

### Convert SQLServer to Oracle using files - Part 3

Barry McGillin - Mon, 2013-10-07 20:23
In part 1, we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2, we used SQL Developer to create a migration project and load the capture files into SQL Developer. We then converted the metadata into its Oracle equivalent.
In this episode, we will generate DDL from our migration project. Right now, we can see the Oracle objects in the Converted Database Objects node.
If we right-click on Converted Database Objects and choose Generate, we can generate DDL to create the Oracle schema and objects.

The wizard appears again with the introduction screen. Clicking Next takes us directly to the target database screen.

Click Offline to choose generation of files. For control over exactly how the files are generated, click Advanced Options.

You can select how you want your files generated: all in one file, a file per object type, or a file per object. You can also choose the types of objects you want to generate and run.
In this demo, I will just generate tables, data and supporting objects. Clicking Next takes us to the data move page, where we again choose Offline to generate files.
Once we have chosen our options, we click Next and review the summary.
Finally, we click Finish and the files are generated in the output directory we specified when setting up the project in part 2.
Now, let's go and see what we generated. If we go to the output directory we specified in the project, we can see the list of files we generated. Remember the options we chose for generation.
We also get the master.sql file opened in SQL Developer, which looks like this:

SET ECHO OFF
SET VERIFY OFF
SET FEEDBACK OFF
SET DEFINE ON
CLEAR SCREEN
set serveroutput on

COLUMN date_time NEW_VAL filename noprint;
SELECT to_char(systimestamp,'yyyy-mm-dd_hh24-mi-ssxff') date_time FROM DUAL;
spool democapture_&filename..log

PROMPT Creating Role
@role.sql

prompt creating user Emulation
@@Emulation/user.sql

prompt creating user dbo_Northwind
@@dbo_Northwind/user.sql

prompt creating user dbo_pubs
@@dbo_pubs/user.sql

prompt Building objects in Emulation
@@Emulation/master.sql

prompt Building objects in dbo_Northwind
@@dbo_Northwind/master.sql

prompt Building objects in dbo_pubs
@@dbo_pubs/master.sql

Now, let's run this file and create the users and objects. First, we choose a connection to run the script. This user must have the privileges to create users and all their ancillary objects.
We can run this script to create the users.  Notice the worksheet output showing the output of the files.
Once this is complete, we can create a connection in SQL Developer to any of the users created: dbo_Northwind, dbo_pubs and Emulation.
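The exact privileges such a connection needs depend on what was generated; as a minimal sketch, creating one might look like the following, where the user name and password are hypothetical and the grants may need to be extended for your schema:

```sql
-- Hypothetical admin user for running master.sql; extend the grants as needed
CREATE USER mig_admin IDENTIFIED BY "Change_Me1";
GRANT CREATE SESSION, CREATE USER, ALTER USER, DROP USER TO mig_admin;
GRANT CREATE ROLE, GRANT ANY ROLE, GRANT ANY PRIVILEGE TO mig_admin;
```

Alternatively, run the script from an existing DBA account; the point is only that a plain CONNECT/RESOURCE user cannot create the other users.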

We have now created the schema from the DDL which was generated. In the next and final episode, we will visit the data move: we will run the data move scripts on SQL Server and extract the data, which we can then load via SQL*Loader or external tables.

### APEX 5.0 - Page Designer

David Peake - Mon, 2013-10-07 20:05
Ever since we started showing off the new Page Designer, scheduled for release with Application Express 5.0, almost everyone has been hounding me for screen shots. Generally I never make future functionality public, as it changes often, and sometimes in very dramatic ways, between feature conception, initial implementation, and final deliverable. However, given how much of a "game-changer" this feature will be for all current APEX developers, I have released a slide deck on OTN: Application Express 5.0 - Page Designer.

Please review Oracle's Safe Harbor within the slide deck.

### Convert SQL Server to Oracle using files - Part 2

Barry McGillin - Mon, 2013-10-07 17:47
Now that we have the files generated and moved in part 1, we can start SQL Developer and load them. Start up SQL Developer and create a connection with the following privileges: CONNECT, RESOURCE and CREATE VIEW.
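As a sketch, creating such a repository owner might look like this; the user name and password are hypothetical, and the repository tables also need quota on the user's default tablespace:

```sql
-- Hypothetical owner for the SQL Developer migration repository
CREATE USER migrep IDENTIFIED BY "Change_Me1"
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE, CREATE VIEW TO migrep;
```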

When the connection is opened, right-click on it and choose Migration Repository, then Associate Migration Repository. This will create the repository in the connection.

Now we can start the migration wizard. You can do this either by going to the Tools menu and selecting Migrate from the Migration submenu, or by clicking the Migrate icon in the migration project navigator. The wizard will pop up and you can walk through the steps as outlined below.
Clicking the Next button takes us to the repository page, where we can choose the repository connection we just made.
On the next page, we need to create a project to hold the captured databases.
The output directory in the page above is where any log files or generated files will be placed. When we generate DDL or data move files, this is where they will go. The next page is the capture page. To use the files from part 1, we choose Offline, which shows the page below, asking us to select the offline capture file.
This offline capture file is in the zip file we brought over from the SQL Server machine. Browse to sqlserver2008.ocp. This file tells SQL Developer what to expect in the directory; it will look for the databases that have been unloaded.
When it's selected, SQL Developer parses the files and shows you a list of the databases you ran the offline capture scripts against in part 1.

Choose both databases and click next.
The next page shows a list of the SQL Server data types on the left and a list of equivalent Oracle data types on the right. You can choose a different type if you want, and you can also create a new mapping by clicking "Add New Rule".
The next page lists the objects to be translated. Because we have not captured anything yet, the best we can do is tell SQL Developer to translate everything. We can come back later and choose specific stored programs to convert and translate.

At this stage, we can click Proceed to Summary and then, once we have reviewed the summary page, Finish.
When Finish is pressed, SQL Developer captures the database metadata from the files and converts it to its Oracle equivalent.

When this completes, you will see a new node with the project name you chose earlier. If you click on it, you will get an editor on the right-hand side with a summary of the data captured and converted.

### Oracle Linux 5.10 channels are now published

Wim Coekaerts - Mon, 2013-10-07 16:54
We just released the Oracle Linux 5.10 channels on both http://public-yum.oracle.com and the Unbreakable Linux Network. ISOs will be updated on eDelivery in a few days. The channels are available immediately.

As many of you know, we now use a CDN to distribute the RPMs for public-yum globally, so you should have good bandwidth everywhere to freely access them.

### Convert SQL Server to Oracle using files - Part 1

Barry McGillin - Mon, 2013-10-07 16:45
Many people want to migrate their SQL Server databases but do not have direct network access to the database. In Oracle SQL Developer, we can migrate from SQL Server to Oracle either using a connection to SQL Server, or using files to extract the metadata from SQL Server and convert it to an Oracle equivalent.

Today, we'll show you how to use scripts to convert SQL Server. First, we start up SQL Developer, go to the Tools menu, then select Migration and Create Offline Capture Scripts.

When the dialog appears, choose SQL Server and the appropriate version. You will also need to choose a directory to put the scripts into.
This will generate a set of files which we will need to move to our SQL Server machine to run.
So on disk, these look like this.
Now, we can zip this up and FTP it to the SQL Server machine you want to migrate; in my case, I'll scp it to the machine.

Now, let's go to the SQL Server machine and run the scripts against the database. Looking below, I have opened up a command window, created a directory called blog, and moved the sqlserver.zip file into that directory.
Now we have the scripts on the SQL Server box, ready to run. It's important that when you run the scripts on a server, you always run them from the same place. The capture script takes a number of parameters.
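On the SQL Server machine, the driver is a batch script; an illustrative invocation follows, in which every argument is a placeholder (roughly: the SQL Server admin user, its password, the database to capture, and the server name). Check the header of the generated script for the exact script name and parameter order:

```
OMWB_OFFLINE_CAPTURE.BAT sa sa_password Northwind MSSQL-HOST
```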

This will unload the metadata from the database to flat files. You need to run the script once for each database you want to migrate. You'll see output like this as you go.

This is one run, for the Northwind database. I've run it again for the pubs database; let's look and see what files exist now.
Now, we go up a directory and zip all this up so we can move it to the machine where we will translate it.
Now, we can move that zip file. Take a look at it: it is very small for this demo, but even for a large system it stays manageable, because we are only capturing the metadata structure of the database. If you are working with a partner or SI, this is the file you will want to send them for analysis.

Ok, for those of you who are doing this right now, read on.

When you have the capture.zip file transferred, unzip it into a clean directory. We will use SQL Developer to convert these metadata files into DDL to create the new Oracle schema, and into data move scripts which can be used to unload the data from SQL Server and load it into Oracle.

Now, we use SQL Developer to load this data. We will need access to an Oracle database where we can create a schema to use as a repository. The repository holds the source database information and the converted data.

The next post will walk through SQL Developer loading these files and converting the metadata to an Oracle equivalent.

### Access Manager 11G Rel 2 and APEX 4.2

Frank van Bortel - Mon, 2013-10-07 11:06
There is some documentation regarding APEX and OAM, but it is flawed. Make sure APEX functions with standard (APEX user based) security, even through OAM; this means: Allow /APEX/**, Allow /i/**, Protect /apex/apex_authentication.callback. Page 9 states "OAM_REMOTE_USER with a value of $user.userid is created by default". Not true; just add it. What the extra entries are for is beyond me.

### ★ Oracle Database 12c In-Memory Option Explained

Eddie Awad - Mon, 2013-10-07 07:00

Jonathan Lewis explains the recently announced Oracle Database 12c in-memory option:

The in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.

The intent is to help systems which are mixed OLTP and DSS – which sometimes have many “extra” indexes to optimise DSS queries that affect the performance of the OLTP updates. With the in-memory columnar copy you should be able to drop many “DSS indexes”, thus improving OLTP response times – in effect the in-memory stuff behaves a bit like non-persistent bitmap indexing.

### Big Data News from Oracle OpenWorld 2013

Chen Shapira - Sun, 2013-10-06 20:00

Oracle OpenWorld concluded only a week ago, and I already feel hopelessly behind on posting news and impressions. Behind or not, I have news to share!

The most prominent feature announced at OpenWorld is the "In-Memory Option" for Oracle Database 12c. This option is essentially a new part of the SGA that caches tables in columnar format. This is expected to make data warehouse queries significantly faster and more efficient. I would have described the feature in more detail, but Jonathan Lewis gave a better overview in this forum discussion, so just go read his post.

Why am I excited about a feature that has nothing to do with Hadoop?

First, because I have a lot of experience with large data warehouses, so I know that big data often means large tables with only a few columns used in each query. And I know that in order to optimize these queries and avoid expensive disk reads every time a query runs, we build indexes on those columns, which makes data loading slow. The in-memory option will allow us to drop those indexes and just store the columns we need in memory.

Second, because I’m a huge fan of in-memory data warehouses, and am happy that Oracle is now making these feasible. Few TB of memory in a large server are no longer science fiction, which means that most of your data warehouse will soon fit in memory. Fast analytics for all! And what do you do with the data that won’t fit in memory? Perhaps store it in your Hadoop cluster.

Now that I’m done being excited about the big news, lets talk about small news that you probably didn’t notice but you should.

Oracle announced two cool new features for the Big Data Appliance. "Announced" may be a big word; Larry Ellison did not stand up on stage and talk about them. Instead, the features sneaked quietly into the last upgrade and appeared in the documentation.

Perfect Balance – If you use Hadoop as often as I do, you know how data skew can mess with query performance. You run a job with several reducers, each aggregates data for a subset of keys. Unless you took great care in partitioning your data, the data will not be evenly distributed between the reducers, usually because it wasn’t evenly distributed between the keys. As a result, you will spend 50% of the time waiting for that one last reducer to finish already.
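That skew is usually visible before the job even runs, with a quick aggregation over the intended reduce key; a Hive-flavoured sketch, with hypothetical table and column names:

```sql
-- Row count per intended reduce key; a few dominant keys predict one slow reducer
SELECT reduce_key, COUNT(*) AS cnt
FROM events
GROUP BY reduce_key
ORDER BY cnt DESC
LIMIT 10;
```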

Oracle’s Perfect Balance makes the “took great case in partitioning your data” part much much easier. This blog post is just a quick overview, not an in-depth blog post, so I won’t go into details of how this works (wait for my next post on this topic!). I’ll just mention that Perfect Balance can be used without any change to the application code, so if you are using BDA, there is no excuse not to use it.

And no excuse to play solitaire while waiting for the last reducer to finish.

Oracle XQuery for Hadoop – announced, but not officially released yet, which is why I'm pointing you at an Amis blog post; for now that's the best source of information. This feature, combined with the existing Oracle Loader for Hadoop, will allow running XQuery operations on XML stored in Hadoop, pushing the entire data processing down to MapReduce on the Hadoop cluster. Anyone who knows how slow, painful and CPU-intensive XML processing can be on an Oracle database server will appreciate this feature. I wish I had it a year ago when I had to ingest XML at a very high rate. It is also so cool that I'm a bit sorry we never developed more awesome XQuery capabilities for Hive and Impala. Can't wait for the release so I can try it!

During OpenWorld there was also additional exposure for existing, but perhaps not very well known, Oracle Big Data features: Hadoop for ODI, Hadoop for OBIEE, and using GoldenGate with Hadoop. I'll try to write more about those soon.

Meanwhile, let me know what you think of In-Memory, Perfect Balance and OXH.

Categories: DBA Blogs