Feed aggregator

October 2013 Critical Patch Update Released

Oracle Security Team - Tue, 2013-10-15 14:42

Hello, this is Eric Maurice.

Oracle today released the October 2013 Critical Patch Update.  As previously announced, this Critical Patch Update is the first one to integrate Java SE.  In other words, moving forward, planned security fixes for Oracle products, including Java SE, will be released on the same schedule.  As a result, the average number of fixes delivered through each future Critical Patch Update is, at least for the foreseeable future, likely to be greater than in previous Critical Patch Updates.  As an additional reminder, Oracle publishes the release dates of future Critical Patch Updates a year in advance.  The release dates for the next four Critical Patch Update releases are located on the Security Alerts and Critical Patch Updates page at http://www.oracle.com/technetwork/topics/security/alerts-086861.html.

Today's Critical Patch Update provides 127 new security fixes across a wide variety of product families, including: Oracle Database, Oracle Fusion Middleware, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle iLearning, Oracle Industry Applications, Oracle FLEXCUBE, Oracle Primavera, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.

Out of these 127 new fixes, 2 are specifically for Oracle Database.  The maximum CVSS Base Score for the database vulnerabilities is 5.5.  One of these database vulnerabilities has already been fixed in all supported version and platform combinations.  The other vulnerability is remotely exploitable without authentication.  Fixing this second vulnerability requires that customers configure network encryption between their clients and servers if data is sent over untrusted networks (a configuration sketch follows below).  Note that Oracle recently updated its licensing agreement so that network encryption (native network encryption and SSL/TLS) and strong authentication services (Kerberos, PKI, and RADIUS) are no longer part of Oracle Advanced Security and are available in all licensed editions of all supported releases of the Oracle database.  Finally, note that 2 Oracle Fusion Middleware fixes are applicable to database deployments and as such are listed in the Database Risk Matrix of the Critical Patch Update advisory.
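For illustration, native network encryption is normally switched on through sqlnet.ora parameters on both sides of the connection.  The following is a minimal sketch, not taken from the advisory, and the algorithm named is an assumption; consult the Oracle Net documentation for the settings appropriate to your release:

# server-side sqlnet.ora (sketch; AES256 is an assumed algorithm choice)
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# client-side sqlnet.ora (sketch)
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)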

Out of these 127 fixes, 51 are for Java SE.  50 of the Java SE vulnerabilities fixed in this Critical Patch Update are remotely exploitable without authentication.  The maximum CVSS Base Score for these Java SE vulnerabilities is 10.0, which denotes a complete takeover of the targeted system (down to the operating system) in instances where Java executes with administrative privileges (i.e., system privileges).  Out of these 51 Java vulnerabilities, only 8 are applicable to both client and server deployments of Java.  40 apply only to client deployments of Java, including one which is exploitable only during Java client deployment.  One of the vulnerabilities applies to the jhat developer tool.  The last 2 vulnerabilities apply to sites that run the Javadoc tool as a service.  For more information about these Java vulnerabilities, see the security matrix for Java SE in the Critical Patch Update Advisory.

As a reminder, desktop users can leverage the Java Autoupdate or visit Java.com to ensure that they are running the most recent version of Java.  Java SE security fixes delivered through the Critical Patch Update program are cumulative.  In other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.   Oracle strongly recommends that Java users, particularly home users, keep up with Java releases so as to protect themselves against malicious exploitation of Java vulnerabilities. 

This Critical Patch Update release also provides 17 security fixes for Oracle Fusion Middleware, 12 of which are for vulnerabilities which are remotely exploitable without authentication.  The maximum CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5. 

4 new security fixes are for Oracle Enterprise Manager Grid Control.  All of these Enterprise Manager Grid Control vulnerabilities are remotely exploitable without authentication, and the maximum CVSS Base Score for these vulnerabilities is 4.3.

This Critical Patch Update release also provides 22 new security fixes for Oracle applications as follows: 1 for Oracle E-Business Suite, 2 for Oracle Supply Chain Products Suite, 8 for PeopleSoft Enterprise, 9 for Siebel CRM, and 2 for iLearning.  It furthermore provides 6 new security fixes for Oracle Industry Applications and 1 for Oracle Financial Services Software.  

Finally, this Critical Patch Update delivers 12 new security fixes for the Oracle and Sun Systems Products Suite.   5 of these vulnerabilities are remotely exploitable without authentication.   The maximum CVSS Base Score for these vulnerabilities is 6.9.

As usual, Oracle recommends that this Critical Patch Update be applied as soon as possible.  To a large extent, maintaining a proper security posture requires that organizations keep up to date with Oracle’s security patches and supported releases so as to take advantage of Oracle’s ongoing security assurance effort. 

For More Information:

The Critical Patch Updates and Security Alerts page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Advisory for the October 2013 Critical Patch Update is located at http://www.oracle.com/technetwork/topics/security/cpuoct2013-1899837.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/overview/index.html. 

Desktop users can make sure they run the most recent version of Java by visiting http://java.com/en/download/installed.jsp.

RTO and RPO

Mike Rothouse - Mon, 2013-10-14 14:17
What is the difference between Recovery Time Objective (RTO) and Recovery Point Objective (RPO)?  That question came up recently in a disaster recovery discussion.  A good explanation is provided in Dejan Kosutic's blog entry. In summary: Recovery Time Objective (RTO) is the “target time set for resumption of product, service or activity delivery after an incident”. […]

This blog is now closed.

Billy Cripe - Mon, 2013-10-14 13:14

Thank you for visiting.  This blog has been closed down and merged with the WebCenter Blog, which contains blog posts and other information about ECM, WebCenter Content, the content-enabling of business applications and other relevant topics.  Please be sure to visit and bookmark https://blogs.oracle.com/webcenter/ and subscribe to stay informed about these topics and many more.   From there, use the #ECM hashtag to narrow your focus to topics that are strictly related to ECM.

See you there! 

Categories: Fusion Middleware

T-Shirt slogans

Andrew Clarke - Mon, 2013-10-14 01:53
One of the Cloudera chaps at the Oracle Big Data meetup had a T-shirt with this cool slogan: "Data is the new bacon". Even as a vegetarian I can appreciate the humour. However I think it has a corollary, which would also make a good T-shirt: "Metadata is the new Kevin Bacon". Because metadata is the thing which connects us all.

Control Charts

Chen Shapira - Sun, 2013-10-13 22:43

Last week, while working on a customer engagement, I learned a new method of quantifying the behavior of time-series data. The method is called a "Control Chart", and credit goes to Josh Wills, our director of data science, for pointing it out. I thought I'd share it with my readers, as it's easy to understand, easy to implement, flexible and very useful in many situations.

The problem is ages old – you collect measurements over time and want to know when your measurements indicate abnormal behavior. "Abnormal" is not well defined, and that's on purpose – we want our method to be flexible enough to match whatever you define as an issue.

For example, let's say Facebook is interested in tracking the usage trend for each user, catching those whose use is decreasing.

There are a few steps to the Control Chart method:

  1. Collect all relevant data points. In our case, the number of minutes of Facebook use per day for each user.
  2. Calculate a baseline – this can be the average use for each user, the average use for similar demographics, or even an adaptive average of the type calculated by Oracle Enterprise Manager, to take into account decreased Facebook use over the weekend.
  3. Calculate "zones" one, two and three standard deviations around the baseline.

Those zones can be used to define rules for normal and abnormal behaviors of the system. These rules are what make the system valuable.
Examples of rules that define abnormal behavior can be (a SQL sketch of the first rule follows the list):

  1. Any point 3 standard deviations above the baseline. This indicates an extreme sudden increase.
  2. 7 consecutive measurements more than one standard deviation over the baseline. This indicates a sustained increase.
  3. 9 consecutive measurements, each higher than the previous one. This indicates a steady upward trend.
  4. 6 consecutive measurements, each more than two standard deviations away from the baseline, each one on a different side of the baseline than the previous measurement. This indicates instability of the system.
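Because the zones and rules are simple aggregates over each user's history, they are easy to compute in SQL. Below is a minimal sketch using Oracle analytic functions; the table usage_minutes(user_id, obs_date, minutes) is hypothetical and stands in for whatever measurement store you use:

-- hypothetical table: usage_minutes(user_id, obs_date, minutes)
SELECT user_id, obs_date, minutes
FROM (
  SELECT user_id,
         obs_date,
         minutes,
         AVG(minutes)    OVER (PARTITION BY user_id) AS baseline, -- step 2: baseline
         STDDEV(minutes) OVER (PARTITION BY user_id) AS sigma     -- step 3: zone width
  FROM   usage_minutes
)
WHERE minutes > baseline + 3 * sigma; -- rule 1: extreme sudden increase

The consecutive-measurement rules (2 through 4) can be layered on top of the same inline view with window functions such as LAG and running counts.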

There are even sets of standard rules used in various industries, best practices of sorts. The Western Electric rules and Nelson rules are particularly well known.

Note how flexible the method is – you can use any combination of rules that will highlight the abnormalities you are interested in.
Also note that while the traditional use indeed involves charts, the values and rules are very easy to calculate programmatically; visualization can be useful but is not mandatory.
If you measure CPU utilization on a few servers, visualizing the chart and actually seeing the server behavior can be useful. If you are Facebook and monitor user behavior, visualizing a time series for every one of millions of users is hopeless. Calculating baselines, standard deviations and rules for each user, however, is trivial.

Also note how this problem is "embarrassingly parallel". To calculate behavior for each user, you only need to look at data for that particular user. A parallel, shared-nothing platform like Hadoop can be used to scale the calculation indefinitely, simply by throwing an increasing number of servers at the problem. The only limit is the time it takes to calculate the rules for a single user.

Naturally, I didn't dive into some of the complexities of using Control Charts, such as how to select a good baseline, how to calculate the standard deviation (or whether to use another statistic to define the zones), and how many measurements should be examined before a behavior signals a trend. If you think this tool is useful for you, I encourage you to investigate more deeply.

 


Categories: DBA Blogs

'Medalling' in Humour

Gary Myers - Sun, 2013-10-13 03:31
Yesterday I competed, and came third, in the Humorous Speech contest of the Hawkesbury Division of Toastmasters. I'm pretty chuffed with that.

Partly, I'll blame Yuri for this. He recommended Toastmasters as a way of getting some extra public speaking skills. Luckily for me, there's a group that meets one lunchtime a week at work, which makes attendance relatively easy. Much easier than trying to fit in an evening on top of my kids' busy social and sporting calendars. I started going just before Christmas and became a paid up member in January this year.

Next, I'll blame Neil Sequeira, who prompted me to enter the club contest a month ago on the basis that attendees regularly laughed at my speeches... in a nice way. I won that, and then it dawned on me that I'd be going into the Area level contest. Our club was hosting that event, so I had a slight 'home ground' advantage, but I was still in front of a bunch of people I didn't know, most of whom have been honing their speaking skills for YEARS.

I won that, which meant going up to the Division level contest last night. That was in a church hall, unfamiliar territory for me. We were speaking from an elevated stage, and with a headset microphone. Getting into the big leagues.

I was a bit ruffled because my trip there was delayed by my phone unaccountably deciding it couldn't find any GPS signal, and refusing to tell me where I was or how to get where I was supposed to be. My destination was the other side of Middle Harbour, so my regular tactic of pointing the car vaguely in the right direction and hoping for the best was foiled by its inability to fly across the water. Resorting to my trusty and dusty Gregory's Street Directory, I made the 40 minute journey in a mere 80 minutes.

My speech starts with the other Gary Myers, multi-time champion of Summernats (which I've mentioned before in my blog) and land speed record holder. Oh, and candidate for one of the NSW Senate spots for Federal parliament. He didn't win, but a candidate for the same party did get a spot for Victoria. I suspect this came as somewhat of a surprise to him, as the Senate voting system is complex and highly unpredictable. An unemployed ex-sawmill worker who didn't own a suit and has kangaroo poo chucking as a hobby will be an interesting addition to the chamber.

I was more than happy with my third place finish in the contest. The winner, who also took the Table Topics contest top prize, was excellent. And second place went to Dave the Happy Singer, who took the mick out of homoeopathy and similar rubbish, so I won't criticise that. I get a small trophy and a certificate. And an update to my LinkedIn profile.

And, for posterity, my certificates:




Oracle Big Data Meetup - 09-OCT-2013

Andrew Clarke - Sat, 2013-10-12 12:19
The Oracle guys running the Big Data 4 the Enterprise Meetup are always apologetic about marketing. The novelty is quite amusing. They do this because most Big Data Meetups are full of brash young people from small start-ups who use cool open source software. They choose cool open source software partly because they're self-styled hackers who like being able to play with their software any way they choose. But mainly it is because the budgetary constraints of being a start-up mean they have to choose between a Clerkenwell office and Aeron chairs, or enterprise software licenses, and that's no choice at all.

But an Oracle Big Data meetup has a different constituency. We come from an enterprise background, we've all been using Oracle software for a long time and we know what to expect from an Oracle event. We're prepared to tolerate a certain amount of Oracle marketing because we want to hear the Oracle take on things, and we come prepared with our shields up. Apart from anything else, the Meetup sponsor is always cut some slack, in exchange for the beer'n'pizza.

Besides, the Oracle Big Data Appliance is quite an easy sell, certainly compared to the rest of the engineered systems. The Exa stack largely comprises machines which replace existing servers, whereas Big Data is a new requirement. Most Oracle shops probably don't have a pool of Linux/Java/network hackers on hand to cobble together a parallel cluster of machines and configure them to run Hadoop. A pre-configured Exadoop appliance with Oracle's imprimatur is just what those organisations need. The thing is, it seems a bit cheeky to charge a six figure sum for a box with a bunch of free software on it, no matter how good the box is. Particularly when it can be so hard to make the business case for a Big Data initiative.

Stephen Sheldon's presentation on Big Data Analytics As A Service addressed exactly this point. He works for Detica. They have stood up an Oracle BDA instance which they rent out for a couple of months to organisations who want to try a Big Data initiative. Detica provide a pool of data scientists and geeks to help out with the processing and analytics. At the end of the exercise the customer has a proven case showing whether Big Data can give them sufficiently valuable insights into their business. This strikes me as a very neat idea, one which other companies will wish they had thought of first.

Ian Sharp (one of the apologetic Oracle guys) presented on Oracle's Advanced Analytics. The big idea here is R embedded in the database. This gives data scientists access to orders of magnitude more data than they're used to having on their desktop R instances. Quants working in FS organisations will most likely have an accident when they realise just how great an idea this is. Unfortunately, Oracle R Enterprise is part of the Advanced Analytics option, so probably only the big FS companies will go for it. But the Oracle R distro is still quite neat, and free.

Mark Sampson from Cloudera rounded off the evening with a talk on a new offering, Cloudera Search. This basically provides a mechanism for building a Google / Amazon style search facility over a Hadoop cluster. The magic here is that Apache Solr is integrated into the Hadoop architecture instead of as a separate cluster, plus a UI building tool. I spent five years on a project which basically did this with an Oracle RDBMS, hand-rolled ETL and XML generators and lots of Java code plumbing an external search engine into the front-end. It was a great system, loved by its users and well worth the effort at the time. But I expect we could do the whole thing again in a couple of months with this tool set. Which is good news for the next wave of developers.

Some people regard attending technical meetups as a bit odd. I mean, giving up your free time to listen to a bunch of presentations on work matters? But if you find this stuff interesting you can't help yourself. And if you work with Oracle tech and are interested in data, then this meetup is definitely worth a couple of hours of your free time.

Share & Enjoy : Using a JDeveloper Project as an MDS Store

Antony Reynolds - Sat, 2013-10-12 01:24
Share & Enjoy : Sharing Resources through MDS

One of my favorite radio shows was The Hitchhiker's Guide to the Galaxy by the sadly departed Douglas Adams.  One of the characters, Marvin the Paranoid Android, was created by the Sirius Cybernetics Corporation, whose corporate song was entitled Share and Enjoy!  Just like using the products of the Sirius Cybernetics Corporation, reusing resources through MDS is not fun, but at least it is useful and avoids some problems in SOA deployments.  So in this blog post I am going to show you how to re-use SOA resources stored in MDS, using JDeveloper as a development tool.

The Plan

We would like to have some SOA resources such as WSDLs, XSDs, Schematron files, DVMs etc. stored in a shared location.  This gives us the following benefits:

  • Single source of truth for artifacts
  • Remove cross composite dependencies which can cause deployment and startup problems
  • Easier to find and reuse resources if stored in a single location

So we will store a WSDL and XSD in MDS, using a JDeveloper project to maintain the shared artifacts, file based MDS to access them from development, and database based MDS to access them at runtime.  We will create the shared resources in a JDeveloper project and deploy them to MDS.  We will then deploy a project that exposes a service based on the WSDL.  Finally we will deploy a client project, calling the previous project's service, that uses the same MDS resources.

Creating Shared Resources in a JDeveloper Project

First let's create a JDeveloper project and put our shared resources into that project.  To do this:

  1. In a JDeveloper Application create a New Generic Project (File->New->All Technologies->General->Generic Project)
  2. In that project create a New Folder called apps (File->New->All Technologies->General->Folder) – It must be called apps for local File MDS to work correctly.
  3. In the project properties delete the existing Java Source Paths (Project Properties->Project Source Paths->Java Source Paths->Remove)
  4. In the project properties add a new Java Source Path pointing to the just created apps directory (Project Properties->Project Source Paths->Java Source Paths->Add)
    [Screenshot: JavaSourcePaths]

Having created the project we can now put our resources into that project, either copying them from other projects or creating them from scratch.

Create a SOA Bundle to Deploy to a SOA Instance

Having created our resources we now want to package them up for deployment to a SOA instance.  To do this we take the following steps.

  1. Create a new JAR deployment profile (Project Properties->Deployment->New->Jar File)
  2. In JAR Options uncheck the Include Manifest File
  3. In File Groups->Project Output->Contributors uncheck all existing contributors and check the Project Source Path
  4. Create a new SOA Bundle deployment profile (Application Properties->Deployment->New->SOA Bundle)
  5. In Dependencies select the project jar file from the previous steps.
    [Screenshot: SOABundle]
  6. On Application Properties->Deployment unselect all options.
    [Screenshot: SOABundle2]

The bundle can now be deployed to the server by selecting Deploy from the Application Menu.

Create a Database Based MDS Connection in JDeveloper

Having deployed our shared resources, it would be good to check that they are where we expect them to be, so let's create a database based MDS connection in JDeveloper to browse the deployed resources.

  1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
  2. Make the Connection Type DB Based MDS and choose the database connection and partition.  The username of the connection will be the <PREFIX>_mds user and the MDS partition will be soa-infra.

Browse the repository to make sure that your resources deployed correctly under the apps folder.  Note that you can also use this browser to look at deployed composites.  You may find it interesting to look at the /deployed-composites/deployed-composites.xml file, which lists all deployed composites.

[Screenshot: DbMDSbrowse]

    Create a File Based MDS Connection in JDeveloper

    We can now create a File based MDS connection to the project we just created.  A file based MDS connection allows us to work offline without a database or SOA server.  We will create a file based MDS that actually references the project we created earlier.

    1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
    2. Make the Connection Type File Based MDS and choose the MDS Root Folder to be the location of the JDeveloper project previously created (not the source directory, the top level project directory).
      [Screenshot: FileMDS]

    We can browse the file based MDS using the IDE Connections Window in JDeveloper.  This lets us check that we can see the contents of the repository.

    Using File Based MDS

    Now that we have MDS set up both in the database and locally in the file system we can try using some resources in a composite.  To use a WSDL from the file based repository:

    1. Insert a new Web Service Reference or Service onto your composite.xml.
    2. Browse the Resource Palette for the WSDL in the File Based MDS connection and import it.
      [Screenshot: BrowseRepository]
    3. Do not copy the resource into the project.
    4. If you are creating a reference, don't worry about the warning message; it can be fixed later.  Just say Yes, you do want to continue and create the reference.
      [Screenshot: ConcreteWSDLWarning]

    Note that when you import a resource from an MDS connection it automatically adds a reference to that MDS into the application's adf-config.xml.  SOA applications do not deploy their adf-config.xml; they use it purely to help resolve oramds protocol references in SOA composites at design time.  At runtime the soa-infra application's adf-config.xml is used to help resolve oramds protocol references.
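    For reference, the entry JDeveloper adds to adf-config.xml maps the /apps namespace to a metadata store.  The sketch below is a hand-written approximation rather than a generated file; the store path is invented and the exact layout varies between JDeveloper versions:

    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config xmlns="http://xmlns.oracle.com/mds/config">
        <persistence-config>
          <metadata-namespaces>
            <!-- resolve oramds:/apps/... references from the store below -->
            <namespace path="/apps" metadata-store-usage="mstore-usage_1"/>
          </metadata-namespaces>
          <metadata-store-usages>
            <metadata-store-usage id="mstore-usage_1">
              <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
                <!-- hypothetical path: the project directory above the apps folder -->
                <property name="metadata-path" value="/home/dev/SharedResources"/>
              </metadata-store>
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>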

    The reason we set the file based MDS to point to the project directory rather than the apps directory underneath is that when we deploy SOA resources to MDS as a SOA bundle, the resources are all placed under the apps MDS namespace.  To make sure that our file based MDS includes an apps namespace, we rename the src directory to apps and then point the file based MDS at the directory above the new source directory.

    Patching Up References

    When we use an abstract WSDL as a service, the SOA infrastructure automatically adds binding and service information at run time.  An abstract WSDL used as a reference needs to have binding and service information added in order to compile successfully.  By default the imported MDS reference for an abstract WSDL will look like this:

    <reference name="Service3"
               ui:wsdlLocation="oramds:/apps/shared/WriteFileProcess.wsdl">
      <interface.wsdl interface="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.interface(WriteFileProcess)"/>
      <binding.ws port="" location=""/>
    </reference>

    Note that the port and location properties of the binding are empty.  We need to replace the location with a runtime WSDL location that includes binding information; this can be obtained by getting the WSDL URL from the soa-infra application or from EM.  Be sure to remove any MDS instance strings from the URL.

    [Screenshot: EndpointInfo]

    The port information is a little more complicated.  The first part of the string should be the target namespace of the service, usually the same as the first part of the interface attribute of the interface.wsdl element.  This is followed by #wsdl.endpoint and then, in parentheses, the service name and port name from the runtime WSDL, separated by a /.  The format should look like this:

    {Service Namespace}#wsdl.endpoint({Service Name}/{Port Name})

    So if we have a WSDL like this:

    <wsdl:definitions
       …
       targetNamespace="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess">
       …
       <wsdl:service name="writefileprocess_client_ep">
          <wsdl:port name="WriteFileProcess_pt"
                binding="client:WriteFileProcessBinding">
             <soap:address location=… />
          </wsdl:port>
       </wsdl:service>
    </wsdl:definitions>

    Then we get a binding.ws port like this:

    http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)

    Note that you don't have to set the actual values until deployment time.  The following binding information will allow the composite to compile in JDeveloper, although it will not run in the runtime:

    <binding.ws port="dummy#wsdl.endpoint(dummy/dummy)" location=""/>

    The binding information can be changed in the configuration plan.  Deferring it this way means that you must have a configuration plan in order to invoke the reference, which reduces the risk of deploying composites whose references point to the wrong environment.
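    For illustration, the kind of config plan entry that patches the reference looks roughly like the sketch below.  The endpoint URL is invented for this example; take the exact element layout from a config plan generated for your own composite:

    <SOAConfigPlan xmlns="http://schemas.oracle.com/soa/configplan">
      <composite name="*">
        <reference name="Service3">
          <binding type="ws">
            <attribute name="port">
              <replace>http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)</replace>
            </attribute>
            <attribute name="location">
              <!-- invented host/port; use the runtime WSDL URL from EM -->
              <replace>http://soahost:8001/soa-infra/services/default/SyncWriteFile/writefileprocess_client_ep?WSDL</replace>
            </attribute>
          </binding>
        </reference>
      </composite>
    </SOAConfigPlan>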

    Summary

    In this blog post I have shown how to store resources in MDS so that they can be shared between composites.  The resources can be created in a JDeveloper project that doubles as an MDS file repository, and then reused in composites.  When using an abstract WSDL from MDS, I have also shown how to fix up the binding information so that the correct endpoint can be invoked at runtime.  Maybe it is more fun than dealing with the Sirius Cybernetics Corporation!

    Once again APEX helps ORACLE TEAM USA win the America's Cup

    David Peake - Fri, 2013-10-11 16:17

    Found some interesting pieces on how ORACLE TEAM USA used Application Express to help them win the 34th America's Cup in San Francisco.
    Of course these and other great links are available on the Oracle Application Express OTN page.


    Mobile App Designer

    Tim Dexter - Thu, 2013-10-10 14:45

    Back in August a new Oracle mobile solution jumped out of the gate: the Mobile App Designer (MAD).  I seem to have been on the road every week for goodness knows how many weeks, but I have finally found some time this week to sit down and work with it.  It's pretty cool and, above all, it's capable of providing a platform-independent mobile reporting solution.

    But you already have a mobile application!  Yep, and I think they both sit quite comfortably together.  The Oracle BI Mobile application is available from the App Store for Apple users.  It's a great app: build reports, dashboards and BIP reports for your browser based users, and your Apple app users can take advantage of them immediately.

    MAD takes the next step forward.  Maybe you don't use, or cannot use, Apple mobile devices?  Maybe you need to build something more specific around a business area that provides users with a richer experience, beyond what Answers and Dashboards can offer?  However, you do not want to have to rely on the tech folks to build the mobile application; that's just piling more work on them.  You also want to be platform agnostic; you might have a mix of mobile platforms.  MAD can help.

    For those of you who have already used the Online Template layout editor with BI Publisher, you already know how to build a mobile application.  The MAD interface is essentially the online template builder user interface, tweaked for a mobile destination, i.e. a phone or tablet.

    You build your data model as you would normally, including the newer direct data model built on a subject area from OBIEE.

    Then start to create the 'pages' of your application and the content to sit on those pages: all the normal stuff (tables, pivot tables, charts, images) plus accordions, filters and repeating objects.  On top of that is the ability to extend the visual objects that are available to users: maps (Google or Oracle), D3 visuals, Gantt charts, org charts.  If you can either write the code or leverage an existing JavaScript library, MAD has the extension framework to support it.

    You can build and test in a browser and then deploy to your own BI App Store. Users, on their mobile devices, can then subscribe to an application. They can open and interact with your app using their phone or tablet's interactive features just as they would with a native application.  As you update your app and add new features the changes will be picked up the next time your users open the application.

    Interested?  Want to know more?  The Oracle MAD home page has a ton of content, including tutorials.  We are planning to dig into MAD in forthcoming posts.  The geek in me wanted to be able to build plugins using D3 and other visuals.  I have been working with Leslie on some of the documentation and we'll be sharing some of that 'plugin' doc and how-tos in the coming weeks.

    Categories: BI & Warehousing

    Multi-Colored SQL

    Scott Spendolini - Thu, 2013-10-10 07:14
    My OCD was delighted this morning when I came across a new feature of SQL Developer: Connection Color.  Brace yourselves, as this feature ain't much, but it could have a huge impact on reducing accidental commands in the wrong schema or server.

    To use it, simply create or edit a database connection, and set the Connection Color to whichever color you choose:


    Once set, any and all windows associated with that connection will be outlined in that color.  That's it!  I already gleefully went through my connection list and associated different colors with different types of connections.  For example, our development schemas got green:


    While our production schemas got red:


    Now, no matter what type of object I am editing, I will instantly know which schema it belongs to based on the color of the border.  Simple AND effective!

    ★ Native JSON Support in Oracle Database 12c

    Eddie Awad - Thu, 2013-10-10 07:00


    If you want JSON support in your Oracle database today you can use PL/JSON, an open source JSON parser written in PL/SQL.

    However, as Marco reported from Oracle OpenWorld, native JSON support may be an upcoming new feature in Oracle Database 12c.

    This new feature allows the storage of JSON documents in table columns with existing data types like VARCHAR2, CLOB, RAW, BLOB and BFILE.

    A new check constraint makes sure only valid JSON is inserted.

    For example: CHECK (column IS JSON).

    New built-in operators allow you to work with stored JSON documents. For example, JSON_VALUE enables you to query JSON data and return the result as a SQL value. Other operators include JSON_QUERY, JSON_EXISTS and JSON_TABLE.
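    To make this concrete, here is a speculative sketch of how the feature might be used, based on what was shown at OpenWorld.  The table, constraint name and JSON paths are invented for illustration, and the final syntax could change before release:

    CREATE TABLE orders (
      id  NUMBER PRIMARY KEY,
      doc CLOB CONSTRAINT orders_doc_is_json CHECK (doc IS JSON) -- reject invalid JSON
    );

    INSERT INTO orders VALUES (1, '{"customer":"Acme","total":99.50}');

    -- JSON_VALUE pulls a scalar out of the document as a SQL value
    SELECT JSON_VALUE(doc, '$.customer') AS customer
    FROM   orders
    WHERE  JSON_EXISTS(doc, '$.total');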

    Cool stuff!


    Multiple SOA Developers Using a Single Install

    Antony Reynolds - Wed, 2013-10-09 16:37
    Running Multiple SOA Developers from a Single Install

    A question just came up about how to run multiple developers from a single software install.  The objective is to have a single software installation on a shared server and then provide different OS users with the ability to create their own domains.  This is not a supported configuration but it is attractive for a development environment.

    Out of the Box

    Before we do anything special, let's review the basic installation.

    • Oracle WebLogic Server 10.3.6 installed using oracle user in a Middleware Home
    • Oracle SOA Suite 11.1.1.7 installed using oracle user
    • Software installed with group oinstall
    • Developer users dev1, dev2 etc
      • Each developer user is a member of oinstall group and has access to the Middleware Home.
    Customizations

    To get this to work I did the following customizations:

    • In the Middleware Home, make all user-readable files/directories group readable and all user-executable files/directories group executable:
      • find $MW_HOME -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
      • find $MW_HOME -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args
    Domain Creation

    When creating a domain for a developer note the following:

    • Each developer will need their own FMW repository, perhaps prefixed by their username, e.g. dev1, dev2 etc.
    • Each developer needs to use a unique port number for all WebLogic channels
    • Any use of Coherence should use Well Known Addresses to avoid cross talk between developer clusters (note SOA and OSB both use Coherence!); see the sketch after this list
    • If using Node Manager each developer will need their own instance, using their own configuration.
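    For illustration, one way to keep each developer's Coherence cluster private is to start each SOA server with well known address system properties.  This is a sketch only: the host name and port are invented, and the property names should be checked against the Coherence version shipped with your SOA release:

      -Dtangosol.coherence.wka1=devhost
      -Dtangosol.coherence.wka1.port=9088
      -Dtangosol.coherence.localhost=devhost
      -Dtangosol.coherence.localport=9088

    Each developer would use a different port so that their servers never discover another developer's caches.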

    Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

    Antony Reynolds - Tue, 2013-10-08 17:14
    Book: Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

    Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia, entitled Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial.  A free download of Chapter 3 is available to help you get a feel for the style of the book.

    A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

    The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on what is the best way to do things in certain circumstances.

    I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

    The chapter on Preparing to Go-Live should be read closely before the go-live date; at the very least, pay attention to the “Purging data” section.

    Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

    The editing seems a little loose, the language is strange in places and there are references to colors on black and white diagrams, but the content is solid and helpful to anyone tasked with implementing Oracle B2B.

    ★ Now Available for Download: Presentations from Oracle OpenWorld 2013

    Eddie Awad - Tue, 2013-10-08 07:00

    Head to the Content Catalog and start downloading your favorite sessions.  No registration needed.  Sessions will be available for download until March 2014.

    Note that some presenters chose not to make their sessions available.

    Via the Oracle OpenWorld Blog.


    Convert SQLServer to Oracle using files - Part 4

    Barry McGillin - Mon, 2013-10-07 21:30
    This, the last part of a four part tutorial, goes over the movement of data using files generated by Oracle SQL Developer.  In Part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine.  In Part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent.  In Part 3 we generated DDL and ran it against an Oracle database.
    Looking at the data move scripts that we generated in an earlier part, we need to zip up the files and copy them to the SQL Server machine to run.  Let's look at that now.  The images below show the files moved to our SQL Server machine.  We go into the main directory under data move and run the batch file MicrosoftSQLServer_data.bat.  This batch file takes a number of parameters.

    This script unloads the data from the database for the databases selected earlier.  We can see the .dat files in the image above.  Now we just need to transfer the data to the Oracle database machine for loading.  We can go back out to the main datamove directory and zip up the entire directory, including the scripts.  We then need to ftp that to the database machine.
    The files need to be unzipped on that machine; then cd down from the main directory until you find a file called oracle_loader.sh.

    We can run it as below.  The output shows the exact result of running the oracle_loader.sh script on the data we have taken from SQL Server.


    [oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$ sh ./oracle_loader.sh orcl blog blog
    /scratch/datamove/2013-10-08_00-05-16/Northwind /scratch/datamove/2013-10-08_00-05-16
    /scratch/datamove/2013-10-08_00-05-16/Northwind/dbo_Northwind /scratch/datamove/2013-10-08_00-05-16/Northwind

    SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 7 18:58:42 2013

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Table altered.
    Table altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
     Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:43 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8
    Commit point reached - logical record count 9

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 49

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 53

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 77

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 4

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 128
    Commit point reached - logical record count 192
    Commit point reached - logical record count 256
    Commit point reached - logical record count 320
    Commit point reached - logical record count 384
    Commit point reached - logical record count 448
    Commit point reached - logical record count 512
    Commit point reached - logical record count 576
    Commit point reached - logical record count 640
    Commit point reached - logical record count 704
    Commit point reached - logical record count 768
    Commit point reached - logical record count 832
    Commit point reached - logical record count 896
    Commit point reached - logical record count 960
    Commit point reached - logical record count 1024
    Commit point reached - logical record count 1088
    Commit point reached - logical record count 1152
    Commit point reached - logical record count 1216
    Commit point reached - logical record count 1280
    Commit point reached - logical record count 1344
    Commit point reached - logical record count 1408
    Commit point reached - logical record count 1472
    Commit point reached - logical record count 1536
    Commit point reached - logical record count 1600
    Commit point reached - logical record count 1664
    Commit point reached - logical record count 1728
    Commit point reached - logical record count 1792
    Commit point reached - logical record count 1856
    Commit point reached - logical record count 1920
    Commit point reached - logical record count 1984
    Commit point reached - logical record count 2048
    Commit point reached - logical record count 2112
    Commit point reached - logical record count 2155

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 128
    Commit point reached - logical record count 192
    Commit point reached - logical record count 256
    Commit point reached - logical record count 320
    Commit point reached - logical record count 384
    Commit point reached - logical record count 448
    Commit point reached - logical record count 512
    Commit point reached - logical record count 576
    Commit point reached - logical record count 640
    Commit point reached - logical record count 704
    Commit point reached - logical record count 768
    Commit point reached - logical record count 830

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8
    Commit point reached - logical record count 9
    Commit point reached - logical record count 10
    Commit point reached - logical record count 11
    Commit point reached - logical record count 12
    Commit point reached - logical record count 13
    Commit point reached - logical record count 14
    Commit point reached - logical record count 15
    Commit point reached - logical record count 16
    Commit point reached - logical record count 17
    Commit point reached - logical record count 18
    Commit point reached - logical record count 19
    Commit point reached - logical record count 20
    Commit point reached - logical record count 21
    Commit point reached - logical record count 22
    Commit point reached - logical record count 23
    Commit point reached - logical record count 24
    Commit point reached - logical record count 25
    Commit point reached - logical record count 26
    Commit point reached - logical record count 27
    Commit point reached - logical record count 28
    Commit point reached - logical record count 29

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 3

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 91

    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    /scratch/datamove/2013-10-08_00-05-16/Northwind
    /scratch/datamove/2013-10-08_00-05-16
    [oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$


    We can now take a look at some data in the Oracle database by going to the dbo_Northwind connection we made earlier and looking at the data.


    And that's it.  Over these four parts we have generated capture scripts from SQL Developer to unload metadata from SQL Server.  In Part 2 we loaded the metadata and converted it into an Oracle version.  In Part 3 we generated the DDL and ran it, creating the new Oracle users.  In Part 4 we unloaded the data, copied it to the Oracle machine and loaded it using the scripts we generated from Oracle SQL Developer.

    Convert SQLServer to Oracle using files - Part 3

    Barry McGillin - Mon, 2013-10-07 20:23
    In Part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine.  In Part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent.
    In this episode we will generate DDL from our migration project.  Right now, we can see the Oracle objects in the Converted Database Objects node.
    If we right click on Converted Database Objects and choose Generate, we can generate the DDL to create the Oracle schema and objects.

    The wizard appears again with the introduction screen.  Clicking Next takes us directly to the Target Database screen.


    Click on Offline to choose generation of files.  For specifics of how the files get generated, click on Advanced Options.

     You can select which way you want to generate your files: all in one file, a file per object type, or a file per object.  You can also choose the types of objects you want to generate and run.
     In this demo, I will just generate tables, data and supporting objects.  Clicking Next will take us to the data move page, where we again choose Offline to generate files.
     Choosing Advanced Options allows us to be specific about date masks and delimiters for the data unload.
     Once we have chosen our options, we click Next and review the summary.
     Finally, we click Finish and the files are generated in the output directory we specified when setting up the project in Part 2.
    Now let's go and see what we generated.  If we go to the output directory we specified in the project, we can see the list of files we generated.  Remember the options we chose for generation.
    We also get the master.sql file opened in SQL Developer, which looks like this:


    SET ECHO OFF
    SET VERIFY OFF
    SET FEEDBACK OFF
    SET DEFINE ON
    CLEAR SCREEN
    set serveroutput on

    COLUMN date_time NEW_VAL filename noprint;
    SELECT to_char(systimestamp,'yyyy-mm-dd_hh24-mi-ssxff') date_time FROM DUAL;
    spool democapture_&filename..log

    -- Password file execution
    @passworddefinition.sql

    PROMPT Creating Role
    @role.sql

    prompt creating user Emulation
    @@Emulation/user.sql

    prompt creating user dbo_Northwind
    @@dbo_Northwind/user.sql

    prompt creating user dbo_pubs
    @@dbo_pubs/user.sql

    prompt Building objects in Emulation
    @@Emulation/master.sql

    prompt Building objects in dbo_Northwind
    @@dbo_Northwind/master.sql

    prompt Building objects in dbo_pubs
    @@dbo_pubs/master.sql

    Now let's try to run this file and create the users and objects.  First, we choose a connection to run the script; this user must have the privileges to create users and all their ancillary objects.
    We can run this script to create the users.  Notice the worksheet output showing the output of the files.
    Once this is complete, we can create a connection in SQL Developer to one of the users created, dbo_Northwind, dbo_pubs and emulation.


    Now we have created the schema from the DDL which was generated.  In the next and final episode, we will visit the data move.  We will run the data move scripts on SQL Server and extract the data, which we can load via SQL*Loader or external tables.


    APEX 5.0 - Page Designer

    David Peake - Mon, 2013-10-07 20:05
    Ever since we started showing off the new Page Designer scheduled for release with Application Express 5.0, almost everyone has been hounding me for screenshots.  Generally I never make future functionality public, as it changes often, and sometimes in very dramatic ways, between feature conception, initial implementation, and final deliverable.  However, given how much of a "game-changer" this feature will be for all current APEX developers, I have released a slide deck on OTN: Application Express 5.0 - Page Designer.

    Please review Oracle's Safe Harbor within the slide deck.

    Convert SQL Server to Oracle using files - Part 2

    Barry McGillin - Mon, 2013-10-07 17:47
    OK, now that we have the files generated and moved in Part 1, we can start SQL Developer and load them.  Start up SQL Developer and create a connection with the following privileges: CONNECT, RESOURCE and CREATE VIEW.

    When the connection is opened, right click on it and choose Migration Repository, then Associate Migration Repository.  This will create the repository in the connection.

     Now we can start the migration wizard.  You can do this either by going to the Tools menu and selecting Migrate from the Migration submenu, or by selecting the migrate icon from the migration project navigator.  The wizard will pop up and you can walk through the steps as outlined below.
     Clicking the Next button brings up the repository page, where we choose the repository connection we just made.
     On the next page we create a project to hold the captured databases.
    The output directory in the page above is the directory where any log files or generated files will be placed.  When we generate DDL or data move files, this is where they will be created.  The next page is the capture page.  To use the files from Part 1, we choose Offline, which then shows the page below, asking us to select the offline capture file.
     This offline capture file is in the zip file we brought over from the SQL Server machine.  Browse to sqlserver2008.ocp.  This file tells SQL Developer what to expect in the directory; it will look for the databases that have been unloaded.
     When it is selected, SQL Developer parses the files and shows you a list of the databases you ran the offline capture scripts for in Part 1.

     Choose both databases and click Next.
     The next page shows a list of the SQL Server data types on the left and the equivalent Oracle data types on the right.  You can choose a different type if you want, and you can also create a new mapping by clicking on "Add New Rule".
     The next page lists the objects to be translated.  Because we have not captured anything yet, the best we can do is tell SQL Developer to translate everything.  We can come back later and choose specific stored programs to convert and translate.

     At this stage, we can click Proceed to Summary and then Finish once we have reviewed the summary page.
     When Finish is pressed, SQL Developer captures the database metadata from the files and converts it to its Oracle equivalent.

     When this completes, you will see a new node with the project name you chose earlier. If you click on it, you will get an editor on the right hand side with a summary of the data captured and converted.


    Oracle Linux 5.10 channels are now published

    Wim Coekaerts - Mon, 2013-10-07 16:54
    We just released the Oracle Linux 5.10 channels on both http://public-yum.oracle.com and the Unbreakable Linux Network.  ISOs will be updated on edelivery in a few days; the channels are available immediately.

    As many of you know, we are now using a CDN to distribute the RPMs for public-yum globally, so you should have good bandwidth everywhere to freely access the RPMs.
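    For anyone pointing an Oracle Linux 5 machine at public-yum, the setup is the usual one.  A sketch follows; the repo file name matches the public-yum convention, but the section id to enable (for example el5_u10_base) should be confirmed inside the downloaded file:

      cd /etc/yum.repos.d
      wget http://public-yum.oracle.com/public-yum-el5.repo
      # edit the file, set enabled=1 on the desired section, then:
      yum update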
