Feed aggregator

Share & Enjoy : Using a JDeveloper Project as an MDS Store

Antony Reynolds - Sat, 2013-10-12 01:24
Share & Enjoy : Sharing Resources through MDS

One of my favorite radio shows was The Hitchhiker's Guide to the Galaxy by the sadly departed Douglas Adams.  One of the characters, Marvin the Paranoid Android, was created by the Sirius Cybernetics Corporation, whose corporate song was entitled Share and Enjoy!  Just like using the products of the Sirius Cybernetics Corporation, reusing resources through MDS is not fun, but at least it is useful and avoids some problems in SOA deployments.  So in this blog post I am going to show you how to re-use SOA resources stored in MDS, using JDeveloper as a development tool.

The Plan

We would like to have some SOA resources such as WSDLs, XSDs, Schematron files, DVMs etc. stored in a shared location.  This gives us the following benefits:

  • Single source of truth for artifacts
  • Remove cross composite dependencies which can cause deployment and startup problems
  • Easier to find and reuse resources if stored in a single location

So we will store a WSDL and XSD in MDS, using a JDeveloper project to maintain the shared artifacts, File based MDS to access them from development, and Database based MDS to access them at runtime.  We will create the shared resources in a JDeveloper project and deploy them to MDS.  We will then deploy a project that exposes a service based on the WSDL.  Finally we will deploy a client project that invokes the service from the previous project, using the same MDS resources.

Creating Shared Resources in a JDeveloper Project

First let's create a JDeveloper project and put our shared resources into that project.  To do this:

  1. In a JDeveloper Application create a New Generic Project (File->New->All Technologies->General->Generic Project)
  2. In that project create a New Folder called apps (File->New->All Technologies->General->Folder) – It must be called apps for local File MDS to work correctly.
  3. In the project properties delete the existing Java Source Paths (Project Properties->Project Source Paths->Java Source Paths->Remove)
  4. In the project properties add a new Java Source Path pointing to the just created apps directory (Project Properties->Project Source Paths->Java Source Paths->Add)
    [Screenshot: Java Source Paths]

Having created the project we can now put our resources into that project, either copying them from other projects or creating them from scratch.

Create a SOA Bundle to Deploy to a SOA Instance

Having created our resources we now want to package them up for deployment to a SOA instance.  To do this we take the following steps.

  1. Create a new JAR deployment profile (Project Properties->Deployment->New->Jar File)
  2. In JAR Options uncheck the Include Manifest File
  3. In File Groups->Project Output->Contributors uncheck all existing contributors and check the Project Source Path
  4. Create a new SOA Bundle deployment profile (Application Properties->Deployment->New->SOA Bundle)
  5. In Dependencies select the project jar file from the previous steps.
    [Screenshot: SOA Bundle deployment profile]
  6. On Application Properties->Deployment unselect all options.
    [Screenshot: Application deployment options]

The bundle can now be deployed to the server by selecting Deploy from the Application Menu.
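
If you want to script the deployment instead, the ant-sca-deploy.xml script that ships with JDeveloper/SOA Suite can push the same bundle; a sketch only, with the host, credentials and bundle name as placeholders for your own values:

    ant -f $ORACLE_HOME/bin/ant-sca-deploy.xml deploy \
        -DserverURL=http://soahost:8001/soa-infra/deployer \
        -DsarLocation=sca_SharedResources_bundle.zip \
        -Doverwrite=true -Duser=weblogic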

Create a Database Based MDS Connection in JDeveloper

Having deployed our shared resources, it would be good to check they are where we expect them to be, so let's create a Database Based MDS Connection in JDeveloper to let us browse the deployed resources.

  1. Create a new MDS Connection (File->New->All Technologies->General->Connections->SOA-MDS Connection)
  2. Make the Connection Type DB Based MDS and choose the database Connection and partition.  The username of the connection will be the <PREFIX>_mds user and the MDS partition will be soa-infra.

Browse the repository to make sure that your resources deployed correctly under the apps folder.  Note that you can also use this browser to look at deployed composites.  You may find it interesting to look at the /deployed-composites/deployed-composites.xml file, which lists all deployed composites.

[Screenshot: browsing the DB based MDS repository]

    Create a File Based MDS Connection in JDeveloper

    We can now create a File based MDS connection to the project we just created.  A file based MDS connection allows us to work offline without a database or SOA server.  We will create a file based MDS that actually references the project we created earlier.

    1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
    2. Make the Connection Type File Based MDS and choose the MDS Root Folder to be the location of the JDeveloper project previously created (not the source directory, the top level project directory).
      [Screenshot: File Based MDS connection]

    We can browse the file based MDS using the IDE Connections Window in JDeveloper.  This lets us check that we can see the contents of the repository.

    Using File Based MDS

    Now that we have MDS set up both in the database and locally in the file system we can try using some resources in a composite.  To use a WSDL from the file based repository:

    1. Insert a new Web Service Reference or Service onto your composite.xml.
    2. Browse the Resource Palette for the WSDL in the File Based MDS connection and import it.
      [Screenshot: browsing the repository]
    3. Do not copy the resource into the project.
    4. If you are creating a reference, don't worry about the warning message; it can be fixed later.  Just say Yes, you do want to continue and create the reference.
      [Screenshot: concrete WSDL warning]

    Note that when you import a resource from an MDS connection it automatically adds a reference to that MDS into the application's adf-config.xml.  SOA applications do not deploy their adf-config.xml; they use it purely to help resolve oramds protocol references in SOA composites at design time.  At runtime the soa-infra application's adf-config.xml is used to help resolve oramds protocol references.
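
    For reference, the entry JDeveloper adds for a file based store looks roughly like this sketch; the id and metadata-path are examples and will differ in your application:

    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config xmlns="http://xmlns.oracle.com/mds/config">
        <persistence-config>
          <metadata-namespaces>
            <namespace path="/apps" metadata-store-usage="mstore-usage_1"/>
          </metadata-namespaces>
          <metadata-store-usages>
            <metadata-store-usage id="mstore-usage_1">
              <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
                <!-- example path: the project directory that contains the apps folder -->
                <property name="metadata-path" value="/home/dev/mywork/SharedResources"/>
              </metadata-store>
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>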

    The reason we set the file based MDS to point to the project directory rather than the apps directory underneath is that when we deploy SOA resources to MDS as a SOA Bundle, the resources are all placed under the apps MDS namespace.  To make sure that our file based MDS includes an apps namespace we have to rename the src directory to apps and then make sure that our file based MDS points to the directory above the new source directory.

    Patching Up References

    When we use an abstract WSDL as a service, the SOA infrastructure automatically adds binding and service information at run time.  An abstract WSDL used as a reference needs to have binding and service information added in order to compile successfully.  By default the imported MDS reference for an abstract WSDL will look like this:

    <reference name="Service3"
       ui:wsdlLocation="oramds:/apps/shared/WriteFileProcess.wsdl">
      <interface.wsdl interface="
    http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess# wsdl.interface(WriteFileProcess)"/>
      <binding.ws port="" location=""/>
    </reference>

    Note that the port and location properties of the binding are empty.  We need to replace the location with a runtime WSDL location that includes binding information; this can be obtained by getting the WSDL URL from the soa-infra application or from EM.  Be sure to remove any MDS instance strings from the URL.

    [Screenshot: endpoint information in EM]

    The port information is a little more complicated.  The first part of the string should be the target namespace of the service, usually the same as the first part of the interface attribute of the interface.wsdl element.  This is followed by #wsdl.endpoint and then, in parentheses, the service name and port name from the runtime WSDL, separated by a /.  The format should look like this:

    {Service Namespace}#wsdl.endpoint({Service Name}/{Port Name})

    So if we have a WSDL like this:

    <wsdl:definitions
       …
       targetNamespace="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess">
       …
       <wsdl:service name="writefileprocess_client_ep">
          <wsdl:port name="WriteFileProcess_pt"
                binding="client:WriteFileProcessBinding">
             <soap:address location=… />
          </wsdl:port>
       </wsdl:service>
    </wsdl:definitions>

    Then we get a binding.ws port like this:

    http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)

    Note that you don’t have to set actual values until deployment time.  The following binding information will allow the composite to compile in JDeveloper, although it will not run at runtime:

    <binding.ws port="dummy#wsdl.endpoint(dummy/dummy)" location=""/>

    The binding information can be changed in the configuration plan.  Deferring it means that a configuration plan is required in order to invoke the reference, which reduces the risk of deploying composites with references that point to the wrong environment.
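
    A configuration plan fragment for the example reference above might look like this sketch; the host, composite path and endpoint values are placeholders for your own environment:

    <?xml version="1.0" encoding="UTF-8"?>
    <SOAConfigPlan xmlns="http://schemas.oracle.com/soa/configplan">
      <composite name="*">
        <reference name="Service3">
          <binding type="ws">
            <attribute name="port">
              <replace>http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)</replace>
            </attribute>
            <attribute name="location">
              <replace>http://soahost:8001/soa-infra/services/default/SyncWriteFile/writefileprocess_client_ep</replace>
            </attribute>
          </binding>
        </reference>
      </composite>
    </SOAConfigPlan>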

    Summary

    In this blog post I have shown how to store resources in MDS so that they can be shared between composites.  The resources can be created in a JDeveloper project that doubles as an MDS file repository.  The MDS resources can be reused in composites.  If using an abstract WSDL from MDS I have also shown how to fix up the binding information so that at runtime the correct endpoint can be invoked.  Maybe it is more fun than dealing with the Sirius Cybernetics Corporation!

    Once again APEX helps ORACLE TEAM USA win the America's Cup

    David Peake - Fri, 2013-10-11 16:17

    I found some interesting pieces on how ORACLE TEAM USA used Application Express to help them win the 34th America's Cup in San Francisco.
    Of course these and other great links are available on the Oracle Application Express OTN page.


    Mobile App Designer

    Tim Dexter - Thu, 2013-10-10 14:45

    Back in August a new Oracle mobile solution jumped out of the gate, the Mobile App Designer (MAD). I seem to have been on the road every week for goodness knows how many weeks, but I have finally found some time this week to get down and work with it. It's pretty cool and, above all, it's capable of providing a platform-independent mobile reporting solution.

    But you already have a mobile application! Yep, and I think they both sit quite comfortably together. The Oracle BI Mobile Application is available from the App Store for Apple users. It's a great app: build reports, dashboards and BIP reports for your browser-based users, and your Apple app users can take advantage of them immediately.

    MAD takes the next step forward. Maybe you don't use or cannot use Apple mobile devices? Maybe you need to build something more specific around a business area that provides users with a richer experience, beyond what Answers and Dashboards can offer. However, you do not want to have to rely on the tech folks to build the mobile application; that's just piling more work on them. You also want to be platform agnostic, since you might have a mix of mobile platforms. MAD can help.

    For those of you that have already used the Online Template layout editor with BI Publisher, you already know how to build a mobile application. The MAD interface is essentially the online template builder user interface, tweaked for a mobile destination, i.e. a phone or tablet.

    You build your data model as you would normally, including the newer option to build a data model directly on an OBIEE subject area.

    Then start to create the 'pages' of your application and the content to sit on those pages. All the normal stuff: tables, pivot tables, charts, images, plus accordions, filters and repeating objects. On top of that is the ability to extend the visual objects that are available to users. Maps (Google or Oracle), D3 visuals, Gantt charts, org charts; if you can either write the code or leverage an existing JavaScript library, MAD has the extension framework to support it.

    You can build and test in a browser and then deploy to your own BI App Store. Users, on their mobile devices, can then subscribe to an application. They can open and interact with your app using their phone or tablet's interactive features just as they would with a native application.  As you update your app and add new features the changes will be picked up the next time your users open the application.

    Interested? Want to know more? The Oracle MAD home page has a ton of content including tutorials, etc. We are planning to dig into MAD in forthcoming posts. The geek in me wanted to be able to build plugins using D3 and other visuals. I have been working with Leslie on some of the documentation and we'll be sharing some of that 'plugin' doc and how-tos in the coming weeks.

    Categories: BI & Warehousing

    Multi-Colored SQL

    Scott Spendolini - Thu, 2013-10-10 07:14
    My OCD was delighted this morning when I came across a new feature of SQL Developer: Connection Color.  Brace yourselves, as this feature ain't much, but could have a huge impact on reducing accidental commands in the wrong schema or server.

    To use it, simply create or edit a database connection, and set the Connection Color to whichever color you choose:


    Once set, any and all windows associated with that connection will be outlined in that color.  That's it!  I already gleefully went through my connection list and associated different colors with different types of connections.  For example, our development schemas got green:


    While our production schemas got red:


    Now, no matter what type of object I am editing, I will instantly know which schema it belongs to based on the color of the border.  Simple AND effective!

    ★ Native JSON Support in Oracle Database 12c

    Eddie Awad - Thu, 2013-10-10 07:00


    If you want JSON support in your Oracle database today you can use PL/JSON, an open source JSON parser written in PL/SQL.

    However, as Marco reported from Oracle OpenWorld, native JSON support may be an upcoming new feature in Oracle Database 12c.

    This new feature allows the storage of JSON documents in table columns with existing data types like VARCHAR2, CLOB, RAW, BLOB and BFILE.

    A new check constraint makes sure only valid JSON is inserted.

    For example: CHECK (column IS JSON).

    New built-in operators allow you to work with stored JSON documents. For example, JSON_VALUE enables you to query JSON data and return the result as a SQL value. Other operators include JSON_QUERY, JSON_EXISTS and JSON_TABLE.
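
    Based on that description, usage might look something like this sketch; the table and JSON paths are made up for illustration:

    CREATE TABLE orders (
      id  NUMBER PRIMARY KEY,
      doc VARCHAR2(4000) CONSTRAINT doc_is_json CHECK (doc IS JSON)
    );

    -- extract a scalar from each stored document
    SELECT JSON_VALUE(doc, '$.customer.name') AS customer_name
    FROM   orders
    WHERE  JSON_EXISTS(doc, '$.items');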

    Cool stuff!

    © Eddie Awad's Blog, 2013.

    Multiple SOA Developers Using a Single Install

    Antony Reynolds - Wed, 2013-10-09 16:37
    Running Multiple SOA Developers from a Single Install

    A question just came up about how to run multiple developers from a single software install.  The objective is to have a single software installation on a shared server and then provide different OS users with the ability to create their own domains.  This is not a supported configuration but it is attractive for a development environment.

    Out of the Box

    Before we do anything special let's review the basic installation.

    • Oracle WebLogic Server 10.3.6 installed using oracle user in a Middleware Home
    • Oracle SOA Suite 11.1.1.7 installed using oracle user
    • Software installed with group oinstall
    • Developer users dev1, dev2 etc
      • Each developer user is a member of oinstall group and has access to the Middleware Home.
    Customizations

    To get this to work I did the following customizations:

    • In the Middleware Home make all user readable files/directories group readable and make all user executable files/directories group executable.
      • find $MW_HOME -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
      • find $MW_HOME -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args
    Domain Creation

    When creating a domain for a developer note the following:

    • Each developer will need their own FMW repository, perhaps prefixed by their username, e.g. dev1, dev2 etc.
    • Each developer needs to use a unique port number for all WebLogic channels
    • Any use of Coherence should use Well Known Addresses to avoid cross talk between developer clusters (note SOA and OSB both use Coherence!) – see the sketch after this list
    • If using Node Manager each developer will need their own instance, using their own configuration.
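
    For the Coherence point, each developer's managed servers can be pinned to their own cluster with well known address system properties in the server start arguments; a sketch only, with host and port values as examples (dev2 would pick a different port):

    -Dtangosol.coherence.wka1=devhost -Dtangosol.coherence.wka1.port=8089
    -Dtangosol.coherence.localhost=devhost -Dtangosol.coherence.localport=8089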

    Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

    Antony Reynolds - Tue, 2013-10-08 17:14
    Book: Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

    Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial.  A free download of Chapter 3 is available to help you get a feel for the style of the book.

    A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

    The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on what is the best way to do things in certain circumstances.

    I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

    The chapter on Preparing to Go-Live should be read closely before the go-live date; at the very least, pay attention to the “Purging data” section.

    Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

    The editing seems a little loose, the language is strange in places and there are references to colors on black and white diagrams, but the content is solid and helpful to anyone tasked with implementing Oracle B2B.

    ★ Now Available for Download: Presentations from Oracle OpenWorld 2013

    Eddie Awad - Tue, 2013-10-08 07:00

    Head to the Content Catalog and start downloading your favorite sessions. No registration needed. Sessions will be available for download until March 2014.

    Note that some presenters chose not to make their sessions available.

    Via the Oracle OpenWorld Blog.

    © Eddie Awad's Blog, 2013.

    Convert SQLServer to Oracle using files - Part 4

    Barry McGillin - Mon, 2013-10-07 21:30
    This, the last part of a four-part tutorial, goes over the movement of data using files generated by Oracle SQL Developer.  In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent. In part 3, we were able to generate DDL and run this DDL against an Oracle database.
    Now let's look at the data move scripts we generated in an earlier part.  We need to zip up the files and copy them to the SQL Server machine to run.  The images below show the files moved to our SQL Server machine.  We go into the main directory under data move and run the batch file MicrosoftSQLServer_data.bat.  This batch file takes a number of parameters.

    This script then unloads the data from the database for the databases selected earlier.  We can see the dat files in the image above.  Now, we just need to go and transfer the data to the Oracle database machine for loading.  We can go back out to the main datamove directory and zip up the entire directory including the scripts.  We then need to ftp that to the database machine.  
    Unzip the files on that machine and cd into the main directory until you find a file called oracle_loader.sh.

    We can run the script as shown below.  The output is the exact result of running the oracle_loader.sh script on the data we have taken from SQL Server.


    [oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$ sh ./oracle_loader.sh orcl blog blog
    /scratch/datamove/2013-10-08_00-05-16/Northwind /scratch/datamove/2013-10-08_00-05-16
    /scratch/datamove/2013-10-08_00-05-16/Northwind/dbo_Northwind /scratch/datamove/2013-10-08_00-05-16/Northwind

    SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 7 18:58:42 2013

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Table altered.
    Table altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Trigger altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
    Table altered.
     Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:43 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8
    Commit point reached - logical record count 9

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 49

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 53

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 77

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 4

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 128
    Commit point reached - logical record count 192
    Commit point reached - logical record count 256
    Commit point reached - logical record count 320
    Commit point reached - logical record count 384
    Commit point reached - logical record count 448
    Commit point reached - logical record count 512
    Commit point reached - logical record count 576
    Commit point reached - logical record count 640
    Commit point reached - logical record count 704
    Commit point reached - logical record count 768
    Commit point reached - logical record count 832
    Commit point reached - logical record count 896
    Commit point reached - logical record count 960
    Commit point reached - logical record count 1024
    Commit point reached - logical record count 1088
    Commit point reached - logical record count 1152
    Commit point reached - logical record count 1216
    Commit point reached - logical record count 1280
    Commit point reached - logical record count 1344
    Commit point reached - logical record count 1408
    Commit point reached - logical record count 1472
    Commit point reached - logical record count 1536
    Commit point reached - logical record count 1600
    Commit point reached - logical record count 1664
    Commit point reached - logical record count 1728
    Commit point reached - logical record count 1792
    Commit point reached - logical record count 1856
    Commit point reached - logical record count 1920
    Commit point reached - logical record count 1984
    Commit point reached - logical record count 2048
    Commit point reached - logical record count 2112
    Commit point reached - logical record count 2155

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 128
    Commit point reached - logical record count 192
    Commit point reached - logical record count 256
    Commit point reached - logical record count 320
    Commit point reached - logical record count 384
    Commit point reached - logical record count 448
    Commit point reached - logical record count 512
    Commit point reached - logical record count 576
    Commit point reached - logical record count 640
    Commit point reached - logical record count 704
    Commit point reached - logical record count 768
    Commit point reached - logical record count 830

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    Commit point reached - logical record count 4
    Commit point reached - logical record count 5
    Commit point reached - logical record count 6
    Commit point reached - logical record count 7
    Commit point reached - logical record count 8
    Commit point reached - logical record count 9
    Commit point reached - logical record count 10
    Commit point reached - logical record count 11
    Commit point reached - logical record count 12
    Commit point reached - logical record count 13
    Commit point reached - logical record count 14
    Commit point reached - logical record count 15
    Commit point reached - logical record count 16
    Commit point reached - logical record count 17
    Commit point reached - logical record count 18
    Commit point reached - logical record count 19
    Commit point reached - logical record count 20
    Commit point reached - logical record count 21
    Commit point reached - logical record count 22
    Commit point reached - logical record count 23
    Commit point reached - logical record count 24
    Commit point reached - logical record count 25
    Commit point reached - logical record count 26
    Commit point reached - logical record count 27
    Commit point reached - logical record count 28
    Commit point reached - logical record count 29

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 3

    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    Commit point reached - logical record count 64
    Commit point reached - logical record count 91

    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    /scratch/datamove/2013-10-08_00-05-16/Northwind
    /scratch/datamove/2013-10-08_00-05-16
    [oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$


    We can now take a look at some data in the Oracle database by going to the dbo_Northwind connection we made earlier and browsing the data.


    And that's it. In these four parts we have generated capture scripts from SQL Developer to unload metadata from SQL Server. In part two, we loaded the metadata and converted it into an Oracle version. In part three, we generated the DDL and ran it, creating the new Oracle users. In part four, we unloaded the data and copied it to the Oracle machine.  We then loaded it using the scripts we generated from Oracle SQL Developer.

    Convert SQLServer to Oracle using files - Part 3

    Barry McGillin - Mon, 2013-10-07 20:23
    In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent.
    In this episode we will generate DDL from our migration project.  Right now, we can see the Oracle objects in the Converted Database Objects node.
    If we right click on Converted Database Objects and choose Generate, we can generate DDL to create the Oracle schema and objects.

    The wizard appears again with the introduction screen.  Clicking next takes us directly to the Target database Screen.


    Click on offline to choose generation of files.  For specifics of how the files get generated, click on advanced options.

     You can select which way you want to generate your files: all in one file, a file per object type, or a file per object. You can also choose the types of objects you want to generate and run.
     In this demo, I will just generate tables, data and supporting objects.  Clicking next will take us to the data move page, where we again choose offline to generate files.
     Choosing advanced options allows us to be specific about date masks and delimiters for data unload.
     Once we have chosen our options, we click next and review the summary.
     Finally, we click finish and the files are generated in the output directory we specified when setting up the project in part 2.
    Now, let's go and see what we generated.  If we go to the output directory we specified in the project, we can see the list of files we generated.  Remember the options we chose for generation.
    We also get the master.sql file opened in SQL Developer, which looks like this:


    SET ECHO OFF
    SET VERIFY OFF
    SET FEEDBACK OFF
    SET DEFINE ON
    CLEAR SCREEN
    set serveroutput on

    COLUMN date_time NEW_VAL filename noprint;
    SELECT to_char(systimestamp,'yyyy-mm-dd_hh24-mi-ssxff') date_time FROM DUAL;
    spool democapture_&filename..log

    -- Password file execution
    @passworddefinition.sql

    PROMPT Creating Role
    @role.sql

    prompt creating user Emulation
    @@Emulation/user.sql

    prompt creating user dbo_Northwind
    @@dbo_Northwind/user.sql

    prompt creating user dbo_pubs
    @@dbo_pubs/user.sql

    prompt Building objects in Emulation
    @@Emulation/master.sql

    prompt Building objects in dbo_Northwind
    @@dbo_Northwind/master.sql

    prompt Building objects in dbo_pubs
    @@dbo_pubs/master.sql

    Now, let's try to run this file and create the users and objects.  First, we choose a connection to run the script.  This user must have the privileges to create users and all their ancillary objects.
    We can run this script to create the users.  Notice the worksheet output showing the output of the files.
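
    If you prefer the command line, the generated script also runs under SQL*Plus; a sketch, assuming a suitably privileged account and that it is run from the generation output directory so the @@ relative paths resolve:

    cd /path/to/generation/output
    sqlplus system@orcl @master.sql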
    Once this is complete, we can create connections in SQL Developer to the users created: dbo_Northwind, dbo_pubs and Emulation.


    Now, we have created the schema from the DDL which was generated.  In the next and final episode, we will visit the data move.  We will run the data move scripts on SQL Server and extract the data, which we can load via SQL*Loader or external tables.


    APEX 5.0 - Page Designer

    David Peake - Mon, 2013-10-07 20:05
    Ever since we started showing off the new Page Designer scheduled for release with Application Express 5.0, almost everyone has been hounding me for screen shots. Generally I never make future functionality public as it changes often, and sometimes in very dramatic ways, between feature conception, initial implementation, and final deliverable. However, given how much of a "game-changer" this feature will be for all current APEX developers, I have released a slide deck on OTN: Application Express 5.0 - Page Designer.

    Please review Oracle's Safe Harbor within the slide deck.

    Convert SQL Server to Oracle using files - Part 2

    Barry McGillin - Mon, 2013-10-07 17:47
    OK, now that we have the files generated and moved in part 1, we can start SQL Developer to load them. Start up SQL Developer and create a connection with the following privileges: CONNECT, RESOURCE and CREATE VIEW.
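
    If you need to create that repository user first, a minimal sketch will do; the username and password here are placeholders, not anything SQL Developer mandates:

    CREATE USER migrep IDENTIFIED BY migrep;
    GRANT CONNECT, RESOURCE, CREATE VIEW TO migrep;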

    When the connection is opened, right click on it and choose Migration Repository, then Associate Migration Repository.  This will create the repository in the connection.

     Now, we can start the migration wizard. You can do this either by going to the Tools menu and selecting Migrate from the Migration menu, or by selecting the migrate icon from the migration project navigator.  The wizard will pop up and you can walk through the steps as outlined below.
     Clicking the next button brings up the repository page, on which we can choose the repository connection we just made.
     On the next page we need to create a project to hold the captured databases.
    The output directory in the page above is the directory where any log files or generated files will be placed.  When we generate DDL or data move files, this is where they will get generated.  The next page is the capture page.  To use the files from Part 1, we need to choose offline, which will then show the page below asking us to select the offline capture file.
     This offline capture file is in the zip file we brought over from SQL Server.  Browse to the sqlserver2008.ocp.  This file tells SQL Developer what to expect in the directory.  It will look for the databases that have been unloaded.
     When it's selected, SQL Developer parses the files and shows you a list of the databases you ran the offline capture scripts for in Part 1.

     Choose both databases and click next.
     The next page shows a list of the SQL Server data types on the left and a list of equivalent Oracle data types on the right.  You can choose a different type if you want, and you can also create a new mapping by clicking on "Add New Rule".
     The next page lists the objects to be translated.  Because we have not captured anything yet, the best we can do is tell SQL Developer to translate everything.  We can come back later and choose specific stored programs to convert and translate.

     At this stage, we can click Proceed to Summary and then Finish once we have reviewed the summary page.
     When finish is pressed, SQL Developer will capture the database metadata from the files and convert it to its Oracle equivalent.

     When this completes, you will see a new node with the project name you chose earlier. If you click on it, you will get an editor on the right hand side with a summary of the data captured and converted.


    Oracle Linux 5.10 channels are now published

    Wim Coekaerts - Mon, 2013-10-07 16:54
    We just released Oracle Linux 5.10 channels on both http://public-yum.oracle.com and on the Unbreakable Linux Network. ISOs are going to be updated on edelivery in a few days. The channels are available immediately.

    As many of you know, we are now using a CDN to distribute the RPMs for public-yum globally, so you should have good bandwidth everywhere to freely access them.

    Convert SQL Server to Oracle using files - Part 1

    Barry McGillin - Mon, 2013-10-07 16:45
    Many people want to migrate their SQL Server databases and do not have direct network access to the database. In Oracle SQL Developer, we can migrate from SQL Server to Oracle using a connection to SQL Server, or using files to extract the metadata from SQL Server and convert it to an Oracle equivalent.

    Today, we'll show you how to use scripts to convert SQL Server.  First we need to start up SQL Developer, choose the Tools menu, then select Migration and Create Offline Capture Scripts.

    When the dialog appears, choose the SQL Server and the appropriate version you want.  You will also need to choose a directory to put the scripts into.
    This will generate a set of files which we will need to move to our SQL Server machine to run.
    So on disk, these look like this.
    Now, we can zip this up and ftp it to the SQL Server machine you want to migrate, or in my case, I'll scp it to the machine.

    Now, let's go to SQL Server and run the scripts against the SQL Server database.  Looking below, I have opened up a command window, created a directory called blog and moved the sqlserver.zip file into that directory.
    Now, we have the scripts on the SQL Server box, ready to run.  It's important that when you run the scripts on a server, you always run them from the same place.  The capture script takes a number of parameters:
    OMWB_OFFLINE_CAPTURE sa superuser_password databasename server

    For example:

      OMWB_OFFLINE_CAPTURE sa saPASSWORD DBNAME_TO_CAPTURE SQLSERVER_SERVER

    This will unload the metadata from the database to flat files.  You need to run this script once for each database you want to migrate.  You'll see something like this as you go.


    This is one run for the Northwind database.  I've run it again for the pubs database; let's look and see what files exist now.
    Now, we go up a directory and zip all this up so we can move it to the machine where we will translate it.
    Now, we can move that zip file.  Take a look at it; it is very small in size for this demo, but even for a large system, we are only capturing the metadata structure of the database.  If you are working with a partner or SI, this is the file you will want to send them for analysis.

    Ok, for those of you who are doing this right now, read on.

    When you have the capture.zip file transferred, unzip it into a clean directory.  We will use SQL Developer on this to  convert these metadata files into DDL to create the new Oracle schema and the data move scripts which can be used to unload the data from SQL Server and load it into Oracle.


    Now, we use SQL Developer to load this data.  We will need access to an Oracle database to create a schema to use as a repository. The repository is used to hold the source database information and the converted data.

    The next post will walk through SQL Developer loading these files and converting the metadata to an Oracle equivalent.

    Access Manager 11G Rel 2 and APEX 4.2

    Frank van Bortel - Mon, 2013-10-07 11:06
    There is some documentation regarding APEX and OAM, but it is flawed. Make sure APEX functions with standard (APEX user based) security, even through OAM; this means: Allow /APEX/**, Allow /i/**, Protect /apex/apex_authentication.callback. Page 9 states "OAM_REMOTE_USER with a value of $user.userid is created by default". Not true; just add it. What the extra entries are for is beyond me; APEX will…

    ★ Oracle Database 12c In-Memory Option Explained

    Eddie Awad - Mon, 2013-10-07 07:00


    Jonathan Lewis explains the recently announced Oracle Database 12c in-memory option:

    The in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.

    The intent is to help systems which are mixed OLTP and DSS – which sometimes have many “extra” indexes to optimise DSS queries that affect the performance of the OLTP updates. With the in-memory columnar copy you should be able to drop many “DSS indexes”, thus improving OLTP response times – in effect the in-memory stuff behaves a bit like non-persistent bitmap indexing.
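
    The option had not shipped at the time of writing, but based on what Oracle has previewed, enabling it is expected to be a simple table attribute rather than an application rewrite; a sketch, with a made-up table and column:

    -- mark a table for population into the in-memory column store
    ALTER TABLE sales INMEMORY;

    -- columns that are never scanned analytically can be excluded
    ALTER TABLE sales INMEMORY NO INMEMORY (seller_notes);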

    © Eddie Awad's Blog, 2013.

    Big Data News from Oracle OpenWorld 2013

    Chen Shapira - Sun, 2013-10-06 20:00

    Oracle OpenWorld concluded only a week ago and I already feel hopelessly behind on posting news and impressions. Behind or not, I have news to share!

    The most prominent feature announced at OpenWorld is the “In-Memory Option” for Oracle Database 12c.  This option is essentially a new part of the SGA that caches tables in column format. This is expected to make data warehouse queries significantly faster and more efficient. I would have described the feature in more detail, but Jonathan Lewis gave a better overview in this forum discussion, so just go read his post.

    Why am I excited about a feature that has nothing to do with Hadoop?

    First, because I have a lot of experience with large data warehouses. So I know that big data often means large tables, but only a few columns used in each query. And I know that in order to optimize these queries and to avoid expensive disk reads every time each query runs, we build indexes on those columns, which makes data loading slow. The In-Memory option will allow us to drop those indexes and just store the columns we need in memory.

    Second, because I’m a huge fan of in-memory data warehouses, and am happy that Oracle is now making these feasible. A few TB of memory in a large server is no longer science fiction, which means that most of your data warehouse will soon fit in memory. Fast analytics for all! And what do you do with the data that won’t fit in memory? Perhaps store it in your Hadoop cluster.

    Now that I’m done being excited about the big news, let's talk about small news that you probably didn't notice but should.

    Oracle announced two cool new features for the Big Data Appliance. Announced may be a big word; Larry Ellison did not stand up on stage and talk about them. Instead the features sneaked quietly into the last upgrade and appeared in the documentation.

    Perfect Balance – If you use Hadoop as often as I do, you know how data skew can mess with query performance. You run a job with several reducers, each aggregates data for a subset of keys. Unless you took great care in partitioning your data, the data will not be evenly distributed between the reducers, usually because it wasn’t evenly distributed between the keys. As a result, you will spend 50% of the time waiting for that one last reducer to finish already.

    Oracle’s Perfect Balance makes the “took great care in partitioning your data” part much, much easier. This is just a quick overview, not an in-depth post, so I won’t go into details of how this works (wait for my next post on this topic!). I’ll just mention that Perfect Balance can be used without any change to the application code, so if you are using BDA, there is no excuse not to use it.

    And no excuse to play solitaire while waiting for the last reducer to finish.

    Oracle XQuery for Hadoop – Announced but not officially released yet, which is why I’m pointing you at an Amis blog post. For now that's the best source of information about this feature. This feature, combined with the existing Oracle Loader for Hadoop, will allow running XQuery operations on XMLs stored in Hadoop, pushing the entire data processing bit down to Map Reduce on the Hadoop cluster. Anyone who knows how slow, painful and CPU intensive XML processing can be on an Oracle database server will appreciate this feature. I wish I had it a year ago when I had to ingest XMLs at a very high rate. It is also so cool that I’m a bit sorry that we never developed more awesome XQuery capabilities for Hive and Impala. Can’t wait for the release so I can try it!

    During OpenWorld there was also additional exposure for existing, but perhaps not very well known Oracle Big Data features – Hadoop for ODI, Hadoop for OBIEE and using GoldenGate with Hadoop. I’ll try to write more about those soon.

    Meanwhile, let me know what you think of In-Memory, Perfect Balance and OXH.


    Categories: DBA Blogs

    Distributing tables evenly into groups using the SQL Model Clause

    Rob van Wijk - Sun, 2013-10-06 16:59
    My colleague Ronald Rood recently had a nice SQL challenge for me. He had to perform an export of all the tables in a schema the old-fashioned way and wanted to manually parallelize the operation. For that to work, all tables need to be assigned to a group. For the parallelization to work, the groups need to be balanced. If, say, all large tables are in one group, the parallelization won't…
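
    The Model clause solution itself is in the full post, which is truncated here.  As a flavour of the problem, a simple heuristic that deals tables round-robin over, say, four groups in descending size order already gets reasonably balanced groups; this is not Rob's Model-based approach, just a sketch of the problem:

    -- deal tables into 4 groups, largest first
    SELECT segment_name AS table_name,
           bytes,
           MOD(ROW_NUMBER() OVER (ORDER BY bytes DESC) - 1, 4) + 1 AS grp
    FROM   user_segments
    WHERE  segment_type = 'TABLE';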

    SacredPy seeking collaborators

    Catherine Devlin - Thu, 2013-10-03 09:21

    I'm looking for collaborators who want to build web programming experience on an interesting project...

    During my job search, I was contacted by Kai Schraml, a seminary graduate who wants to scratch an itch. Seminarians have a serious need to discuss, debate, and seek consensus on the translations of difficult texts, like sacred scriptures. But the software tools currently available for the purpose are closed-source and expensive. That just seems wrong - not just because seminary students are broke, but because of the nature of the texts themselves. After all, Jesus released his teachings under a very strong open-source license!*

    So we're starting to work on an alternative, provisionally called "SacredPy". (It could be applied to any difficult texts, of course, so if Beowulf is sacred to you, have at it.) I'm quite employed now, but I'm dabbling at it a bit for the sheer interest and open-sourcey glory of it all. It's possible income could eventually come from this project - Kai could tell you more about the prospects - but certainly not soon, so this is no substitute for proper employment. But it might be a great resume builder for a new Python programmer. It looks like we'll most likely build something atop Askbot, a Django-based project, so if you'd like to move into the thriving "experienced Django developer" segment of the economy...

    Let me know at moc.liamg@nilved.enirehtac and we'll talk!

    * - Matthew 10:8 - δωρεὰν ἐλάβετε, δωρεὰν δότε ("Freely you have received, freely give")

    Java Embedded Development

    Anshu Sharma - Wed, 2013-10-02 17:41

    The Internet of Things offers exciting possibilities for new enterprise applications. Once you have figured out what functionality you are going to offer and what devices your application has to interact with, you will have to get familiar with embedded Java development to get data out of the devices and into the datacenter for analytics, integration etc. Also, increasingly a lot of analytics and processing is happening at or near the device, in gateways, for faster response and network usage optimization.

    For partners who have traditionally developed applications running entirely in datacenters, the good news is that Java Embedded has mostly the same syntax as Java SE (used in datacenters), but the APIs are a bit different, as they are constrained and optimized to run on devices with restricted processing capabilities and memory. The main Java Embedded products are Java Card, Java ME Embedded, Java SE Embedded, Java Embedded Suite and Oracle Event Processing for Java Embedded. As you can guess, these separate SKUs offer more functionality in exchange for an increasing footprint. Here are some links for you to explore further:

    Java Embedded OTN page - http://www.oracle.com/us/technologies/java/embedded/overview/index.html

    Java Embedded community on java.net -  https://community.java.net/community/embedded

    In Java SE 8 there are plans to unify some of the different SKUs of Java. Please see the JavaOne keynote to get a better idea: http://medianetwork.oracle.com/video/player/2685497644001
