Feed aggregator

Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

Antony Reynolds - Tue, 2013-10-08 17:14
Book: Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial.  A free download of Chapter 3 is available to help you get a feel for the style of the book.

A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B, interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

The book goes a little beyond a tutorial by providing best-practice suggestions, giving advice on the best way to do things in certain circumstances.

I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

The chapter on Preparing to Go-Live should be read closely before the go-live date; at the very least, pay attention to the “Purging data” section.

Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

The editing seems a little loose: the language is strange in places, and there are references to colors on black-and-white diagrams. But the content is solid and helpful to anyone tasked with implementing Oracle B2B.

★ Now Available for Download: Presentations from Oracle OpenWorld 2013

Eddie Awad - Tue, 2013-10-08 07:00

Head to the Oracle OpenWorld Content Catalog and start downloading your favorite sessions. No registration needed. Sessions will be available for download until March 2014.

Note that some presenters chose not to make their sessions available.

Via the Oracle OpenWorld Blog.


Convert SQL Server to Oracle using files - Part 4

Barry McGillin - Mon, 2013-10-07 21:30
This, the last part of a four-part tutorial, goes over the movement of data using files generated by Oracle SQL Developer.  In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent. In part 3, we generated DDL and ran it against an Oracle database.
Let's look now at the data move scripts we generated in an earlier part.  We need to zip up the files and copy them to the SQL Server machine to run.  The images below show the files moved to our SQL Server machine.  We go into the main directory under data move and run the batch file MicrosoftSQLServer_data.bat.  This batch file takes a number of parameters.
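The exact parameters are defined by the generated script itself; a hypothetical invocation (the parameter names and order here are illustrative only, so check the generated .bat file before running) might look like this:

REM Hypothetical example -- verify the real parameter list in the
REM generated MicrosoftSQLServer_data.bat before running.
cd datamove\2013-10-08_00-05-16
MicrosoftSQLServer_data.bat sa saPASSWORD SQLSERVER_HOST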

This script then unloads the data from the database for the databases selected earlier.  We can see the .dat files in the image above.  Now, we just need to transfer the data to the Oracle database machine for loading.  We can go back out to the main datamove directory and zip up the entire directory, including the scripts.  We then need to ftp that to the database machine.
Unzip the files on that machine and cd down into the main directory until you find a file called oracle_loader.sh.

We can run the script as below.  The output shows the exact result of running the oracle_loader.sh script on the data we have taken from SQL Server.


[oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$ sh ./oracle_loader.sh orcl blog blog
/scratch/datamove/2013-10-08_00-05-16/Northwind /scratch/datamove/2013-10-08_00-05-16
/scratch/datamove/2013-10-08_00-05-16/Northwind/dbo_Northwind /scratch/datamove/2013-10-08_00-05-16/Northwind

SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 7 18:58:42 2013

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Table altered.
Table altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Trigger altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
Table altered.
 Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:43 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8
Commit point reached - logical record count 9

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 49

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:44 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 53

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 77

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:45 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 4

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 768
Commit point reached - logical record count 832
Commit point reached - logical record count 896
Commit point reached - logical record count 960
Commit point reached - logical record count 1024
Commit point reached - logical record count 1088
Commit point reached - logical record count 1152
Commit point reached - logical record count 1216
Commit point reached - logical record count 1280
Commit point reached - logical record count 1344
Commit point reached - logical record count 1408
Commit point reached - logical record count 1472
Commit point reached - logical record count 1536
Commit point reached - logical record count 1600
Commit point reached - logical record count 1664
Commit point reached - logical record count 1728
Commit point reached - logical record count 1792
Commit point reached - logical record count 1856
Commit point reached - logical record count 1920
Commit point reached - logical record count 1984
Commit point reached - logical record count 2048
Commit point reached - logical record count 2112
Commit point reached - logical record count 2155

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:46 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 768
Commit point reached - logical record count 830

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 4
Commit point reached - logical record count 5
Commit point reached - logical record count 6
Commit point reached - logical record count 7
Commit point reached - logical record count 8
Commit point reached - logical record count 9
Commit point reached - logical record count 10
Commit point reached - logical record count 11
Commit point reached - logical record count 12
Commit point reached - logical record count 13
Commit point reached - logical record count 14
Commit point reached - logical record count 15
Commit point reached - logical record count 16
Commit point reached - logical record count 17
Commit point reached - logical record count 18
Commit point reached - logical record count 19
Commit point reached - logical record count 20
Commit point reached - logical record count 21
Commit point reached - logical record count 22
Commit point reached - logical record count 23
Commit point reached - logical record count 24
Commit point reached - logical record count 25
Commit point reached - logical record count 26
Commit point reached - logical record count 27
Commit point reached - logical record count 28
Commit point reached - logical record count 29

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 3

SQL*Loader: Release 11.2.0.2.0 - Production on Mon Oct 7 18:58:47 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 91

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
/scratch/datamove/2013-10-08_00-05-16/Northwind
/scratch/datamove/2013-10-08_00-05-16
[oracle@Unknown-08:00:27:c8:2a:1c 2013-10-08_00-05-16]$


We can now take a look at some data in the Oracle database by going to the dbo_Northwind connection we made earlier.
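For example, a quick sanity check from that connection (the table and column names below are the standard Northwind ones, assuming the migration created them unquoted):

-- Sanity-check the migrated data using Northwind's well-known tables.
SELECT COUNT(*) FROM dbo_Northwind.Customers;

SELECT OrderID, CustomerID, OrderDate
FROM   dbo_Northwind.Orders
WHERE  ROWNUM <= 10;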


And that's it. In these four parts, we generated capture scripts from SQL Developer to unload metadata from SQL Server. In part 2, we loaded the metadata and converted it into an Oracle version. In part 3, we generated the DDL and ran it, creating the new Oracle users. In part 4, we unloaded the data, copied it to the Oracle machine, and loaded it using the scripts we generated from Oracle SQL Developer.

Convert SQL Server to Oracle using files - Part 3

Barry McGillin - Mon, 2013-10-07 20:23
In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer.  We then converted the metadata into its Oracle equivalent.
In this episode we will generate DDL from our migration project.  Right now, we can see the Oracle objects in the Converted Database Objects node.
If we right-click on Converted Database Objects and choose Generate, we can generate DDL to create the Oracle schema and objects.

The wizard appears again with the introduction screen.  Clicking Next takes us directly to the Target Database screen.


Click on Offline to choose generation of files.  For specifics of how the files get generated, click on Advanced Options.

You can select how you want your files generated: all in one file, a file per object type, or a file per object. You can also choose the types of objects you want to generate and run.
In this demo, I will just generate tables, data, and supporting objects. Clicking Next takes us to the data move page, where we again choose Offline to generate files.
Choosing Advanced Options allows us to specify date masks and delimiters for the data unload.
Once we have chosen our options, we click Next and review the summary.
Finally, we click Finish and the files are generated in the output directory we specified when setting up the project in part 2.
Now, let's see what we generated.  If we go to the output directory we specified in the project, we can see the list of files generated; remember the options we chose for generation.
We also get the master.sql file opened in SQL Developer, which looks like this:


SET ECHO OFF
SET VERIFY OFF
SET FEEDBACK OFF
SET DEFINE ON
CLEAR SCREEN
set serveroutput on

COLUMN date_time NEW_VAL filename noprint;
SELECT to_char(systimestamp,'yyyy-mm-dd_hh24-mi-ssxff') date_time FROM DUAL;
spool democapture_&filename..log

-- Password file execution
@passworddefinition.sql

PROMPT Creating Role
@role.sql

prompt creating user Emulation
@@Emulation/user.sql

prompt creating user dbo_Northwind
@@dbo_Northwind/user.sql

prompt creating user dbo_pubs
@@dbo_pubs/user.sql

prompt Building objects in Emulation
@@Emulation/master.sql

prompt Building objects in dbo_Northwind
@@dbo_Northwind/master.sql

prompt Building objects in dbo_pubs
@@dbo_pubs/master.sql

Now, let's run this file and create the users and objects.  First, we choose a connection to run the script.  This user must have the privileges to create users and all their ancillary objects.
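In a demo you could simply use a DBA account such as SYSTEM; a minimal sketch of a dedicated account (the name and the exact set of grants are illustrative and depend on the objects being created) might be:

-- Hypothetical migration-admin account; adjust the grants to your needs.
CREATE USER mig_admin IDENTIFIED BY "a_password";
GRANT CREATE SESSION, CREATE USER, ALTER USER, DROP USER TO mig_admin;
GRANT CREATE ANY TABLE, CREATE ANY INDEX, CREATE ANY TRIGGER TO mig_admin;
GRANT GRANT ANY ROLE, GRANT ANY PRIVILEGE TO mig_admin;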
We can then run the script to create the users.  Notice the worksheet output showing the results of each file.
Once this is complete, we can create a connection in SQL Developer to any of the users created: dbo_Northwind, dbo_pubs, and Emulation.


Now we have created the schemas from the DDL which was generated. In the next and final episode, we will visit the data move: we will run the data move scripts on SQL Server and extract the data, which we can load via SQL*Loader or external tables.


APEX 5.0 - Page Designer

David Peake - Mon, 2013-10-07 20:05
Ever since we started showing off the new Page Designer, scheduled for release with Application Express 5.0, almost everyone has been hounding me for screen shots. Generally I never make future functionality public, as it changes often, and sometimes in very dramatic ways, between feature conception, initial implementation, and final deliverable. However, given how much of a "game-changer" this feature will be for all current APEX developers, I have released a slide deck on OTN: Application Express 5.0 - Page Designer.

Please review Oracle's Safe Harbor within the slide deck.

Convert SQL Server to Oracle using files - Part 2

Barry McGillin - Mon, 2013-10-07 17:47
OK, now that we have the files generated and moved in part 1, we can start SQL Developer to load them. Start up SQL Developer and create a connection for a user with the following privileges: CONNECT, RESOURCE, and CREATE VIEW.
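A sketch of creating such a repository owner from a DBA connection (the user name and password are placeholders):

-- Owner of the migration repository, with the privileges listed above.
CREATE USER migrep IDENTIFIED BY "migrep_pwd"
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE, CREATE VIEW TO migrep;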

When the connection is opened, right-click on it and choose Migration Repository, then Associate Migration Repository.  This will create the repository in the connection's schema.

Now we can start the migration wizard. You can do this either by going to the Tools menu and selecting Migrate from the Migration menu, or by selecting the Migrate icon from the migration project navigator.  The wizard pops up and you can walk through the steps as outlined below.
Clicking the Next button brings up the repository page, where we can choose the repository connection we just made.
On the next page we need to create a project to hold the captured databases.
The output directory on the page above is the directory where any log files or generated files will be placed; when we generate DDL or data move files, this is where they will go.  The next page is the capture page.  To use the files from part 1, we need to choose Offline, which shows the page below, asking us to select the offline capture file.
This offline capture file is in the zip file we brought over from SQL Server.  Browse to sqlserver2008.ocp.  This file tells SQL Developer what to expect in the directory; it will look for the databases that have been unloaded.
When it is selected, SQL Developer parses the files and shows you a list of the databases you ran the offline capture scripts for in part 1.

Choose both databases and click Next.
The next page shows a list of SQL Server data types on the left and a list of equivalent Oracle data types on the right.  You can choose a different type if you want, and you can also create a new mapping by clicking on "Add new Rule".
The next page lists the objects to be translated.  Because we have not captured anything yet, the best we can do is tell SQL Developer to translate everything.  We can come back later and choose specific stored programs to convert and translate.

At this stage, we can click Proceed to Summary, and then Finish once we have reviewed the summary page.
When Finish is pressed, SQL Developer captures the database metadata from the files and converts it to its Oracle equivalent.

When this completes, you will see a new node with the project name you chose earlier. If you click on it, you will get an editor on the right-hand side with a summary of the data captured and converted.


Oracle Linux 5.10 channels are now published

Wim Coekaerts - Mon, 2013-10-07 16:54
We just released Oracle Linux 5.10 channels on both http://public-yum.oracle.com and the Unbreakable Linux Network. ISOs are going to be updated on eDelivery in a few days. The channels are available immediately.

As many of you know, we are now using a CDN to distribute the RPMs for public-yum globally, so you should have good bandwidth everywhere to freely access the RPMs.

Convert SQL Server to Oracle using files - Part 1

Barry McGillin - Mon, 2013-10-07 16:45
Many people want to migrate their SQL Server databases but do not have direct network access to the database. With Oracle SQL Developer, we can migrate from SQL Server to Oracle using a connection to SQL Server, or using files to extract the metadata from SQL Server and convert it to an Oracle equivalent.

Today, we'll show you how to use scripts to convert SQL Server.  First we need to start up SQL Developer, choose the Tools menu, then select Migration and Create Offline Capture Scripts.

When the dialog appears, choose SQL Server and the appropriate version.  You will also need to choose a directory to put the scripts into.
This will generate a set of files which we will need to move to our SQL Server machine to run.
So on disk, these look like this.
Now, we can zip this up and ftp it to the SQL Server machine you want to migrate, or in my case, I'll scp it to the machine.

Now, let's go to SQL Server and run the scripts against the SQL Server database.  Looking below, I have opened up a command window, created a directory called blog, and moved the sqlserver.zip file into that directory.
Now we have the scripts on the SQL Server box, ready to run.  It's important that when you run the scripts on a server, you always run them from the same place.  The capture script takes a number of parameters:
OMWB_OFFLINE_CAPTURE sa superuser_password databasename server

  OMWB_OFFLINE_CAPTURE sa saPASSWORD DBNAME_TO_CAPTURE SQLSERVER_SERVER  

This will unload the metadata from the database to flat files.  You need to run the script once for each database you want to migrate.  You'll see something like this as you go.
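For the two databases used in this series, that means two runs (the password and server name below are placeholders):

REM Once per database; replace the password and server name with your own.
OMWB_OFFLINE_CAPTURE sa saPASSWORD Northwind MYSQLSERVER
OMWB_OFFLINE_CAPTURE sa saPASSWORD pubs MYSQLSERVER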


This is one run, for the Northwind database.  I've run it again for the pubs database; let's look and see what files exist now.
Now, we go up a directory and zip all this up so we can move it to the machine where we will translate it.
Now we can move that zip file.  Take a look at it: it is very small for this demo, but even for a large system we are only capturing the metadata structure of the database.  If you are working with a partner or SI, this is the file you will want to send them for analysis.

Ok, for those of you who are doing this right now, read on.

When you have the capture.zip file transferred, unzip it into a clean directory.  We will use SQL Developer on it to convert these metadata files into DDL to create the new Oracle schema, and into the data move scripts which can be used to unload the data from SQL Server and load it into Oracle.


Now, we use SQL Developer to load this data.  We will need access to an Oracle database to create a schema to use as a repository. The repository is used to hold the source database information and the converted data.

The next post will walk through SQL Developer loading these files and converting the metadata to an Oracle equivalent.

Access Manager 11G Rel 2 and APEX 4.2

Frank van Bortel - Mon, 2013-10-07 11:06
There is some documentation regarding APEX and OAM, but it is flawed. Make sure APEX functions with standard (APEX user-based) security, even through OAM; this means:

Allow /APEX/**
Allow /i/**
Protect /apex/apex_authentication.callback

Page 9 states "OAM_REMOTE_USER with a value of $user.userid is created by default". Not true; just add it. What the extra entries are for is beyond me, APEX will

★ Oracle Database 12c In-Memory Option Explained

Eddie Awad - Mon, 2013-10-07 07:00


Jonathan Lewis explains the recently announced Oracle Database 12c in-memory option:

The in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.

The intent is to help systems which are mixed OLTP and DSS – which sometimes have many “extra” indexes to optimise DSS queries that affect the performance of the OLTP updates. With the in-memory columnar copy you should be able to drop many “DSS indexes”, thus improving OLTP response times – in effect the in-memory stuff behaves a bit like non-persistent bitmap indexing.
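For reference: the option was only announced at the time of writing, but as it eventually shipped in 12.1.0.2, it is enabled per object with a simple DDL clause, along these lines:

-- Illustration only, using the syntax as eventually released in 12.1.0.2.
ALTER SYSTEM SET inmemory_size = 10G SCOPE=SPFILE; -- reserve the SGA area
ALTER TABLE sales INMEMORY;      -- keep a columnar copy of this table
ALTER TABLE sales NO INMEMORY;   -- revert to row format only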


Big Data News from Oracle OpenWorld 2013

Chen Shapira - Sun, 2013-10-06 20:00

Oracle OpenWorld concluded only a week ago, and I already feel hopelessly behind on posting news and impressions. Behind or not, I have news to share!

The most prominent feature announced at OpenWorld is the “In-Memory Option” for Oracle Database 12c.  This option is essentially a new part of the SGA that caches tables in column format. This is expected to make data warehouse queries significantly faster and more efficient. I would have described the feature in more detail, but Jonathan Lewis gave a better overview in this forum discussion, so just go read his post.

Why am I excited about a feature that has nothing to do with Hadoop?

First, because I have a lot of experience with large data warehouses. So I know that big data often means large tables but only a few columns used in each query. And I know that in order to optimize these queries, and to avoid expensive disk reads every time a query runs, we build indexes on those columns, which makes data loading slow. The in-memory option will allow us to drop those indexes and just store the columns we need in memory.

Second, because I'm a huge fan of in-memory data warehouses, and am happy that Oracle is now making these feasible. A few TB of memory in a large server is no longer science fiction, which means that most of your data warehouse will soon fit in memory. Fast analytics for all! And what do you do with the data that won't fit in memory? Perhaps store it in your Hadoop cluster.

Now that I'm done being excited about the big news, let's talk about smaller news that you probably didn't notice but should.

Oracle announced two cool new features for the Big Data Appliance. "Announced" may be a big word; Larry Ellison did not stand up on stage and talk about them. Instead the features sneaked quietly into the latest upgrade and appeared in the documentation.

Perfect Balance – If you use Hadoop as often as I do, you know how data skew can mess with query performance. You run a job with several reducers, each aggregating data for a subset of keys. Unless you took great care in partitioning your data, the data will not be evenly distributed between the reducers, usually because it wasn't evenly distributed between the keys. As a result, you will spend 50% of the time waiting for that one last reducer to finish.

Oracle's Perfect Balance makes the "took great care in partitioning your data" part much, much easier. This blog post is just a quick overview, not an in-depth post, so I won't go into details of how this works (wait for my next post on this topic!). I'll just mention that Perfect Balance can be used without any change to the application code, so if you are using BDA, there is no excuse not to use it.

And no excuse to play solitaire while waiting for the last reducer to finish.

Oracle XQuery for Hadoop – Announced but not officially released yet, which is why I'm pointing you at an Amis blog post. For now that's the best source of information about this feature. This feature, combined with the existing Oracle Loader for Hadoop, will allow running XQuery operations on XMLs stored in Hadoop, pushing the entire data processing down to MapReduce on the Hadoop cluster. Anyone who knows how slow, painful and CPU intensive XML processing can be on an Oracle database server will appreciate this feature. I wish I had it a year ago when I had to ingest XMLs at a very high rate. It is also so cool that I'm a bit sorry we never developed more awesome XQuery capabilities for Hive and Impala. Can't wait for the release so I can try it!

During OpenWorld there was also additional exposure for existing, but perhaps not very well known Oracle Big Data features – Hadoop for ODI, Hadoop for OBIEE and using GoldenGate with Hadoop. I’ll try to write more about those soon.

Meanwhile, let me know what you think of In-Memory, Perfect Balance and OXH.



Distributing tables evenly into groups using the SQL Model Clause

Rob van Wijk - Sun, 2013-10-06 16:59
My colleague Ronald Rood recently had a nice SQL challenge for me. He had to perform an export of all the tables in a schema the old-fashioned way and wanted to manually parallelize the operation. For that to work, all tables need to be assigned to a group. For the parallelization to work, the groups need to be balanced. If, say, all large tables are in one group, the parallelization won't
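The post is truncated in the feed. As a sketch of the balancing idea itself (a simple round-robin by descending size, not the MODEL-clause solution the post develops):

-- Rough approximation of balanced groups: deal the tables out
-- round-robin, largest first, across 4 export groups.
SELECT segment_name AS table_name,
       bytes,
       MOD(ROW_NUMBER() OVER (ORDER BY bytes DESC) - 1, 4) + 1 AS grp
FROM   user_segments
WHERE  segment_type = 'TABLE'
ORDER  BY grp, bytes DESC;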

SacredPy seeking collaborators

Catherine Devlin - Thu, 2013-10-03 09:21

I'm looking for collaborators who want to build web programming experience on an interesting project...

During my job search, I was contacted by Kai Schraml, a seminary graduate who wants to scratch an itch. Seminarians have a serious need to discuss, debate, and seek consensus on the translations of difficult texts, like sacred scriptures. But the software tools currently available for the purpose are closed-source and expensive. That just seems wrong - not just because seminary students are broke, but because of the nature of the texts themselves. After all, Jesus released his teachings under a very strong open-source license!*

So we're starting to work on an alternative, provisionally called "SacredPy". (It could be applied to any difficult texts, of course, so if Beowulf is sacred to you, have at it.) I'm quite employed now, but I'm dabbling at it a bit for the sheer interest and open-sourcey glory of it all. It's possible income could eventually come from this project - Kai could tell you more about the prospects - but certainly not soon, so this is no substitute for proper employment. But it might be a great resume builder for a new Python programmer. It looks like we'll most likely build something atop Askbot, a Django-based project, so if you'd like to move into the thriving "experienced Django developer" segment of the economy...

Let me know at moc.liamg@nilved.enirehtac and we'll talk!

* - Matthew 10:8 - δωρεὰν ἐλάβετε, δωρεὰν δότε ("Freely you have received, freely give")

Java Embedded Development

Anshu Sharma - Wed, 2013-10-02 17:41

The Internet of Things offers exciting possibilities for new enterprise applications. Once you have figured out what functionality you are going to offer and what devices your application has to interact with, you will have to get familiar with embedded Java development to get data out of the devices and into the data center for analytics, integration, etc. Also, increasingly a lot of analytics and processing is happening at or near the device, in gateways, for faster response and network usage optimization.

For partners who have traditionally developed applications that run entirely in data centers, the good news is that Java Embedded has mostly the same syntax as Java SE, but the APIs are a bit different, as they are constrained and optimized to run on devices with restricted processing capabilities and memory. The main Java Embedded products are Java Card, Java ME Embedded, Java SE Embedded, Java Embedded Suite, and Oracle Event Processing for Java Embedded. As you can guess, these separate SKUs offer more functionality in exchange for an increasing footprint. Here are some links for you to explore further:

Java Embedded OTN page - http://www.oracle.com/us/technologies/java/embedded/overview/index.html

Java Embedded community on java.net -  https://community.java.net/community/embedded

In Java SE 8 there are plans to unify some of the different SKUs of Java. Please see the JavaOne keynote to get a better idea: http://medianetwork.oracle.com/video/player/2685497644001

Install APEX’s Sample Packaged Application by Importing its SQL Script

Ittichai Chammavanijakul - Wed, 2013-10-02 15:11

If for some reason you cannot install an APEX sample packaged application via the Application Builder > Packaged Applications interface, you have the option of installing it by importing its SQL script (fxxxx.sql).

In my case, when installing via the Packaged Applications interface, I got the following error:

(Screenshot of the installation error.)

While working with the DBA and Oracle Support to resolve the root cause of this issue, I found that the installation scripts (like export files) of the packaged applications come with the APEX installation files under apex/core/packaged_apps.

ICHAMMA1:packaged_apps$ pwd
/Users/ichamma1/Downloads/apex/core/packaged_apps

ICHAMMA1:packaged_apps$ grep "prompt  APPLICATION" *.sql
f7000.sql:prompt  APPLICATION 7000 - Online Marketing Campaign Calendar
f7010.sql:prompt  APPLICATION 7010 - Decision Manager***
f7020.sql:prompt  APPLICATION 7020 - Asset Manager*
f7050.sql:prompt  APPLICATION 7050 - Opportunity Tracker ***
f7060.sql:prompt  APPLICATION 7060 - Bug Tracking***
f7090.sql:prompt  APPLICATION 7090 - Group Calendar ***
f7100.sql:prompt  APPLICATION 7100 - Artwork Catalog***
f7120.sql:prompt  APPLICATION 7120 - Expertise Tracker***
f7130.sql:prompt  APPLICATION 7130 - Community Requests ***
f7140.sql:prompt  APPLICATION 7140 - Incident Tracking***
f7150.sql:prompt  APPLICATION 7150 - Systems Catalog***
f7170.sql:prompt  APPLICATION 7170 - Customer Tracker***
f7190.sql:prompt  APPLICATION 7190 - Issue Tracker***
f7220.sql:prompt  APPLICATION 7220 - P-Track***
f7230.sql:prompt  APPLICATION 7230 - Data Model Repository Viewer*
f7240.sql:prompt  APPLICATION 7240 - Checklist Manager***
f7250.sql:prompt  APPLICATION 7250 - Data Reporter***
f7270.sql:prompt  APPLICATION 7270 - APEX Application Archive***
f7280.sql:prompt  APPLICATION 7280 - Survey Builder ***
f7290.sql:prompt  APPLICATION 7290 - Meeting Minutes***
f7300.sql:prompt  APPLICATION 7300 - Use Case Status***
f7600.sql:prompt  APPLICATION 7600 - Sample Access Control*
f7610.sql:prompt  APPLICATION 7610 - Sample Build Options*
f7650.sql:prompt  APPLICATION 7650 - Go Live Check List***
f7800.sql:prompt  APPLICATION 7800 - Brookstrut Sample Application ***
f7810.sql:prompt  APPLICATION 7810 - Sample Reporting***
f7820.sql:prompt  APPLICATION 7820 - Sample Calendar***
f7830.sql:prompt  APPLICATION 7830 - Sample Charts***
f7840.sql:prompt  APPLICATION 7840 - Sample Dynamic Actions***
f7850.sql:prompt  APPLICATION 7850 - Sample Data Loading***
f7860.sql:prompt  APPLICATION 7860 - Sample Master Detail***
f7870.sql:prompt  APPLICATION 7870 - Sample Forms and Grid Layout***
f7880.sql:prompt  APPLICATION 7880 - Sample Search***
f7890.sql:prompt  APPLICATION 7890 - Feedback ***
f7900.sql:prompt  APPLICATION 7900 - Sample Dialog***
f7910.sql:prompt  APPLICATION 7910 - Sample Trees***
f7920.sql:prompt  APPLICATION 7920 - Sample Lists***
f7930.sql:prompt  APPLICATION 7930 - Sample Wizards***
f7940.sql:prompt  APPLICATION 7940 - Sample Collections***
f7950.sql:prompt  APPLICATION 7950 - Sample Time Zones*
f7960.sql:prompt  APPLICATION 7960 - Sample File Upload and Download***
f7980.sql:prompt  APPLICATION 7980 - Sample RESTful Services***
f8950.sql:prompt  APPLICATION 8950 - Sample Database Application

The installation is then as simple as importing the script file (Application Builder > Import) or running it from SQL*Plus (with the proper security setup).
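A sketch of the SQL*Plus route using the APEX_APPLICATION_INSTALL API (the workspace and schema names below are placeholders):

-- Run as a user with APEX administration privileges; values are placeholders.
declare
  l_ws_id number;
begin
  select workspace_id into l_ws_id
  from   apex_workspaces
  where  workspace = 'MY_WORKSPACE';
  apex_application_install.set_workspace_id(l_ws_id);
  apex_application_install.generate_application_id;
  apex_application_install.generate_offset;
  apex_application_install.set_schema('MY_SCHEMA');
end;
/
@f7800.sql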


Oracle Linux 6 on Microsoft Azure

Wim Coekaerts - Wed, 2013-10-02 13:57
One of the great keynotes at Oracle OpenWorld last week was from Microsoft. You can watch the replay here. I think Brad did an awesome job: very engaging, and a very positive partner message. There was a lot of Oracle Linux talk in the Microsoft session, just awesome.

We have worked closely with Microsoft to ensure that we can deploy Oracle Linux inside their Azure platform (and also just in general on Hyper-V). Part of the work is to provide templates that include Oracle products such as the Oracle RDBMS and Oracle WebLogic on Oracle Linux in Azure. This is a concept similar to Oracle VM templates. You can go through the catalog on Azure, select a template, and a few minutes later you end up with a complete running virtual machine. These templates with Oracle products are available for both Windows and Oracle Linux environments.

Microsoft has a free trial offering, which I tried out last night (with my personal account); within a few minutes, and with no prior knowledge of how their environment works, I had an Oracle Linux 6 Update 4 instance up and running and was logged in using ssh. They have a very easy-to-navigate portal. We have configured Oracle Linux out of the box with public-yum for updates. So if you need an enterprise-grade Linux distribution on Azure that comes with free updates/errata and fast connectivity to the update servers, go use Oracle Linux. And the nice thing is, if you need support for some of those deployed VMs, you just pay for the VMs you want support for.
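From inside a freshly provisioned VM you can verify the update setup (the repo file name below is the usual one on Oracle Linux 6, though it may vary by image):

# The public-yum repository configuration ships in the image.
cat /etc/yum.repos.d/public-yum-ol6.repo
yum repolist enabled
yum update -y    # pull current errata straight from public-yum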

This is also nice for ISVs that want to provide their own application solutions on Azure: they can use Oracle Linux and embed it in their VM with their app; again, an enterprise-grade solution that can be used freely, without signing contracts with us, while staying current with updates and errata. If the ISV then wants support, they can resell Oracle Linux subscriptions. This is a very simple, open, hassle-free solution.

Modernizing Forms Talk

Gerd Volberg - Wed, 2013-10-02 03:27
The interview I wrote about two days ago was made in cooperation with my presentation "How to modernize Oracle Forms". And here is the video (50 minutes, in German):

The link to the slides used in the presentation is here. They are in German too, but I hope that chapters like "Installation Look and Feel" are easy enough to understand that nobody will have language problems with most parts of the presentation.

Try it
Gerd

Pagination with Couchbase

Tugdual Grall - Tue, 2013-10-01 03:00
If you have to deal with a large number of documents when running queries against a Couchbase cluster, it is important to use pagination to fetch rows page by page. You can find some information in the documentation in the chapter "Pagination", but I want to go into more detail, with sample code, in this article. For this example I will start by creating a simple view based on the beer-sample dataset, the
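The post is truncated in the feed; the gist of view-based pagination against the Couchbase view REST API looks like this (the design document and view names are illustrative):

# Page 1: ask for page size + 1 rows; the extra row seeds the next page.
curl 'http://localhost:8092/beer-sample/_design/beers/_view/by_name?limit=11'

# Page 2: restart exactly at the extra row's key and document id.
curl 'http://localhost:8092/beer-sample/_design/beers/_view/by_name?limit=11&startkey="<key_of_row_11>"&startkey_docid=<id_of_row_11>'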

My Oracle OpenWorld 2013 Presentations

Chen Shapira - Mon, 2013-09-30 16:47

Oracle OpenWorld was fantastic, as usual. The best show in San Francisco. This is the seventh year in a row that I'm attending – 3 times as an HP employee, 3 times as a Pythian employee, and now as a Clouderan. My life changes, but the event and people are always fantastic.

There will be a separate blog post about what I learned at the event, exciting new products, and my thoughts on them. But first, let me follow up on what I taught.

On Sunday afternoon, and then again on Thursday afternoon, I presented “Data Wrangling with Oracle Connectors for Hadoop”. I presented it twice because both Oracle and IOUG liked my abstract. I was surprised to discover that both audiences had no idea what “data wrangling” is! I appreciate the attendees; they trusted me enough to attend without even being sure what I was planning to talk about. In both sessions people came up with excellent questions, mentioning that they are current or future Cloudera customers. I absolutely loved it; what a great opportunity to connect with Hadoopers from all industries.

You can find the slides here: Data Wrangling with Oracle Connectors for Hadoop

On Monday, at OakTable World, I presented ETL on Hadoop. I presented it at Surge earlier this year, but this time I think I misjudged the fit of the content to the audience – I gave pretty technical tips on how to implement ETL on Hadoop to an audience with very little Hadoop experience. They were smart people and mostly followed along, but I should have kept my content at a more introductory level.

You can find the slides here: Scaling ETL with Hadoop

On Wednesday, I was fortunate to present with my former colleague Marc Fielding on SSDs and their use in Exadata. The topic is not very Hadoop related, but I love SSDs regardless and presenting with Marc was fun and the audience was highly engaged. I did get a lot of questions on SSDs and Hadoop, so I’ll consider writing about the topic in the future.

Marc has the latest version of the slides, but you can find an approximation here: Databases in a Solid State World.

Thanks again to everyone who attended, to all the customers who stopped to say hello and to everyone who was friendly and made the event fun. I hope to see you again next year.



SOA Upgrade 11.1.1.3 -> 11.1.1.6

Dave Best - Mon, 2013-09-30 14:50
We upgraded our SOA install from 11.1.1.3 to 11.1.1.6 and hit a few issues. One of the main issues was that after upgrading DEV our in-flight processes disappeared. We talked back and forth with Oracle; it was supposed to be supported, but for some reason it wasn't working for us. I know that from SOA 10g to SOA 11g in-flight processes are not supported as part of the upgrade. One of
