Feed aggregator

How To Do Single Sign On (SSO) for Web Services

Mark Wilcox - Wed, 2013-12-11 08:38

A recent question on our internal list was

"A customer has OAM and wants to do SSO to SOAP Web Services".

In this case the customer was using WebCenter Content (the product formerly known as Universal Content Management, UCM). But the scenario applies to any SOAP Web Service.

My answer was well received and there isn't anything proprietary here, so I thought I would share it to make it easier for people to find and for me to refer to later.

First - There is no such thing as SSO in web services.

There is only identity propagation.

Meaning that I log in to OAM as Fabrizio and connect to a Web application protected by OAM.

That Web application is a Web Services client, and I want the client to tell the Web Service that Fabrizio is using the service.

The first step to set this up is to protect the web services via OWSM.

The second step is to translate the OAM token into a WS-Security token.

There are three ways to do this second step:

1 - If you are writing a manual client and don't want any other product involved - use OAM STS.

2 - Use Oracle Service Bus (which most likely will also use OAM STS, but should make this a couple of mouse clicks).

3 - Use OAG - which doesn't need to talk to an STS. It has a very simple way to convert the OAM token into a WS-Security header.

If you're not using OSB already - I would recommend OAG. It's by far the simplest plus you get the additional benefits of OAG.
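For concreteness, here is the general shape of what the propagated identity looks like on the wire. This is a minimal, hypothetical sketch using the OASIS WSS UsernameToken profile; in an OAM/OWSM setup the identity would more typically travel as a SAML assertion inside the same `wsse:Security` header:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security
        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <!-- Hypothetical sketch: identity propagated as a UsernameToken.
           An OAM STS exchange would more likely place a SAML assertion here. -->
      <wsse:UsernameToken>
        <wsse:Username>Fabrizio</wsse:Username>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- the actual service request goes here -->
  </soap:Body>
</soap:Envelope>
```

Whichever of the three options you pick, the job is the same: produce a header like this that the OWSM policy on the service side will accept.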

PS - You can use OSB and OAG together in many scenarios - I was only saying to avoid OSB here because the service was already exposed and there was no benefit I could see for having OSB. If you have a reason to have OSB - let me know. I only know OSB at a very high level since my area of focus is security.

Why Oozie?

Chen Shapira - Mon, 2013-12-09 13:04

That's a really frequently asked question. Oozie is a workflow manager and scheduler. Most companies already have a workflow scheduler – ActiveBatch, AutoSys, UC4, HP Orchestration. These workflow schedulers run jobs on all their existing databases – Oracle, Netezza, MySQL. So why does Hadoop need its own special workflow scheduler?

As usual, it depends. In general, you can keep using any workflow scheduler that works for you. No need to change, really.
However, Oozie does have some benefits that are worth considering:

  1. Oozie is designed to scale in a Hadoop cluster. Each job will be launched from a different datanode. This means that the workflow load will be balanced and no single machine will become overburdened by launching workflows. This also means that the capacity to launch workflows will grow as the cluster grows.
  2. Oozie is well integrated with Hadoop security. This is especially important in a kerberized cluster. Oozie knows which user submitted the job and will launch all actions as that user, with the proper privileges. It will handle all the authentication details for the user as well.
  3. Oozie is the only workflow manager with built-in Hadoop actions, making workflow development, maintenance and troubleshooting easier.
  4. Oozie UI makes it easier to drill down to specific errors in the data nodes. Other systems would require significantly more work to correlate jobtracker jobs with the workflow actions.
  5. Oozie is proven to scale in some of the world’s largest clusters. The white paper discusses a deployment at Yahoo! that can handle 1250 job submissions a minute.
  6. Oozie gets callbacks from MapReduce jobs so it knows when they finish and whether they hang without expensive polling. No other workflow manager can do this.
  7. Oozie Coordinator allows triggering actions when files arrive at HDFS. This will be challenging to implement anywhere else.
  8. Oozie is supported by Hadoop vendors. If there is ever an issue with how the workflow manager integrates with Hadoop – you can turn to the people who wrote the code for answers.

So, should you use Oozie? If you find these benefits compelling, then yes. Step out of your comfort zone and learn another new tool. It will be worth it.
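To make point 7 a bit more concrete, here is a rough sketch (mine, not from the original post) of how an Oozie Coordinator triggers a workflow when data lands in HDFS. All names, paths and dates are hypothetical; by default the dataset instance is considered "arrived" once the directory and its _SUCCESS flag exist:

```xml
<!-- Hypothetical sketch: run a workflow once the day's input directory arrives -->
<coordinator-app name="daily-load" frequency="${coord:days(1)}"
                 start="2013-12-01T00:00Z" end="2014-12-01T00:00Z" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.2">
  <datasets>
    <dataset name="raw" frequency="${coord:days(1)}"
             initial-instance="2013-12-01T00:00Z" timezone="UTC">
      <uri-template>${nameNode}/data/raw/${YEAR}/${MONTH}/${DAY}</uri-template>
    </dataset>
  </datasets>
  <input-events>
    <data-in name="input" dataset="raw">
      <instance>${coord:current(0)}</instance>
    </data-in>
  </input-events>
  <action>
    <workflow>
      <app-path>${nameNode}/apps/daily-load-wf</app-path>
    </workflow>
  </action>
</coordinator-app>
```

Doing the same with a traditional enterprise scheduler usually means writing a polling job yourself.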

Categories: DBA Blogs

Adding Statcounter to Contao-CMS

Dietrich Schroff - Sun, 2013-12-08 03:47
If you are running a Contao based website and you want to add the Statcounter.com snippet for statistics, there is no manual explaining how to do this. I tried a few different things, and this was my solution:

  • Create a file named counter.html on your webserver at the top level of the Contao installation (htdocs or something equivalent).
  • Put the Statcounter snippet in this file
<!-- Start of StatCounter Code for Default Guide -->
<script type="text/javascript">
var sc_project=NNN; 
var sc_invisible=1; 
  • Then add the following next to the last line inside index.php:
include 'counter.html';
  • After that you will be able to get your statistics via Statcounter.com.

    My history with Big Data

    Tahiti Views - Sat, 2013-12-07 19:28
    Before I joined Cloudera, I hadn't had much formal experience with Big Data. But I had crossed paths with one of its major use cases before, so I found it easy to pick up the mindset. My previous big project involved a relational database hooked up to a web server. Naturally I wanted to be able to track visitor stats, detect denial-of-service attacks, and chart the most popular pages, search…

    Oracle Linux containers continued

    Wim Coekaerts - Fri, 2013-12-06 16:10
    More on Linux containers... the use of btrfs in particular, and being able to easily create clones/snapshots of container images. To get started: have an Oracle Linux 6.5 installation with UEKr3 and lxc installed and configured.

    lxc by default uses /container as the directory to store container images and metadata: /container/[containername]/rootfs and /container/[containername]/config. You can specify an alternative pathname using -P. To make it easy, I added an extra disk to my VM that I use to try out containers (xvdc) and just mount that volume under /container.

    - Create btrfs volume

    If not yet installed, install btrfs-progs (yum install btrfs-progs)

    # mkfs.btrfs /dev/xvdc1
    # mount /dev/xvdc1 /container 
    You can auto-mount this at startup by adding a line to /etc/fstab

    /dev/xvdc1		/container		btrfs   defaults 0 0

    - Create a container

    # lxc-create -n OracleLinux59 -t oracle -- -R 5.9
    This creates a btrfs subvolume /container/OracleLinux59/rootfs

    Use the following command to verify :

    # btrfs subvolume list /container/
    ID 260 gen 33 top level 5 path OracleLinux59/rootfs

    - Start/Stop container

    # lxc-start -n OracleLinux59

    This starts the container but without extra options your current shell becomes the console of the container.
    Add -c [file] and -d to have the container log console output to a file and return control to the shell after starting the container.

    # lxc-start -n OracleLinux59 -d -c /tmp/OL59console
    # lxc-stop -n OracleLinux59

    - Clone a container using btrfs's snapshot feature which is built into lxc

    # lxc-clone -o OracleLinux59 -n OracleLinux59-dev1 -s
    Tweaking configuration
    Copying rootfs...
    Create a snapshot of '/container/OracleLinux59/rootfs' in '/container/OracleLinux59-dev1/rootfs'
    Updating rootfs...
    'OracleLinux59-dev1' created
    # btrfs subvolume list /container/
    ID 260 gen 34 top level 5 path OracleLinux59/rootfs
    ID 263 gen 34 top level 5 path OracleLinux59-dev1/rootfs

    This snapshot clone is instantaneous and copy-on-write.
    You can check space usage like this:

    # btrfs filesystem df /container
    Data: total=1.01GB, used=335.17MB
    System: total=4.00MB, used=4.00KB
    Metadata: total=264.00MB, used=25.25MB
    # lxc-clone -o OracleLinux59 -n OracleLinux59-dev2 -s
    Tweaking configuration
    Copying rootfs...
    Create a snapshot of '/container/OracleLinux59/rootfs' in '/container/OracleLinux59-dev2/rootfs'
    Updating rootfs...
    'OracleLinux59-dev2' created
    # btrfs filesystem df /container
    Data: total=1.01GB, used=335.17MB
    System: total=4.00MB, used=4.00KB
    Metadata: total=264.00MB, used=25.29MB

    - Adding Oracle Linux 6.5

    # lxc-create -n OracleLinux65 -t oracle -- -R 6.5
    lxc-create: No config file specified, using the default config /etc/lxc/default.conf
    Host is OracleServer 6.5
    Create configuration file /container/OracleLinux65/config
    Downloading release 6.5 for x86_64
    Configuring container for Oracle Linux 6.5
    Added container user:oracle password:oracle
    Added container user:root password:root
    Container : /container/OracleLinux65/rootfs
    Config    : /container/OracleLinux65/config
    Network   : eth0 (veth) on virbr0
    'oracle' template installed
    'OracleLinux65' created

    - Install an RPM in a running container

    # lxc-attach -n OracleLinux59-dev1 -- yum install mysql
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package mysql.i386 0:5.0.95-3.el5 set to be updated

    This connects to the container and executes yum install mysql inside it.

    - Modify container resource usage

    # lxc-cgroup -n OracleLinux59-dev1 memory.limit_in_bytes 53687091
    # lxc-cgroup -n OracleLinux59-dev1 cpuset.cpus
    # lxc-cgroup -n OracleLinux59-dev1 cpuset.cpus 0,1

    Assigns cores 0 and 1. You can also use a range 0-2,...

    # lxc-cgroup -n OracleLinux59-dev1 cpu.shares
    # lxc-cgroup -n OracleLinux59-dev1 cpu.shares 100
    # lxc-cgroup -n OracleLinux59-dev1 cpu.shares
    # lxc-cgroup -n OracleLinux59-dev1 blkio.weight
    # lxc-cgroup -n OracleLinux59-dev1 blkio.weight 20
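A note on units: memory.limit_in_bytes takes a byte count, so it is worth sanity-checking the number before applying it. A quick sketch of the arithmetic for the value used above:

```shell
# memory.limit_in_bytes is expressed in bytes; the value used above
# (53687091) works out to roughly 51 MB.
limit=53687091
mb=$(( limit / 1024 / 1024 ))
echo "${mb} MB"
```

If you intended a larger limit, double-check the digit count before setting it.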

    A list of resource control parameters : http://docs.oracle.com/cd/E37670_01/E37355/html/ol_subsystems_cgroups.html#ol_cpu_cgroups

    Lenz has created a Hands-on lab which you can find here : https://wikis.oracle.com/display/oraclelinux/Hands-on+Lab+-+Linux+Containers

    Parameterizing Hive Actions in Oozie Workflows

    Chen Shapira - Fri, 2013-12-06 12:30

    A very common request I get from my customers is to parameterize the query executed by a Hive action in their Oozie workflow.
    For example, the dates used in the query depend on a result of a previous action. Or maybe they depend on something completely external to the system – the operator just decides to run the workflow on specific dates.

    There are many ways to do this, including using EL expressions, capturing output from shell action or java action.
    Here’s an example of how to pass the parameters through the command line. This assumes that whoever triggers the workflow (Human or an external system) has the correct value and just needs to pass it to the workflow so it will be used by the query.

    Here’s what the query looks like:

    insert into test select * from test2 where dt=${MYDATE}

    MYDATE is the parameter that allows me to run the query on a different date each time. When running this query in Hive, I'd use something like "set MYDATE='2011-10-10'" before running the query. But when I run it from Oozie, I need to pass the parameter to the Hive action.

    Let's assume I saved the query in a file called hive1.hql. Here's what the Oozie workflow would look like:

    <workflow-app name="cmd-param-demo" xmlns="uri:oozie:workflow:0.4">
    	<start to="hive-demo"/>
    	<action name="hive-demo">
    		<hive xmlns="uri:oozie:hive-action:0.2">
    			<job-tracker>${jobTracker}</job-tracker>
    			<name-node>${nameNode}</name-node>
    			<job-xml>hive-site.xml</job-xml>
    			<script>hive1.hql</script>
    			<param>MYDATE=${MYDATE}</param>
    		</hive>
    		<ok to="end"/>
    		<error to="kill"/>
    	</action>
    	<kill name="kill">
    		<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    	</kill>
    	<end name="end"/>
    </workflow-app>

    The important line is “MYDATE=${MYDATE}”. Here I translate an Oozie parameter to a parameter that will be used by the Hive script. Don’t forget to copy hive-site.xml and hive1.hql to HDFS! Oozie actions can run on any datanode and will not read files from the local file system.

    And here’s how you call Oozie with the command-line parameter:
    oozie job -oozie http://myserver:11000/oozie -config ~/workflow/job.properties -run -verbose -DMYDATE='2013-11-15'
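For completeness, the job.properties referenced above might look something like this. This is a sketch; the host names, ports, and the HDFS application path are hypothetical:

```properties
# job.properties - values are hypothetical
nameNode=hdfs://myserver:8020
jobTracker=myserver:8021
oozie.use.system.libpath=true
# HDFS directory that holds workflow.xml, hive-site.xml and hive1.hql
oozie.wf.application.path=${nameNode}/user/oozie/cmd-param-demo
```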

    That's it!

    Categories: DBA Blogs

    It’s a Miracle

    Mathias Magnusson - Thu, 2013-12-05 14:20

    Time to get back into blogging. I stopped a while ago and the reason was two-fold. As I was leaving my job at Kentor AB, I wanted to avoid any concerns about what I wrote while my three-month-long two-week notice played out. The other reason was that once I had left, life got really busy, both with work for my client and with the new adventure I had embarked on.

    The new adventure is bringing the Miracle brand to Sweden. I will try to build up an Oracle database focused team here in Stockholm, based on the success Miracle A/S has had in Denmark. Yes, it is as exciting as it is scary for someone who has lived his life in medium to large scale consulting practices.

    However, I have always had great admiration for the guys who started Miracle and what they have achieved. Getting a chance to bring the same style and quality to my home market of Stockholm and Sweden was an offer that was just too good to pass up. That is the kind of opportunity most of us will only get once or twice in a career. This one came at a time when it was close enough to work with everything else that was going on. The one thing that may not be optimal is having a new house built at the same time. Actually, that makes it completely not optimal. But when others wonder when the best time to get started is, the phrase I keep repeating is "There is no time such as the present", so I took my own advice for once.

    So now the work is on getting the Oracle practice going – or rather, it is going; it just needs a few more legs. And by legs I mean consultants.

    Next up, however, is partying with our colleagues down in Copenhagen, close to the headquarters. Tomorrow evening the Christmas party kicks off. Who knows when it will end. :-)

    This blog will soon return to a more technical program, though probably with some posts on interesting aspects of starting up a company. I'm sure the next year will teach us a lot about how to do that.

    About that hiring business… When is the best time to add more people to a small company, and how do we manage risk? Well… ehhh… Yeah… That's right: there is no best time, but there is no time such as the present.

    Largest Tables Including Indexes and LOBs

    Jeremy Schneider - Wed, 2013-12-04 16:04

    Just a quick code snippet. I do a lot of data pumps to move schemas between different databases; for example, taking a copy of a schema to an internal database to try to reproduce a problem. Some of these schemas have some very large tables. The large tables aren't always needed to research a particular problem.

    Here’s a quick bit of SQL to list the 20 largest tables by total size – including space used by indexes and LOBs. A quick search on google didn’t reveal anything similar so I just wrote something up myself. I’m pretty sure this is somewhat efficient; if there’s a better way to do it then let me know! I’m posting here so I can reference it in the future. :)

    with segment_rollup as (
      select owner, table_name, owner segment_owner, table_name segment_name from dba_tables
        union all
      select table_owner, table_name, owner segment_owner, index_name segment_name from dba_indexes
        union all
      select owner, table_name, owner segment_owner, segment_name from dba_lobs
        union all
      select owner, table_name, owner segment_owner, index_name segment_name from dba_lobs
    ), ranked_tables as (
      select rank() over (order by sum(blocks) desc) rank, sum(blocks) blocks, r.owner, r.table_name
      from segment_rollup r, dba_segments s
      where s.owner=r.segment_owner and s.segment_name=r.segment_name
        and r.owner=upper('&schema_name')
      group by r.owner, r.table_name
    )
    select rank, round(blocks*8/1024) mb, table_name   -- assumes an 8 KB block size
    from ranked_tables
    where rank<=20;

    The output looks like this:

    Enter value for schema_name: someschema
          RANK         MB TABLE_NAME
    ---------- ---------- ------------------------------
             1      14095 REALLYBIGTABLE_USESLOBS
             2       6695 VERYBIG_MORELOBS
             3       5762 VERYLARGE
             4       5547 BIGBIG_LOTSOFINDEXES
             5        446 MORE_REASONABLE
             6        412 REASONABLE_TABLE_2
             7        377 ANOTHERONE
             8        296 TABLE1235
             9        280 ANOTHER_MADEUP_NAME
            10        141 I_CANT_POST_PROD_NAMES_HERE
            11         99 SMALLERTABLE
            12         92 NICESIZETABLE
            13         89 ILIKETHISTABLE
            14         62 DATATABLE
            15         53 NODATATABLE
            16         48 NOSQLTABLE
            17         30 HERES_ANOTHER_TABLE
            18         28 TINYTABLE
            19         24 ACTUALLY_THERES_400_MORE_TABLES
            20         19 GLAD_I_DIDNT_LIST_THEM_ALL
    20 rows selected.

    And just a quick reminder – the syntax to exclude a table from a data pump schema export is:

    expdp ... exclude=SCHEMA_EXPORT/TABLE:[TABNAME],[TABNAME],...

    Hope this is useful!

    Joint Webinar with Oracle – Top 5 Key Reasons Why Oracle VM is Best for Oracle Database

    Oracle NZ - Wed, 2013-12-04 14:50

    Top 5 Key Reasons Why Oracle VM is Best for Oracle Database


    When: Tuesday, December 17th, 10am PST

    Register Here:


    Oracle VM application-driven architecture is designed for rapid deployment and ease of lifecycle management providing IT a highly scalable and cost effective virtualization solution for Oracle Database and business-critical applications.

    Join us in this webcast featuring Francisco Munoz Alvarez, a seasoned expert and Oracle ACE Director, as he shares his performance benchmark findings highlighting key reasons why Oracle VM is the best virtualization technology for Oracle Databases.



    Copyright © OracleNZ by Francisco Munoz Alvarez [Joint Webinar with Oracle - Top 5 Key Reasons Why Oracle VM is Best for Oracle Database], All Rights Reserved. 2016.
    Categories: DBA Blogs

    Oracle Linux containers

    Wim Coekaerts - Wed, 2013-12-04 14:24
    So I played a bit with docker yesterday (really cool) and, as I mentioned, it uses lxc (Linux containers) underneath the covers. To create an image based on OL6, I used febootstrap, which works fine, but Dwight Engen pointed out that I should just use lxc-create since it does all the work for you.

    Dwight's one of the major contributors to lxc. One of the things he did a while back, was adding support in lxc-create to understand how to create Oracle Linux images. All you have to do is provide a version number and it will figure out which yum repos to connect to on http://public-yum.oracle.com and download the required rpms and install them in a local subdirectory. This is of course superconvenient and incredibly fast. So... I played with that briefly this morning and here's the very short summary.

    Start out with a standard Oracle Linux 6.5 install and uek3. Make sure to add/install lxc if it's not yet there (yum install lxc) and you're good to go.

    *note - you also have to create /container for lxc - so also do mkdir /container after you install lxc; thanks to Tony for pointing this out.

    # lxc-create -n ol65 -t oracle -- -R 6.5

    That's it. lxc-create will know this is an Oracle Linux container and use OL6.5's repository to create the container named ol65.

    lxc-create automatically connects to public-yum, figures out which repos to use for 6.5, downloads all required rpms and generates the container. At the end you will see :

    Configuring container for Oracle Linux 6.5
    Added container user:oracle password:oracle
    Added container user:root password:root
    Container : /container/ol65/rootfs
    Config    : /container/ol65/config
    Network   : eth0 (veth) on virbr0
    'oracle' template installed
    'ol65' created

    Now all you need to do is :

    lxc-start --name ol65

    And you are up and running with a new container. Very fast, very easy.

    If you want an OL5.9 container (or so) just do lxc-create -n ol59 -t oracle -- -R 5.9. Done. lxc has tons of very cool features, which I will get into more later. You can use this model to import images into docker as well, instead of using febootstrap.

    #  lxc-create -n ol65 -t oracle -- -R 6.5
    #  tar --numeric-owner -jcp -C /container/ol65/rootfs . | \
        docker import - ol6.5
    #  lxc-destroy -n ol65

    Oracle Linux 6.5 and Docker

    Wim Coekaerts - Tue, 2013-12-03 23:21
    I have been following the Docker project with great interest for a little while now but never got to actually try it out at all. I found a little bit of time tonight to at least try hello world.

    Since docker relies on cgroups and lxc, it should be easy with UEK3. We provide official support for lxc - we are in fact a big contributor to the lxc project (shout out to Dwight Engen) - and the docker website says that you need to be on 3.8 for it to just work. So OL6.5 + UEK3 seems like the perfect combination to start out with.

    Here are the steps to do a few very simple things:

    - Install Oracle Linux 6.5 (with the default UEK3 kernel (3.8.13))

    - To quickly play with docker you can just use their example

    (*) if you are behind a firewall, set your HTTP_PROXY

    -> If you start from a Basic Oracle Linux 6.5 installation, install lxc first. Your out-of-the-box OL should be configured to access the public-yum repositories.

    # yum install lxc

    -> ensure you mount the cgroups fs

    # mkdir -p /cgroup ; mount none -t cgroup /cgroup

    -> grab the docker binary

    # wget https://get.docker.io/builds/Linux/x86_64/docker-latest -O docker
    # chmod 755 docker

    -> start the daemon

    (*) again, if you are behind a firewall, set your HTTP_PROXY setting (http_proxy won't work with docker)

    # ./docker -d &
    -> you can verify if it works

    # ./docker version
    Client version: 0.7.0
    Go version (client): go1.2rc5
    Git commit (client): 0d078b6
    Server version: 0.7.0
    Git commit (server): 0d078b6
    Go version (server): go1.2rc5

    -> now you can try to download an example using ubuntu (we will have to get OL up there :))

    # ./docker run -i -t ubuntu /bin/bash

    this will go and pull in the ubuntu template and run bash inside

    # ./docker run -i -t ubuntu /bin/bash
    WARNING: IPv4 forwarding is disabled.

    and now I have a shell inside ubuntu!

    -> ok so now on to playing with OL6. Let's create and import a small OL6 image.

    -> first install febootstrap so that we can create an image

    # yum install febootstrap

    -> now you have to point to a place where you have the repo XML metadata and the packages on an http server. I copied my ISO content over to such a place

    I will install some basic packages in the subdirectory ol6 (it will create an OL installed image). This is based on what folks did for CentOS, so it works the same way (https://github.com/dotcloud/docker/blob/master/contrib/mkimage-centos.sh):

    # febootstrap -i bash -i coreutils -i tar -i bzip2 -i gzip \
    -i vim-minimal -i wget -i patch -i diffutils -i iproute -i yum ol6 ol6 http://wcoekaer-srv/ol/
    # touch ol6/etc/resolv.conf
    # touch ol6/sbin/init

    -> tar it up and import it

    # tar --numeric-owner -jcpf ol6.tar.gz -C ol6 .
    # cat ol6.tar.gz | ./docker import - ol6


    List the image

    # ./docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ol6                 latest              d389ed8db59d        8 minutes ago       322.7 MB (virtual 322.7 MB)
    ubuntu              12.04               8dbd9e392a96        7 months ago        128 MB (virtual 128 MB)

    And now I have a docker image with ol6 that I can play with!

    # ./docker run -i -t ol6 ps aux
    WARNING: IPv4 forwarding is disabled.
    root         1  1.0  0.0  11264   656 ?        R+   23:58   0:00 ps aux

    Way more to do but this all just worked out of the box!

    # ./docker run ol6 /bin/echo hello world
    WARNING: IPv4 forwarding is disabled.
    hello world

    That's it for now. Next time, I will try to create a mysql/ol6 image and various other things.

    This really shows the power of containers on Linux, and of Linux itself. We have all these various Linux distributions, but inside lxc (or docker) you can run ubuntu, debian, gentoo, yourowncustomcrazything and it will just run - old versions of OL, newer versions of OL, all on the same host kernel.

    I can run OL6.5 and create OL4, OL5, OL6 containers or docker images but I can also run any old debian or slackware images at the same time.

    Android Update: 4.4

    Dietrich Schroff - Tue, 2013-12-03 14:44
    After nearly everyone else had got the update, my device finally came up with the tiny update icon.
    If you follow the link you will get the following information:
    • KitKat 4.4: Smart, simple, and truly yours 
    • and several other enhancements of version 4.4
    For a complete history of all updates visit this posting.

    OSP #2c: Build a Standard Platform from the Bottom-Up

    Jeremy Schneider - Mon, 2013-12-02 15:07

    This is the fourth of twelve articles in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.

    This article – building a standard platform – has been broken into three parts. We’ve already discussed standardization in general and looked in-depth at storage. Now it’s time to look in-depth at three more key decisions: CPU, memory, and networking.


    One of the key ideas of Operationally Scalable Practices is to start early with standards that don’t get in the way of consolidation. As you grow, consolidation will be increasingly important – saving both money and time. Before we dig into specifics of standardizing CPU and memory, we need to briefly discuss consolidation in general.

    Consolidation can happen at many levels:

    1. Single schema and multiple customers
    2. Single database and multiple schemas or tenants (12c CDB)
    3. Single OS and multiple databases
    4. Single hardware and multiple OS’s (virtualization)

    Two important points about this list. First, it works a lot like performance tuning: the biggest wins are always highest in the stack. If you want to save time and money then you should push to consolidate as high as possible, ideally in the application. But there are often forces pushing consolidation lower in the stack as well. For example:

    • Google doesn’t spin up new VMs every time a new customer signs up for Google Apps. Their existing webapp stack handles new customers. This is a great model – but if your app wasn’t designed this way from the beginning, it could require a massive development effort to add it.
    • It’s obvious but worth stating: you can only push consolidation up a homogeneous stack. If the DB runs on Linux and the app runs on Windows, then naturally they’ll each need their own VM. Same goes for the other three tiers.
    • Server operating systems have robust multiuser capabilities – but sharing an Operating System can still be tricky and these days virtualization offers a strong value proposition (especially when combined with automation). Then there are containers, which fall somewhere in between single OS and virtualization.
    • Security or regulatory or contractual requirements may require separate storage, separate databases or separate operating systems.
    • A requirement for independent failover may drive separate databases. In data guard, whole databases (or whole container databases) must be failed over as a single unit.

    The second important point is that realistically you will encounter all four levels of consolidation at some point as you grow. Great standards accommodate them all.


    In my opinion, batch workloads can vary but interactive workloads should always be CPU-bound (not I/O-bound). To put it another way: there are times when your database is mainly servicing some app where end-users are clicking around. At those times, your “top activity” graph in enterprise manager should primarily be green. Not blue, not red, not any other color. (And not too much of that green!) I’m not talking about reports, backups, or scheduled jobs – just the interactive application itself. (Ideally you even have some way to distinguish between different categories of activity, in which case there are ways to look at the profile of the interactive traffic even when there is other activity in the database!)

    This leads into the question of how much CPU you need. I don’t have any hard and fast rules for CPU minimums in a standard configuration. Just two important thoughts:

    1. Maximum unit of consolidation: CPU is a major factor in how many applications can be consolidated on a single server. (Assuming that we’re talking about interactive applications with effective DB caching – these should be primarily CPU-bound.)
    2. Minimum unit of licensing: If partitioning or encryption becomes a requirement for you six months down the road then you’ll have to license the number of cores in one server. Oracle requires you to license all CPUs physically present in the server if any feature is used on that server.

    The goal is to limit future purchasing to this configuration. And as with storage, if you really must have more than one configuration, then try to keep it down to two (like a high-CPU option).


    I don’t have a formula to tell you how much memory you should standardize on either. It’s surprising how often SGAs are still poorly sized today – both too small and too large. You need to understand your own applications and their behavior. It’s worthwhile to spend some time reading sar or AWR reports and looking at historical activity graphs.

    Once you start to get a rough idea what your typical workload looks like, I would simply suggest to round up as you make the final decision on standard total server memory capacity. There are two reasons for this:

    1. OS and database consolidation have higher memory requirements. Application and schema/multitenant consolidation will not be as demanding on memory – but as we pointed out earlier, your standards should support all levels of consolidation.
    2. You’re probably not maxing out the memory capacity of your server and it’s probably not that expensive to bump it up a little bit.
    Consolidation Level                   Common Bottleneck
    Single Schema (Multiple Customers)    CPU
    Multiple Schemas/PDBs                 CPU
    Multiple Databases                    Memory
    Multiple OS’s (VMs)                   Memory

    Networking

    Small companies generally start with one network. But these days networking can quickly get complicated even at small companies, since network gear allows you to define and deploy multiple logical networks on the same physical equipment. Early on, even if it doesn’t all seem relevant yet, I would recommend discussing these networking topics:

    • Current traffic: Are you gathering data on current network usage? Do you know how much bandwidth is used by various services, and how bursty those services are?
    • Logical segregation: Which network should be used for application traffic? What about monitoring traffic, backup traffic, replication traffic (e.g. data guard or goldengate) and operations traffic (kickstarts, data copies between environments, etc)? What about I/O traffic (e.g. NFS or iSCSI)? What is the growth strategy and how will this likely evolve over the coming years?
    • Physical connections: How many connections do we need, accounting for redundancy and isolation/performance requirements and any necessary physical network separation?
    • Clustering: Clustering generally requires a dedicated private network and tons of IPs (on both the private cluster network and your corporate network). Sometimes it has higher bandwidth and tighter latency requirements than usual. Generally it is recommended to deploy RAC with at least 10G Ethernet for the interconnect. Is there a general strategy for how this will be addressed when the need arises?

    It will benefit you greatly to take these discussions into consideration early and account for growth as you build your standard platform.


    One design pattern that I’ve found to be helpful is the idea of slots. The basic idea is similar to physical PCI or DIMM slots – but these are logical “slots” which databases or VMs can use. This is a simplified, practical version of the service catalog concept borrowed from ITIL for private cloud architectures – and this can provide a strong basis if you grow or migrate to that point.

    1. Determine the smallest amount of memory which a standardized database (SGA) or VM will use. This will determine a slot size.
    2. Determine the largest amount of memory which can be allocated on the server. For databases, about 70% of server memory for SGA is a good starting point if it's an interactive system. For VMs it's even possible to allocate more memory than is physically present, but I don't know the latest conventional wisdom about doing this.
    3. Choose additional DB or VM size options as even multiples of the minimum size.

    For example, a database server containing 64GB of memory might have a slot size of 5GB with 9 total available slots. Anyone who wants a database can choose either a small or large database; a small database uses 1 slot and its SGA is 5GB. A large database uses 5 slots and its SGA is 25GB.

    After the basic slot definition has been decided, CPU limits can be drafted. If the database server has 8 physical cores then the small database might have a hard limit of 2 CPUs and a large database might have a hard limit of 6 CPUs.
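The sizing exercise above can be sketched in a few lines. This is only an illustration of the worked example: the 70% SGA guideline, the rounding, and the small/large shapes are assumptions lifted from the text, not fixed rules.

```python
def plan_slots(server_mem_gb, slot_gb):
    """Derive the slot count and example database shapes for a standard server."""
    usable_gb = server_mem_gb * 0.70          # ~70% of memory for SGAs
    total_slots = round(usable_gb / slot_gb)  # round to a whole number of slots
    shapes = {
        "small": {"slots": 1, "cpu_cap": 2},  # CPU caps are workload-dependent
        "large": {"slots": 5, "cpu_cap": 6},
    }
    for shape in shapes.values():
        shape["sga_gb"] = shape["slots"] * slot_gb
    return total_slots, shapes

total_slots, shapes = plan_slots(server_mem_gb=64, slot_gb=5)
print(total_slots)                 # 9 slots on the 64GB example server
print(shapes["large"]["sga_gb"])   # 25 GB SGA for a large database
```

Running this against the 64GB/8-core example reproduces the 9 slots and the 5GB/25GB SGA sizes described above.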

    One area which can be confusing with CPU limits is factoring in processor threads. When determining your limits for a consolidation environment, make sure that individual applications are capped before pushing the total load over the physical number of CPUs. But allow the aggregate workload to approach the logical number of CPUs in a period of general heavy load coming from lots of applications.

    In practice, that means:

    1. For multiple databases, set cpu_count on each instance low according to the physical core count, and calibrate the aggregate total against the logical count.
    2. For multiple schemas in a single database, use Resource Manager to limit CPU for each schema according to the physical count, and set cpu_count high according to the logical count.
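As a sanity check, the two calibration rules above can be expressed numerically. The core counts and the per-database caps below are assumed values for illustration:

```python
physical_cores = 8
threads_per_core = 2                 # assumed SMT factor
logical_cpus = physical_cores * threads_per_core

# Hypothetical fleet of consolidated databases and their CPU caps.
db_cpu_caps = {"db1": 2, "db2": 2, "db3": 6, "db4": 4}

# Rule 1: no single database may be capped above the physical core count.
assert all(cap <= physical_cores for cap in db_cpu_caps.values())

# Rule 2: the aggregate may oversubscribe physical cores, but is calibrated
# against the logical CPU count for periods of broad heavy load.
aggregate_cap = sum(db_cpu_caps.values())
print(aggregate_cap, logical_cpus)   # 14 16
```

Here no individual database can saturate the 8 physical cores, while the fleet as a whole can approach the 16 logical CPUs when everyone is busy at once.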

    Slot Example

    Now you have a first draft of memory and CPU definitions for a small and large database. The next step is to define the application workload limits for each database size. As you’re consolidating applications into a few databases, how many instances of your app can be allowed in a small and large database respectively?

    Suppose you’re a SaaS company that hosts and manages lots of SAP databases for small businesses. I don’t actually know what the CPU or memory requirements of SAP are so I’m making these numbers up – but you might decide that a small database (5GB/2cpu) can support one SAP instance and a large database (25GB/6cpu) can support 25 instances (with PDBs).

    Remember that schema/multi-tenant consolidation is very efficient – so you can service many more applications with less memory compared to multiple databases. For a starting point, make sure that the large database uses more than half of the slots then use server CPU capacity to determine how many app instances can be serviced by a large database.

    Another observation is that your production system probably uses more CPU than your test and dev systems. You may be able to double or triple the app instance limits for non-production servers.

    It’s an iterative process to find the right slot sizes and workload limits. But the payoff is huge: something that’s easy to draw on a whiteboard and explain to stakeholders. Your management has some concrete figures to work with when projecting hardware needs against potential growth. The bottom line is that you have flexible yet strong standards – which will enable rapid growth while easing management.

    Example Slot Definitions

    Database Size   Slots   Max SAP Instances (Production Server)   Max SAP Instances (Test Server)
    Small           1       1                                       2
    Large           5       25                                      50

    Standard Server:
    - 8 Cores
    - 64 GB Memory
    - 9 Slots
    - nr_hugepages = 23552 (45 GB plus 1 GB extra)

    Standard Database:
    - sga_target/max_size = [slots * 5] GB
    - pga_aggregate_target = [slots * 2] GB
    - cpu_count = [slots + 1]
    - processes = [slots * 400]
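For what it's worth, the standard definitions above can be reproduced with a few lines of arithmetic. The only assumption added here is the 2 MB huge page size, which is typical on x86-64 Linux:

```python
HUGEPAGE_MB = 2        # assumed 2 MB huge pages (typical on x86-64 Linux)
SLOT_GB = 5
TOTAL_SLOTS = 9

total_sga_gb = TOTAL_SLOTS * SLOT_GB                       # 45 GB of SGA
nr_hugepages = (total_sga_gb + 1) * 1024 // HUGEPAGE_MB    # plus 1 GB extra

def db_params(slots):
    """Per-database settings from the standard formulas above."""
    return {
        "sga_gb": slots * 5,
        "pga_gb": slots * 2,
        "cpu_count": slots + 1,
        "processes": slots * 400,
    }

print(nr_hugepages)    # 23552, matching the standard server definition
print(db_params(5))    # settings for the large (5-slot) database
```

This is handy to keep in a script so that adding a new slot size or bumping the server memory standard stays a one-line change.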

    Planning the journey from Oracle Warehouse Builder to Oracle Data Integrator

    Antonio Romero - Mon, 2013-12-02 10:31

    Check out the blog post here on the OWB integration and migration capabilities in ODI 12c. Julien and I give an overview of what you can see and expect in the recent release. It will give you some insights and pointers to the documentation.


    Rob van Wijk - Thu, 2013-11-28 16:06
    During a performance inspection of some E-Business Suite batch programs running on, I noticed almost 99% of the consistent reads missing in the tkprof file for a particular insert statement. It was a giant INSERT ALL statement where the accompanying select statement contained several subqueries in a "case when exists (query)" construct. And those subqueries in the select clause was what ...

    Oracle Linux 6.5

    Wim Coekaerts - Wed, 2013-11-27 11:21
    Oracle Linux 6.5 has been uploaded to ULN and public-yum. OL6.5 includes UEKr3 based on Linux 3.8 alongside the base 2.6.32-based kernel.

    The ISOs are available for download from My Oracle Support and will be available shortly (early December) from edelivery.

    The ISOs are also being mirrored to public external mirror sites, one of them is my own mirror site.

    Release notes are here.

    Dodeca Now Available for Download!

    Tim Tow - Mon, 2013-11-25 18:23
    This past Friday, November 22nd, we completed our work on the newest version of the Dodeca Spreadsheet Management System and made Dodeca available for download from our website.  This blog entry is a sneak peek at some of the new features in version 6.6, as well as 6.5, which was released to select customers with specific functionality requests.  There are a few features that are particularly useful for end users, so let’s start there.
    More Excel Support
    Dodeca has always been strong on Excel version support and this version delivers even more Excel functionality.  Internally, we use the SpreadsheetGear control, which does a very good job with Excel compatibility.  This version of Dodeca integrates a new version of SpreadsheetGear that now has support for 398 Excel functions including the new SUMIFS, COUNTIFS, and CELL functions.
    Excel Page Setup Dialog
    The new version of Dodeca includes our implementation of the Excel Page Setup Dialog which makes it easy for users to customize the printing of Dodeca views that are based on Excel templates.  Note that for report developers, the Excel Page Setup has also been included in the Dodeca Template Designer.

    New PDF View Type
    Customers who use PDF files in their environments will like the new PDF View Type.  In previous releases of Dodeca, PDF documents displayed in Dodeca opened in an embedded web browser control.  Beginning in this version, Dodeca includes a dedicated PDF View type that uses a specialized PDF control.

    View Selector Tooltips
    Finally, users will like the new View Selector tooltips which optionally display the name and the description of a report as a tooltip.

    Performance is one of those things that users always appreciate, so we have added a new setting that can significantly improve performance in some circumstances.  Dodeca has a well-defined set of configuration objects that are stored on the server and we were even awarded a patent recently for the unique aspects of our metadata design.  That being said, depending on how you implement reports and templates, there is the possibility of having many queries issued to the server to check for configuration updates.  In a few instances, we saw that optimizing the query traffic could be beneficial, so we have implemented the new CheckForMetadataUpdatesFrequencyPolicy property.  This property, which is controlled by the Dodeca administrator, tells Dodeca whether we should check the server for updates before any object is used, as was previously the case, only when a view opens, or only when the Dodeca session begins.  We believe the latter case will be very useful when Dodeca is deployed in production as objects configured in production often do not change during the workday and, thus, network traffic can be optimized using this setting.  The screenshot below shows where the administrator can control the update frequency.
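To illustrate the trade-off between the three update-check policies, here is a generic sketch of the pattern. This is not Dodeca's actual implementation; the policy names and event model are made up for the example:

```python
class MetadataChecker:
    """Counts server round-trips under each hypothetical update-check policy."""

    def __init__(self, policy):
        self.policy = policy       # "object", "view", or "session"
        self.server_calls = 0

    def on_session_start(self):
        self.server_calls += 1     # every policy checks at least once

    def on_view_open(self):
        if self.policy in ("object", "view"):
            self.server_calls += 1

    def on_object_use(self):
        if self.policy == "object":
            self.server_calls += 1

# A session that opens 3 views, each touching 10 configuration objects:
calls = {}
for policy in ("object", "view", "session"):
    checker = MetadataChecker(policy)
    checker.on_session_start()
    for _ in range(3):
        checker.on_view_open()
        for _ in range(10):
            checker.on_object_use()
    calls[policy] = checker.server_calls

print(calls)   # {'object': 34, 'view': 4, 'session': 1}
```

The session-level policy cuts dozens of round-trips down to one, which is why it pays off in production where configured objects rarely change during the workday.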

    Though users will like these features, we have put a lot of new things in for the people who create Dodeca views and those who administer the system.  Let’s start with something that we think all Dodeca admins will use frequently.
    Metadata Property Search Utility
    As our customers continue to expand their use of Dodeca, the number of objects they create in the Dodeca environment continues to grow.  In fact, we now have customers who have thousands of different objects that they manage in their Dodeca environments.  The Metadata Property Search Utility will help these users tremendously.

    This utility allows the administrator to enter a search string and locate every object in our system that contains that string.  Once a property is located, there is a hyperlink that will navigate to the given object and automatically select the relevant property.  This dialog is modeless, which means you can navigate to any of the located items without closing the dialog.

    Note: this version does not search the contents of Excel files in the system.
    Essbase Authentication Services
    In the past, when administrators wished to use an Essbase Authentication service to validate a login against Essbase and automatically obtain Dodeca roles based on the Essbase user’s group memberships, they had to use an Essbase connection where all users had access to the Essbase application and database.  The new ValidateCredentialsOnly property on both of the built-in Essbase Authentication services now flags the service to check login credentials at the server-level only, eliminating the need for users to have access to a specific Essbase database.
    New Template Designer Tools
    Prior to Dodeca 6.x, all template editing was performed directly in Excel.  Since that time, however, most template design functionality has been replicated in the Dodeca Template Designer, and we think it is preferable due to the speed and ease with which users can update templates stored in the Dodeca repository.  We have added a couple of new features to the Template Designer in this version.  The first tool is the Group/Ungroup tool, which allows designers to easily apply Excel grouping to rows and/or columns within the template.  The second new tool is the Freeze/Unfreeze tool, which is used to freeze rows and/or columns in place for scrolling.
    Parameterized SQL Select Statements
    Since introducing the SQLPassthroughDataSet object in the Dodeca 5.x series, we have supported the idea of tokenized select statements.  In other words, the SQL could be written so that point-of-view selections made by users could be used directly in the select statement.  In a related fashion, we introduced the concept of parameterized insert, update, and delete statements in the same version.  While parameterized statements are similar in concept to tokenized statements, there is one important distinction under the covers.  In Dodeca, parameterized statements are parsed and converted into prepared statements that can be used multiple times, resulting in more efficient use of server resources.  Parameterized select statements were introduced in this version of Dodeca so that customers using databases that cache prepared statements can realize improved server efficiency on their select statements.
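As a general illustration of the tokenized-versus-parameterized distinction (using Python and sqlite3 here, not Dodeca's own API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", 250)])

region = "east"

# Tokenized: the value is spliced into the SQL text itself, so every
# distinct value produces a brand-new statement for the server to parse.
tokenized = f"SELECT SUM(amount) FROM sales WHERE region = '{region}'"
total1 = conn.execute(tokenized).fetchone()[0]

# Parameterized: one SQL text with a placeholder; the value is bound at
# execute time, so the prepared statement can be reused for any region.
parameterized = "SELECT SUM(amount) FROM sales WHERE region = ?"
total2 = conn.execute(parameterized, (region,)).fetchone()[0]

print(total1, total2)   # 100 100
```

Both queries return the same result, but the parameterized form lets a database that caches prepared statements skip re-parsing on every execution (and, as a bonus, is safe against SQL injection).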
    Workbook Script Formula Editor Improvements
    We have also been working hard to improve extensibility for developers using Workbook Scripts within Dodeca.  In this release, our work focused on the Workbook Script Formula Editor.  The first thing we added here is color coding that automatically detects and distinguishes Excel functions, Workbook Script functions, and Dodeca tokens.  In the new version, Excel functions are displayed in green, Dodeca functions and parentheses are displayed in blue, and tokens are displayed in ochre.   Here is an example.

    In addition, we have implemented auto-complete for both Excel and Dodeca functions.

    New SQLException Event
    Version 6.6 of Dodeca introduces a new SQLException event that provides the ability for application developers to customize the behavior when a SQL Exception is encountered.
    XCopy Release Directory
    Beginning in version 6.6, the Dodeca Framework installation includes a pre-configured directory intended for customers who prefer to distribute their client via XCopy deployment instead of using Microsoft ClickOnce distribution.  The XCopy deployment directory is also for use by those customers who use Citrix for deployment.
    Mac OS X Release Directory
    The Dodeca Framework installation now includes a pre-compiled Dodeca.app deployment for customers who wish to run the Dodeca Smart Client on Mac OS X operating systems.  What that means is that Dodeca now runs on a Mac without the need for any special Windows emulators.  Dodeca does not require Excel to run on the Mac (nor does it require Excel to run on Windows for that matter), so you can certainly save your company significant licensing fees by choosing Dodeca for your solution. 

    In short, you can see we continue to work hard to deliver functionality for Dodeca customers.  As always, the Dodeca Release Notes provide detailed explanations of all new and updated Dodeca features.  As of today, we have decided to make the Release Notes and other technical documents available for download to non-Dodeca customers.  If you are curious about all of the things Dodeca can do, and if you aren't afraid to dig into the details, you can now download our 389-page cumulative Release Notes document from the Dodeca Technical Documents section of our website.

    Categories: BI & Warehousing

    Browsing the Meta Data Services Repository of the Oracle SOA/BPM Suite 11g

    Jan Kettenis - Mon, 2013-11-25 13:09
    In this article I explain a handy way to browse the MDS on the SOA/BPM server from JDeveloper, as well as how to download its content using Enterprise Manager, and finally an (as far as I know) undocumented feature to look up artifacts using a browser.

    This article has been updated on November 26 to include the option regarding downloading the MDS content.

    The Meta Data Services (or MDS for short) of Oracle's SOA/BPM Suite is used to manage various types of artifacts like:
    • Process models created with Process Composer,
    • Abstract WSDL's and XSD's,
    • Domain Value Map's (DVM), and even
    • Artifacts  of deployed composites.
    Browsing the MDS from JDeveloper

    To find out what actually is deployed in the MDS you can set up an MDS connection from JDeveloper to the server. Such a connection can be handy, for example to verify that replacements of variables in the MDS artifacts are properly done when deploying. Using this connection you can open those artifacts in JDeveloper and check the source.

    To create an MDS connection go to the Resource Palette -> New Connection -> SOA-MDS. This will pop up a tab from which you can create a database connection to the MDS, for example the dev_mds schema. Having created the database connection you have to choose the partition to use for the SOA-MDS connection. To be able to check out processes created with Composer from the MDS, or to save them in the MDS, you create a SOA-MDS connection that uses the obpm partition. As the name already suggests, this is a BPM-specific partition. To browse the other artifacts I mention above, you use the soa-infra partition, which is shared by both SOA and BPM.

    In the figure below you can see two types of connections, above to the soa-infra and below to the obpm partition. In the (soa-infra) apps you can find the reusable artifacts that you have deployed explicitly (like abstract WSDL's, XSD's, EDL's).

    What you also see is a deployed-composites folder that shows all composites that have been deployed. When expanding a composite, you will find that all artifacts are shown. This is a much easier way to verify that you do not deploy too many artifacts to the server than by introspecting the SAR file, I would say. Except for .bpmn files (which at the time of writing are not yet recognized by this MDS browser) you can open all plain text files in JDeveloper.

    Downloading the MDS from Enterprise Manager

    Now let's assume that you have not been given access to the MDS's DB schema on the environment (perhaps because it is Production), but you do have access to Enterprise Manager. For this situation my dear colleague Subhashini Gumpula pointed me to the possibility to download the content from the MDS as follows:

    soa-infra -> Administration -> MDS Configuration -> and then on the right side of the screen: Export.

    This will download a soa-infra_metadata.zip file with its content!
    Looking up Artifacts in the MDS Using a Browser

    Now let's assume that you also have not been given access to Enterprise Manager on the environment, but you can access the server using the HTTP protocol. Thanks to my dear colleague Luc Gorrisen I recently learned that you can browse the MDS using part of the URL of the composite, as follows:


    For example, to look up the abstract WSDL of some ApplicationService that is used by some StudentRegistration business process, I can use the following URL.


    Mind you, this is not restricted to only the WSDL's the composite is using.

    Ain't that cool?!

    Conditional Borders

    Tim Dexter - Mon, 2013-11-25 11:57

    How can you conditionally turn cell borders on and off in Publisher's RTF/XSLFO templates? With a little digging you'll find what appear to be the appropriate attributes to update in your template. You would logically come up with using the various border styling options:




    Buuuut, that doesn't work. Updating them individually does not make a difference to the output. Not sure why, and I will ask, but for now here's the solution. Use the compound border formatter border-top|bottom|left|right. This takes the form border-bottom="0.5pt solid #000000". You set all three values (width, style, color) at once rather than individually. In a BIP template you use:

    <?attribute@incontext:border-bottom;'3.0pt solid #000000'?>
    <?attribute@incontext:border-top;'3.0pt solid #000000'?>
    <?attribute@incontext:border-left;'3.0pt solid #000000'?>
    <?attribute@incontext:border-right;'3.0pt solid #000000'?>
    <?end if?>

    3pt borders are a little excessive but you get the idea. This approach can be used with the if@row option too to get the complete row borders to update. If your template will need to be run in right-to-left languages, e.g. Arabic or Hebrew, then you will need to use start and end in place of left and right.
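Putting it together with an opening condition, a complete construct looks something like the sketch below. The AMOUNT>10000 test is just a made-up example condition; substitute whatever test fits your data:

```
<?if@row:AMOUNT>10000?>
<?attribute@incontext:border-bottom;'0.5pt solid #000000'?>
<?attribute@incontext:border-top;'0.5pt solid #000000'?>
<?attribute@incontext:border-left;'0.5pt solid #000000'?>
<?attribute@incontext:border-right;'0.5pt solid #000000'?>
<?end if?>
```

With if@row the whole row picks up the borders when the condition is true; without @row the attributes apply only to the cell containing the form field.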

    For the inquisitive reader, you're maybe wondering: how did this guy know that? And why the heck is this not in the user docs?
    Other than my all-knowing BIP guru status ;0) I hit the web for info on XSLFO cell border attributes and then the Template Builder for Word. Particularly the export option; I generated the XSLFO output from a test RTF template and took a look at the attributes. Then I started trying stuff out; I'm a hacker and proud of it! As for the user doc updates, I'll log a request for an update.

    Categories: BI & Warehousing

