Feed aggregator

SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language.

Kevin Closson - Fri, 2017-02-10 15:46

This is just a quick blog entry to showcase a few publications from IT vendors featuring SLOB. SLOB allows performance engineers to speak in short sentences. As I've pointed out before, SLOB is not used to test how well Oracle handles transactions. If you are worried that Oracle cannot handle transactions, then you have bigger problems than anything SLOB can test. SLOB is how you test whether, or how well, a platform can satisfy SQL-driven database physical I/O.

SLOB testing is not at all like using a transactional test kit (e.g., TPC-C). Transactional test kits are, first and foremost, kits for testing Oracle intrinsic code (the code of the server itself). Again, if you are questioning (testing) Oracle code then something is really wrong. Sure, transactional kits can involve physical I/O, but their ratio of CPU utilization to physical I/O is generally not conducive to testing even mid-range modern storage without massive compute capability.

Recent SLOB testing on top-bin Broadwell Xeons (E5-2699v4) shows that each core is able to drive over 50,000 physical read IOPS (db file sequential read). By contrast, 50,000 IOPS is about what one would expect from more than a dozen such cores with a transactional test kit, because the CPU is being used to execute Oracle intrinsic transaction code paths and, indeed, some sundry I/O.
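As a back-of-envelope illustration of that gap, the sizing arithmetic can be sketched as follows. The per-core rates simply restate the figures above; the sizing function itself is an illustration, not from any vendor document.

```python
# Cores needed to drive a target physical-read rate, using the per-core
# figures quoted above: ~50,000 read IOPS/core with SLOB vs. roughly a
# thirteenth of that with a transactional kit ("over a dozen" cores per 50K).
SLOB_IOPS_PER_CORE = 50_000
TXN_KIT_IOPS_PER_CORE = 50_000 // 13   # illustrative assumption

def cores_needed(target_iops: int, iops_per_core: int) -> int:
    """Minimum whole cores required to drive target_iops."""
    return -(-target_iops // iops_per_core)   # ceiling division

print(cores_needed(1_000_000, SLOB_IOPS_PER_CORE))      # SLOB-style testing
print(cores_needed(1_000_000, TXN_KIT_IOPS_PER_CORE))   # transactional kit
```

The point stands out immediately: driving one million read IOPS takes a modest core count with SLOB, but an order of magnitude more with a transactional kit.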

The following are links and screenshots from the likes of VMware, DellEMC, HPE, Nutanix, NetApp, Pure Storage, IBM, and Nimble Storage showing some of their SLOB use cases. Generally speaking, if you are shopping for modern storage optimized for Oracle Database, you should expect to see SLOB results.

For general SLOB information, please visit: https://kevinclosson.net/slob.

The first case is VMware showcasing VSAN with Oracle using SLOB at: https://blogs.vmware.com/apps/2016/08/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2.html.


VMware Using SLOB to Assess VSAN Suitability for Oracle Database

VMware has an additional publication showing SLOB results at the following URL: https://blogs.vmware.com/virtualblocks/2016/08/22/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2/

The VCE Solution guide for consolidating databases includes proof points based on SLOB testing at the following link: http://www.vce.com/asset/documents/oracle-sap-sql-on-vblock-540-solutions-guide.pdf.


VCE Solution Guide Using SLOB Proof Points


Next is Nutanix with this publication: https://next.nutanix.com/t5/Server-Virtualization/Oracle-SLOB-Performance-on-Nutanix-All-Flash-Cluster/m-p/12997

Nutanix Using SLOB for Platform Suitability Testing

NetApp has a lot of articles showcasing SLOB results. The first is at the following link: https://www.netapp.com/us/media/nva-0012-design.pdf.


NetApp Testing FlexPod Select for High-Performance Oracle RAC with SLOB

The next NetApp article entitled NetApp AFF8080 EX Performance and Server Consolidation with Oracle Database also features SLOB results and can be found here: https://www.netapp.com/us/media/tr-4415.pdf.

NetApp Testing the AFF8080 with SLOB

Yet another SLOB-related NetApp article entitled Oracle Performance Using NetApp Private Storage for SoftLayer can be found here:  http://www.netapp.com/us/media/tr-4373.pdf.

NetApp Testing NetApp Private Storage for SoftLayer with SLOB

Searching the main NetApp website, I find 11 articles that offer SLOB testing results:


Searching NetApp Website shows 11 SLOB-Related Articles

Hewlett-Packard Enterprise offers an article entitled HPE 3PAR All-Flash Acceleration for Oracle ASM Preferred Reads which models performance using SLOB. The article can be found here: http://h20195.www2.hpe.com/V2/getpdf.aspx/4AA6-3375ENW.pdf?ver=1.0


HPE Using SLOB For Performance Assessment of 3PAR Storage

In the Pure Storage article called Pure Storage Reference Architecture for Oracle Databases, the authors also show SLOB results. The article can be found here:



Pure Storage Featuring SLOB Results in Reference Architecture

Nimble Storage offers the following blog post with SLOB testing results: https://connect.nimblestorage.com/people/tdau/blog/2013/08/14.

Nimble Storage Blogging About Testing Their Array with SLOB

There is an IBM 8-bar logo presentation showing SLOB results here:  http://coug.ab.ca/wp-content/uploads/2014/02/Accelerating-Applications-with-IBM-FlashJAN14-v2.pdf.


IBM Material Showing SLOB Testing

I also find it interesting that folks contributing code to the Linux kernel include SLOB results showing the value of their contributions, such as here: http://lkml.iu.edu/hypermail/linux/kernel/1302.2/01524.html.


Linux Kernel Contributors Use SLOB Testing of Their Submissions

Next we see Red Hat disclosing Live Migration capabilities that involve SLOB workloads: https://www.linux-kvm.org/images/6/66/2012-forum-live-migration.pdf.


Red Hat Showcasing Live Migration with SLOB Workload

DellEMC has many publications showcasing SLOB results. This reference, however, simply recommends the best practice of SLOB testing before going into production:



DellEMC Advocates Pre-Production Testing with SLOB

An example of a detailed DellEMC publication showing SLOB results is the article entitled VMAX ALL FLASH AND VMAX3 ISCSI DEPLOYMENT GUIDE FOR ORACLE DATABASES which can be found here:




Figure 14: EMC Testing VMAX All-Flash with SLOB

I took a moment to search the main DellEMC website for articles containing the word SLOB and found 76 such articles!


Search for SLOB Material on DellEMC Main Web Page

More and more people are using SLOB. If you are into Oracle Database platform performance I think you should join the club! Maybe you’ll even take interest in joining the Twitter SLOB list: https://twitter.com/kevinclosson/lists/slob-community.

Get SLOB, use SLOB!





Filed under: oracle

Returning Nested Tables vs Returning Ref_Cursor

Tom Kyte - Fri, 2017-02-10 15:06
I have an Account table, Address Table and Contact Table. Each Account may have multiple addresses and contacts. I am using Stored Procedures. What would be the best way to return a list of accounts with all the addresses and contacts linked with the...
Categories: DBA Blogs

Regarding loading in to the database

Tom Kyte - Fri, 2017-02-10 15:06
I have a 56k chunk of data. When I load this data into my database using Toad, it takes almost 7-8 hours to get loaded. Can you please suggest some solution? When I use SQL Developer, some errors are thrown, like "task rolled back".
Categories: DBA Blogs

What is your salary ?

Tom Kyte - Fri, 2017-02-10 15:06
Tom, WHat is your salary? Thanks
Categories: DBA Blogs

Where to download version of oracle 7.2?

Tom Kyte - Fri, 2017-02-10 15:06
Tom, I'm sorry in advance for my question is not about complex topics on Oracle. I want to do some experiments with the old unix sco 6. Where to download version of oracle 7.2?
Categories: DBA Blogs

Your famous 4 table schema

Tom Kyte - Fri, 2017-02-10 15:06
Hello Tom, some time ago, I've read a very good article on "Ask Tom" about your database anti-pattern "4 table-schema". I use your thoughts to tell my junior engineers about how NOT to do database modelling and why. Now I've tried to find your...
Categories: DBA Blogs

please help me

Tom Kyte - Fri, 2017-02-10 15:06
Simply retrieving a table using a table name dynamically: DECLARE TABL_NAME VARCHAR2(255):='MY_DETAILS'; STRNG VARCHAR2(255); begin STRNG :='SELECT * FROM' || TABL_NAME; DBMS_SQL.PARSE (STRNG); END; I'm new to SQL, please help; why does this simple ...
Categories: DBA Blogs

Scan Listener for Single Instances

Tom Kyte - Fri, 2017-02-10 15:06
Hi, We have a bunch of <b><u>non</u></b>-RAC single instances (SE and EE). I'm thinking about still using a SCAN listener for these single instances as I could avoid any TNS configuration change when moving around databases between hosts. Is th...
Categories: DBA Blogs

Automating DevOps for the Oracle Database with Developer Cloud Service and SQLcl

Shay Shmeltzer - Fri, 2017-02-10 13:58

In the previous blog entry I showed how you can leverage Oracle Developer Cloud Service (DevCS) to manage the lifecycle of your database creation scripts (version management, branching, code reviews etc).

But how do you make sure that your actual database is in sync with the changes that you make in your scripts?

This is another place where DevCS can come to the rescue, with its built-in continuous integration functionality. Specifically, the new features for database integration include secure DB connection specification and leverage the powerful SQL Command Line (SQLcl) - the new command-line interface to the Oracle DB - which is built into the DevCS build servers.

In the video below I go through a process where checking in an SQL script change automatically initiates a build process that modifies a running database.

A few points to note:

  • For the sake of simplicity, the demo doesn't follow the recommended step of a code review before merging changes into the master branch (you can see how to do that here).
  • The demo automates running the build whenever a change to the scripts is made. You could also define a scenario where the build runs at a specific time every day - for example at 1am - and syncs the DB to that day's scripts.
  • You can further extend the scenario shown here of dropping and re-creating objects to add steps to populate the DB with new data and even run tests on the new database.
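For illustration, a build step like the one in the demo could drive SQLcl non-interactively to replay a checked-in script against the target database. The helper function, connection string, and script name below are assumptions for this sketch, not details taken from the demo.

```python
# Hypothetical build-step helper: construct the SQLcl invocation that replays
# a checked-in migration script. All names here are illustrative.
def sqlcl_command(user: str, password: str, connect: str, script: str) -> list:
    """Build a non-interactive SQLcl call: sql -S user/pwd@connect @script"""
    return ["sql", "-S", f"{user}/{password}@{connect}", f"@{script}"]

cmd = sqlcl_command("devuser", "secret", "dbhost:1521/pdb1", "recreate_objects.sql")
print(" ".join(cmd))
# On the build server, where SQLcl is on the PATH, this would be run with:
# subprocess.run(cmd, check=True)
```

In practice you would pull the credentials from the build job's secure connection configuration rather than hard-coding them.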

As you can see, Developer Cloud Service can be a very powerful engine for your database DevOps - and it is included for free with your Oracle Database Cloud Services - so give it a try.

DB Build

Categories: Development

Oracle Public Cloud: LIOPS with 4 OCPU in PaaS

Yann Neuhaus - Fri, 2017-02-10 13:44

In the latest post I ran a cached SLOB workload on Oracle Cloud IaaS to measure logical reads per second on a system covered by 2 processor licences (so 4 OCPUs). Just as a comparison, here is the same on Oracle PaaS Database as a Service.


The CPUs in PaaS are not exactly the same: E5-2690 v2 (3.00GHz) – it was E5-2699 v3 (2.30GHz) for my IaaS test.

[oracle@DBI122 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Stepping: 4
CPU MHz: 2992.874
BogoMIPS: 5985.74
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0,1
[oracle@DBI122 ~]$ cat /proc/cpuinfo | tail -26
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
stepping : 4
microcode : 0x428
cpu MHz : 2992.874
cache size : 25600 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips : 5985.74
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

So it seems that PaaS has a faster CPU (see the frequency and BogoMIPS), but nothing replaces a real test:


Here I've run 1 to 8 SLOB sessions as I did in the previous post, and here are the results:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 25.1 0.00 2.19
DB CPU(s): 1.0 25.1 0.00 2.18
Logical read (blocks): 611,210.2 15,357,878.4
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 40.1 0.00 7.71
DB CPU(s): 2.0 40.1 0.00 7.70
Logical read (blocks): 1,195,863.3 24,031,350.5
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 75.1 0.00 11.86
DB CPU(s): 3.0 75.0 0.00 11.84
Logical read (blocks): 1,720,446.4 43,208,149.8
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 70.7 0.00 11.78
DB CPU(s): 4.0 70.6 0.00 11.76
Logical read (blocks): 2,266,196.4 40,174,995.7
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 125.1 0.00 13.17
DB CPU(s): 5.0 124.9 0.00 13.15
Logical read (blocks): 2,802,916.0 70,385,892.6
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 90.1 0.00 15.80
DB CPU(s): 6.0 90.0 0.00 15.78
Logical read (blocks): 3,312,050.8 49,898,529.6
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 95.5 0.00 17.22
DB CPU(s): 7.0 95.3 0.00 17.18
Logical read (blocks): 3,812,912.2 52,225,112.1
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 141.3 0.00 16.45
DB CPU(s): 7.9 140.2 0.00 16.33
Logical read (blocks): 4,237,433.6 75,154,623.7

A faster CPU but fewer logical reads processed per second… Don't look only at the specs when choosing an instance type. Test it with your workload…
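As a quick cross-check of that conclusion, the per-session throughput implied by the excerpts above (LIOPS figures copied from the AWR reports) can be tabulated:

```python
# Logical reads/s from the eight AWR excerpts above, for 1..8 SLOB sessions
liops = [611_210, 1_195_863, 1_720_446, 2_266_196,
         2_802_916, 3_312_051, 3_812_912, 4_237_434]
for sessions, total in enumerate(liops, start=1):
    print(f"{sessions} session(s): {total / sessions:,.0f} LIOPS per session")
```

Per-session throughput declines from roughly 611K at one session to about 530K at eight, which is the scaling behaviour the comparison describes.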

Besides performance, I really like the Oracle Cloud PaaS for Database. You get easy provisioning (a few clicks) but still full access (root, grid, oracle, sysdba). There is no competitor on that. In other clouds, either you go IaaS and have to install and configure everything yourself, or you go PaaS and have very limited admin access. Here you have both.


The article Oracle Public Cloud: LIOPS with 4 OCPU in PaaS appeared first on Blog dbi services.

Weekly Link Roundup – Feb 10, 2017

Complete IT Professional - Fri, 2017-02-10 12:36
Here's a collection of interesting articles I've read this week. Articles I've Read: DBA Productivity and Oracle Database 12.2 https://juliandontcheff.wordpress.com/2017/02/09/dba-productivity-and-oracle-database-12-2/ Julian writes about automating the tasks a DBA does, and shows a chart demonstrating which tasks are performed manually vs. automated. He also covers some of the new automation features for DBAs in release 12cR2. […]
Categories: Development

Software Magazine on Managing the Web Content

WebCenter Team - Fri, 2017-02-10 09:02

This week Software Magazine published a feature on Web Content Management (WCM) and had industry leaders weigh in on the future of Web Content. Oracle executive, David Le Strat was quoted throughout the feature where he made the case for how the onus is on technology vendors to provide flexible, channel-agnostic content management systems so that organizations can easily scale and are able to create content once and publish anywhere, across any channel. With the proliferation of channels, technologies and different content types, you need to be able to drive a content management strategy that allows you to centrally manage content and deliver it consistently across channels.

Give the feature a read and let us know which camp you are in and whether you agree. We would love to hear from you.

ORA-00911 invalid character Solution

Complete IT Professional - Fri, 2017-02-10 05:00
Are you getting an “ORA-00911 invalid character” error when running an SQL statement? Find out what causes it and how to resolve it in this article. ORA-00911 Cause So, you’ve tried to run an SQL statement, such as INSERT or SELECT, and gotten this error: ORA-00911: invalid character Why did this happen? According to the […]
Categories: Development

Enabling Concurrent OBIEE RPD Development - for free

Rittman Mead Consulting - Fri, 2017-02-10 04:30

One of the most common and long standing problems with developing in OBIEE is the problem of multiple developers working on the RPD at the same time. This blog explains the problem and explores the solution that we’ve developed and have been successfully using at clients over the last couple of years. We’re pleased to announce the immediate availability of the supporting tools, as part of the Rittman Mead Open Source Project.
Before we get into the detail, I'll first explain a bit about the background to the requirement and the options that ship with OBIEE.

Why Concurrent Development

The benefits of concurrent development are obvious: scalability and flexibility. It enables you to scale your development team to meet the delivery demands of the business. The challenge is to manage all of the concurrent work and enable releases in a flexible manner - which is where source control comes in.

We couldn't possibly attempt to manage concurrent development on something as complex as the RPD without good version control in place. Source control (A.K.A. version control/revision control) systems like Git and Apache Subversion (SVN) are designed to track and retain all of the changes made to a code base so that you can easily backtrack to points in time where the code was behaving as expected. It tracks what people changed, who changed it, when they changed it and even why they made that change (via commit messages). They also possess merge algorithms that can automatically combine changes made to text files, as long as there are no direct conflicts on the same lines. Then there's added benefits with code branching and tagging for releases. All of this leads to quicker and more reliable development cycles, no matter what the project, so good in fact that I rely on it even when working as one developer. To (mis)quote StackOverflow, "A civilised tool for a civilised age".

All of these techniques are about reducing the risk during the development process, and saving time. Time spent developing, time spent fixing bugs, spent communicating, testing, migrating, deploying and just about every IT activity under the sun. Time that could be better spent elsewhere.

Out of the Box

Oracle provides two ways to tackle this problem in the software:

  • Online development, with check-in and check-out of repository objects
  • Multi-User Development (MUD)

However, I believe that neither of these is sufficient for high standards of reliable development and releases - the reasons for which I explore below (and have been described previously). Additionally, it is not possible to natively and fully integrate the RPD with version control, which again presents a significant problem for reliable development cycles.

Firstly the online check-in and check-out system does, by design, force all development to be conducted online. This in itself is not an issue for a single developer, and is in fact a practice that we advocate for ‘sandbox’ development in isolation. However, as soon as there is more than one developer on the same server it reduces development flexibility. Two developers cannot develop their solutions in isolation and can be made to wait for assets they want to modify to be unlocked by other developers. This may be workable for a small amount of developers but does not scale well. Furthermore, the risk of losing work is much higher when working online; we've all seen the infamous "Transaction Update Failed" message when saving online. This is usually because of an inconsistency in the RPD but can be caused by less obvious reasons and usually leads to repeating some redundant work. Lastly, very large RPDs like those from BI Apps or very mature enterprise deployments pose a problem when working online. They cause the Admin Tool to work very slowly because of the calls it has to make to the server, which can be frustrating for developers. To be clear, I am certainly not advocating developing without testing your code, but given the speed of uploading an RPD to the server and the fact that it can be automated, in my experience it is far more efficient to develop offline and upload frequently for testing.

The MUD system is much better and is quite close in methodology to what we recommend in this guide. The premise works on having a production-quality master RPD and then having many other individual developers with their own RPDs. The check-in and check-out system will automatically handle three-way merges to and from the master RPD when development changes are made. This is good in theory but has been maligned for years when used in practice. The version control system in MUD is not as robust as Git or SVN, for example, and the conflict resolution relies on developers managing their own issues, without the ability for a source master to intervene. Ultimately there is little flexibility in this method, which makes it difficult to use in the real world.

Source Controlling the RPD

Source control is another problem as the RPD is a binary file which cannot be merged or analysed by text-based comparison tools like SVN or Git. A potential solution at one point seemed to be MDS XML, a structured, textual representation of the RPD. However, this also seemed to have some drawbacks when put into practice. Whilst some use MDS XML with great success and there are tools on the market that rely on this system, here at Rittman Mead we’ve found that there are significant risks and issues with it. We’ve come up with what we believe is a robust, scalable, and flexible approach, based around the binary RPD.

The Rittman Mead Solution to Concurrent OBIEE RPD Development

A successful development lifecycle comes down to implementing the correct process and ensuring it is as quick and reliable as possible. Tools, like the ones described in this blog, can help in both of those areas but are not a substitute for detailed knowledge of the processes and the product. A key feature of this approach is the Source Master, who owns and is responsible for the overall development process. They will have a detailed understanding of the method and tools, as well as the specifics of the current and future work being done on the RPD. Things will go wrong - it is as inevitable as death and taxes - the key is to minimise the impact and frequency of these events.

The solution is based on the Gitflow method, which is one of the most established development models. The model is based on a few major concepts:

  • Features - Specific items of development, these are begun from the development branch and then merged back into development when complete.
  • Develop/Master Branches - Two branches of code, one representing the development stream, the other the production code.
  • Releases - A branch taken from development that is then eventually merged into production. This is the mechanism for getting the development stream into production.

I highly recommend reading that blog and this cheatsheet, as they explain the method excellently; what we've done here is support that model using binary RPDs and the 3-way merge facility in OBIEE. Also of relevance is this Rittman Mead blog, which describes some of the techniques we're explaining here. We've open-sourced some command-line tools (written in Python) to ease and automate the process. You can download the code from the GitHub repository; you need only an install of Python 2.7 and the OBIEE client to get started. The tooling works with both Git and Subversion (SVN). We recommend the use of Git, but realise that SVN is often embedded at organisations and so support that too.


This section shows a simple example of how you might use this methodology for multiple developers to work on the RPD in a reliable way. Many of the screenshots below show SourceTree, a GUI for Git of which I'm a fan for both its UI and its GitFlow support.

We have two developers in our team, Basil and Manuel, who both want to work on the RPD and add in their own changes. They already have an RPD they've made and are using with OBIEE, named base.rpd. First they initialise a Git repository, committing a copy of their RPD (base.rpd).


The production branch is called master and the development branch develop, following the standard naming convention for GitFlow.

Before we get started, let's a take a look at the RPD we're working with:


Simple Concurrent Development

Now Basil and Manuel both independently start features F01 and F02 respectively:

python obi-merge-git.py startFeature F01  
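Under the hood, startFeature is essentially the standard GitFlow branch creation. A rough sketch of the git plumbing such a wrapper performs (the real implementation lives in the obi-merge-git.py repository; this is an illustration only):

```python
# Illustrative sketch of the git operations behind a "startFeature" wrapper.
import subprocess

def git(repo, *args):
    """Run a git command against the given repository."""
    subprocess.run(["git", "-C", repo, *args], check=True)

def start_feature(repo, name):
    """Branch feature/<name> off develop, per GitFlow."""
    git(repo, "checkout", "develop")
    git(repo, "checkout", "-b", f"feature/{name}")
```

Each developer then edits the RPD offline in the Admin Tool and commits to their own feature branch.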

Each developer is going to add a measure column for Gross Domestic Product (GDP) to the logical fact table, but in different currencies. Basil adds "GDP (GBP)" as a logical column and commits it to his development branch, F01. Manuel does the same on his, adding "GDP (USD)" as a logical column and committing it to F02.


Now Basil finishes his feature, which merges his work back into the develop branch.


This doesn't require any work, as it's the first change to development to occur.

python obi-merge-git.py finishFeature F01

Checking out develop...  
Already on 'develop'

Merging feature/F01 into develop...  
Successfully merged feature/F01 to the develop branch.  

When Manuel does the same, there is some extra work to do. To explain what's happening we need to look at the 3 merge candidates in play, using the terminology of OBIEE’s 3-way merge functionality:

  • Original: This is the state of the development repository from when the feature was created.
  • Modified: This is your repository at the time of finishing the feature.
  • Current: This is the state of the development repository at the time of finishing the feature.

When Basil completed F01, the original and current RPDs were the same, so the develop RPD could simply be overwritten with the new one. Now, however, the original and current RPDs differ, so we need to resolve the changes. Our RPDs are binary files, so we need to use the 3-way merge from the Admin Tool. The Python script wrapped around this process uses Git's metadata to determine the appropriate merge candidates for invoking the OBIEE 3-way merge.
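The decision the script makes from those three candidates can be sketched as follows (a minimal illustration of the logic, not the actual obi-merge-git.py source):

```python
# Three-candidate decision when finishing a feature (illustrative logic only)
def finish_feature(original: bytes, modified: bytes, current: bytes) -> str:
    if original == current:     # develop untouched since branching:
        return "take modified"  # the feature RPD simply replaces it (Basil's F01)
    if original == modified:    # the feature changed nothing
        return "take current"
    return "3-way merge"        # patch/merge needed (Manuel's F02)

print(finish_feature(b"base", b"base+F01", b"base"))      # take modified
print(finish_feature(b"base", b"base+F02", b"base+F01"))  # 3-way merge
```

Only the last case requires the comparerpd/patchrpd machinery, and only a genuine conflict within it requires the Admin Tool.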


Since our changes do not conflict, this can happen automatically without user intervention. This is one of the critical differences from doing the same process in MDS XML, which would have thrown a Git merge conflict (two changes to the same logical table, and thus the same MDS XML file) requiring user intervention.

python obi-merge-git.py finishFeature F02

Checking out develop...  
Already on 'develop'

Merging feature/F02 into develop...  
warning: Cannot merge binary files: base.rpd (HEAD vs. feature/F02)

Creating patch...

        Patch created successfully.

Patching RPD...

        RPD patched successfully.

RPD Merge complete.

Successfully merged feature/F02 to the develop branch.  

In the background the script uses the comparerpd and patchrpd OBIEE commands.
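The shape of those two calls is roughly as follows. The flag names reflect our reading of the OBIEE documentation and should be treated as assumptions; verify them with comparerpd -? and patchrpd -? on your own install before relying on this sketch.

```python
# Illustrative shape of the two OBIEE utilities the script drives.
# Flag names are assumptions from the OBIEE docs, not verified here.
def comparerpd_cmd(pwd, current_rpd, original_rpd, patch_xml):
    # Diff the current RPD against the original, writing an XML patch file
    return ["comparerpd", "-P", pwd, "-C", current_rpd,
            "-W", pwd, "-G", original_rpd, "-D", patch_xml]

def patchrpd_cmd(pwd, modified_rpd, original_rpd, patch_xml, output_rpd):
    # Apply the patch to the modified RPD to produce the merged output RPD
    return ["patchrpd", "-P", pwd, "-C", modified_rpd,
            "-Q", pwd, "-G", original_rpd, "-I", patch_xml, "-O", output_rpd]

print(" ".join(comparerpd_cmd("Pass123", "current.rpd", "original.rpd", "patch.xml")))
print(" ".join(patchrpd_cmd("Pass123", "modified.rpd", "original.rpd", "patch.xml", "base.rpd")))
```

If patchrpd cannot apply the patch cleanly, the script falls back to the manual Admin Tool merge shown in the conflicted example below.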


Now our development branch has both features in, which we can see using the Admin Tool:


To get this into production we can start a release process:

python obi-merge-git.py startRelease v1.00  

This creates a new branch from develop that we can use to apply bug fixes to if we need to. Any changes made to the release now will be applied back into develop when the release is complete, as well as being merged into the production branch. The developers realise they have forgotten to put the new columns in the presentation layer, so they do it now in the release branch as a bugfix. In GitFlow, bugfixes are last-minute changes that need to be made for a release but do not interfere with the next development cycle, which may already have begun (in the develop branch) by the time the bug was spotted. The changes are merged back to develop as well as master so the fix isn't lost in the next cycle.


This is committed to the repo and then the release is finished:

python obi-merge-git.py finishRelease v1.00  


After the release we can see that the master and develop branches are at the same commit point, with a tag of the release name added in too. Additionally we can switch to the develop and master branches and see all of the changes including the columns in the presentation layer. The full commit history of course remains if we want to roll back to other RPDs.


Conflicted Development

Basil and Manuel start their new features, F03 and F04 respectively. This time they're working on the same existing column - something that a "Source Master" should have helped avoid, but missed this time. Basil edits the column formula of the "Area" column and renames it to "Area (sqm)", and Manuel does the same, naming his column "Area (sqFt)".


They both commit the changes to their own feature branches and Manuel merges his back to development with no problem.

python obi-merge-git.py finishFeature F04  

However, when Basil tries to finish his feature, the obvious conflict occurs: the automatic merge cannot resolve it without human intervention, since the same object in the RPD is affected by both changes. At this point, the script will open up the current RPD in the Admin Tool and tell Basil to merge his changes manually in the tool, going through the usual conflict resolution process. The script provides 3 RPDs to make the RPD-choosing step unambiguous:

  • original.rpd
  • modified.rpd
  • current.rpd (Opened)
python obi-merge-git.py finishFeature F03

Checking out develop...  
Already on 'develop'

Merging feature/F03 into develop...  
warning: Cannot merge binary files: base.rpd (HEAD vs. feature/F03)

Creating patch...

        Patch created successfully.

Patching RPD...

        Failed to patch RPD. See C:\Users\Administrator\Documents\obi-concurrent-develop\patch_rpd.log for details.

        Conflicts detected. Can resolve manually using the Admin Tool.

        Original RPD:   C:\\Users\\Administrator\\Documents\\rpd-test\a.rpd (original.rpd)
        Current RPD:    C:\\Users\\Administrator\\Documents\\rpd-test\c.rpd (Opened)
        Modified RPD:   C:\\Users\\Administrator\\Documents\\rpd-test\b.rpd (modified.rpd)

Perform a full repository merge using the Admin Tool and keep the output name as the default or C:\\Users\\Administrator\\Documents\\rpd-test\base.rpd

Will open RPD using the Admin Tool.

Press Enter key to continue.

You must close the AdminTool after completing the merge manually in order for this script to continue.  

When Basil hits a key, the Admin Tool opens up, and from here he needs to manually initiate the merge and specify the merge candidates. This is made easy by the script which automatically names them appropriately:


Note that the a, b and c RPDs are part of the merge logic with Git and can be ignored here.

Basil assigns the original and modified RPDs to the correct parts of the wizard and then resolves the conflict (choosing his change) in the next step of the wizard.


Upon closing the Admin Tool, the Git merge to the develop branch is automatically completed.

Now when they look in the development RPD they can see the column named "Area (sqm)", Basil's change having been accepted. Of course this is a trivial example, but because the method relies on the Admin Tool, it will be just as reliable as a manual 3-way merge performed in OBIEE.

In my experience, most of the problems with 3-way merging arise because developers get confused as to which candidates to choose, or lose track of the true original point from which both developers started working. This method eliminates both of those problems, with the added benefit of tight integration into source control. Even with an easier interface to the 3-way merge process, developers and the Source Master should be aware of some of the 'features' of OBIEE's 3-way merge. For example, a change in the physical layer that has no representation at all in the business or presentation layers may be lost during a 3-way merge. Another is that the merge rules are not guaranteed to stay the same between OBIEE versions, which means we cannot be certain our development lifecycle is stable after patching or upgrading OBIEE.

Given this, and as a core tenet of good software development practice, you should automatically test your RPDs after the merge and before release.

Testing the RPD

There are still issues with OBIEE RPD merging that aren't rectified by the 3-way merge and so must be handled manually if and when they occur; the loss of physical-only changes and the version-dependent merge rules mentioned above are two examples. Another thing I don't really like is the inherent bias the merge process has toward the modified RPD, instead of treating the modified and current RPDs equally. The merge candidates in the tool have been selected in such a way as to mitigate this problem, but I am wary it may have unforeseen consequences in some as yet untested scenarios. There may be other inconsistencies, but it is difficult to pin down all of the scenarios precisely, and that's one of the main stumbling blocks when managing a file as complex as the RPD. Even if we don't receive any conflicts, it is vital that RPDs are checked and tested (preferably automatically) before release.

The first step to testing is to create a representative test suite, encompassing as much of the functionality of your system in as few reports as possible. The reason is that it is often impractical, and sometimes invalid, to check the entire catalogue at once; moreover, the faster the testing phase, the quicker the overall release process. The purpose of a test suite is to take a baseline of each report's data against which we can validate consistency after making changes. This means your test suite should contain reports that are expected not to change after modifying the RPD. You also need to be careful that the underlying data of each report does not change between the baseline capture and the regression validation phases, otherwise you will invalidate your test.
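The baseline-and-compare idea can be sketched in a few lines: persist each report's result set at baseline time, re-run the same reports after the release, and flag any report whose data differs. The report names and data below are invented for illustration:

```python
def diff_reports(baseline, current):
    """Return the names of reports whose data changed since the baseline."""
    return sorted(
        name for name in baseline
        if current.get(name) != baseline[name]
    )

# Result sets captured before the RPD change...
baseline = {"Sales by Region": [["North", 100], ["South", 80]],
            "Stock Levels":    [["Widget", 42]]}
# ...and the same reports re-run afterwards.
current  = {"Sales by Region": [["North", 100], ["South", 80]],
            "Stock Levels":    [["Widget", 40]]}

changed = diff_reports(baseline, current)
```

Any report appearing in `changed` needs investigation: either the RPD change had an unintended side effect, or the underlying data moved between the two runs, which invalidates that test.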

In terms of tooling, Oracle provides the Baseline Validation Tool (BVT), which can be used outside of upgrades to perform automated regression tests. This is useful as it provides data checks as well as visual validation. Furthermore, it can be run against a specific Web/Presentation Catalog folder directly, rather than the whole system.

As well as Oracle’s BVT, we also have an in-house Regression Testing tool that was written prior to BVT’s availability, and is still used to satisfy specific test scenarios. Built in Python, it is part of a larger toolset that we use with clients for automating the full development lifecycle for OBIEE, including migrating RPDs and catalogue artefacts between environments.

This brings us to the last piece in the DevOps puzzle: continuous integration (CI), the practice of automatically testing and deploying code to a higher environment as soon as the work is complete. CI is not explicitly covered by the tools in this blog, but it would work nicely with the testing and migration scripts described above. It could all be made seamless by invoking the processes via script calls or, better, using Git hooks.
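Since a Git hook is just an executable file in .git/hooks, the glue can stay in Python. A hypothetical post-merge hook that hands off to a regression runner; the runner script name is an assumption for illustration, not part of the toolset described here:

```python
#!/usr/bin/env python
# Sketch of a .git/hooks/post-merge hook: trigger the RPD regression
# suite after merges into develop. The runner name is hypothetical.
import subprocess
import sys

def should_test(branch):
    """Only the shared develop branch warrants the full regression run."""
    return branch == "develop"

def hook(branch, runner="run_rpd_tests.py", dry_run=True):
    if not should_test(branch):
        return "skipped"
    if dry_run:  # demo mode: report what would happen, don't spawn anything
        return "would run " + runner
    return subprocess.call([sys.executable, runner])

outcome = hook("develop")
```

With `dry_run=False` the hook would actually invoke the regression suite, so a failed merge-and-test cycle surfaces immediately rather than at release time.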


The success of an OBIEE concurrent development approach comes down to two things: the tooling, and the rigorous implementation of the process - and it is the latter that is key. In this article I've demonstrated the tooling we've developed, along with the process required for a successful development method. Here at Rittman Mead we have detailed understanding of, and experience in, the process and framework necessary to implement it at any client, adapting and advising to ensure integration with existing in-house development and release requirements. The real world is messy and developers don't all work in the same way. A single tool in isolation is not going to succeed in making OBIEE - designed from the outset as a single-developer tool - scale to multiple developers. Instead of insisting that you change to accommodate our tool, we bring our tool and process and adapt them to suit you.

You can find the code used in this blog up on GitHub and if you would like to discuss how Rittman Mead can help implement concurrent OBIEE RPD development successfully at your organisation, please get in touch.

Categories: BI & Warehousing

Windows 10-Related EBS Certifications: February 2017 Edition

Steven Chan - Fri, 2017-02-10 02:05

Windows 10 logoE-Business Suite certifications with Microsoft Windows 10 have become hard to track.  This is partly due to the number of different things that run with Windows 10, including EBS components and browsers.  In addition, Microsoft is positioning Windows 10 as a "Windows as a service" offering, which has resulted in a series of recent changes to their release vehicles and offerings.

I've been covering these regularly. Here's a recap of everything related to EBS certifications on Windows 10 to date:

Windows 10, Java, and Edge

Microsoft Edge does not support plug-ins, so it cannot run Forms. We are working on an enhancement request, "Java Web Start", to get around this Edge limitation:

Categories: APPS Blogs

Links for 2017-02-09 [del.icio.us]

Categories: DBA Blogs

Workshop Apache Kafka – presentation and hands on labs for getting started

Amis Blog - Fri, 2017-02-10 00:09

The AMIS SIG session on Apache Kafka (9th February 2017) took 25 participants by the hand on a tour of Apache Kafka. Through presentations, demonstrations and a hands-on workshop, we provided a feet-hitting-the-ground-running introduction to Apache Kafka, with Kafka Streams as a bonus. The workshop was created and run by Maarten Smeets and Lucas Jellema.

All materials for the workshop are freely available. The sources and hands-on lab are available in a GitHub Repo: https://github.com/MaartenSmeets/kafka-workshop 

The workshop discusses Hello World with Kafka, interacting with Kafka from Java and Node.js, the Kafka REST proxy, Kafka Streams, under-the-hood topics (partitions, brokers, replication) and Kafka integration with Oracle Service Bus and Oracle Stream Analytics.


The presentation is also available from SlideShare: http://www.slideshare.net/lucasjellema/amis-sig-introducing-apache-kafka-scalable-reliable-event-bus-message-queue

The post Workshop Apache Kafka – presentation and hands on labs for getting started appeared first on AMIS Oracle and Java Blog.

Generate Trace Files

Tom Kyte - Thu, 2017-02-09 20:46
Hi, I am working on an Oracle database. I have PL/SQL packages which run from a few minutes to a couple of hours. My requirement is that for each package execution it should generate a trace file, and once it is generated, it sho...
Categories: DBA Blogs

How does Oracle know whether the index belongs to the primary key?

Tom Kyte - Thu, 2017-02-09 20:46
Consider the following two tables and their primary keys: <code> create table testuser.test1 ( col1 number not null, col2 number not null, col3 number not null ); alter table testuser.test1 add constraint test1_pk primary key ...
Categories: DBA Blogs

Query on time overlaps

Tom Kyte - Thu, 2017-02-09 20:46
I am struggling to merge continuous time ranges into one. Here is my record set.

STAFF_NUMBER  SHIFT_DATE  TASK_START_TIME   TASK_END_TIME
123           12/10/2016  12/10/2016 17:14  12/10/2016 20:10
123           12/10/2016  12/10/2016 20:08  12/10/2016 21:08
1...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator