Feed aggregator

Musings on Standby Database

Hans Forbrich - Wed, 2013-06-12 13:02
It seems that every few months there is a renewed discussion about whether you need to license your standby database, whether standby is Data Guard, whether Data Guard can be used with Oracle Database Server Standard Edition, whether we have to pay if we just apply redo at night, and similar questions.

Here is my response to that question:

-----

Standby is standby.  It is a technique to support disaster recovery.  And it is still called Standby Database, not Data Guard, even now.

For a long time, in order to automate the disaster recovery technique, people have written scripts.  For Oracle Database Server, Laurence To and the Oracle SPG group assembled a number of these scripts back with Oracle7 and Oracle8 and released them as an Enterprise Edition feature called Data Guard, which initially consisted only of the 'best practices' scripts.  The core feature was, and still is, available at no additional cost.

Data Guard has since progressed significantly and become more of a set of executables rather than scripts.  But the primary purpose is still to automate the steps of synchronizing the standby and automating the switchover/failover.

Standby is standby.  With Oracle Database Server, it consists of two databases: the first or primary actively handling transactions and query requests; the 'standby' being available to take over the load if the primary fails.

Over the years, we in the industry have refined the term to distinguish between Cold and Hot standby, the difference being in how much effort is involved, and how quickly the standby environment is available for use.

A Cold Standby environment may have the software installed, but the environment does not use any CPU cycles to keep the data in sync.  In general, bringing it into service will require some sort of restore from backups.  Since the Cold Standby does not use CPU cycles, Oracle has not traditionally charged for it.

A Hot Standby environment keeps the data in sync fairly closely to the primary.  The more similar the standby environment needs to be to the primary at the data and configuration level, the more it will cost to do that, and the more complicated the sync method needs to be.  The Hot Standby does use CPU cycles, and therefore must be licensed the same way as the primary unless you have an exception within YOUR Oracle license contract.

Oracle database server - whether Express Edition, Standard Edition or Enterprise Edition - has the ability to perform crash and media recovery from intact redo log files.  Oracle's hot standby capability is simply continuous media recovery.  However, that requires the redo information from the primary to be sent to the standby as it becomes available, and it requires the standby to apply that redo once it has arrived.

The Enterprise Edition feature called Data Guard is simply a 'guardian application' that detects when redo information is available, extracts it, transmits it, and controls its application at the standby system(s).  What it does can also be done manually, or through your own scripts.  Indeed, for Standard Edition, DbVisit (http://www.dbvisit.com) has created its own commercially available executable that does the same thing and more.
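
To make that concrete, here is a minimal sketch of one manual apply cycle of the kind such scripts automate.  The names are hypothetical, the primary's archived logs are assumed to have already been copied to the standby host by some OS-level transfer, and the exact commands vary with version and configuration:

-- On the standby instance, built from a backup with a standby controlfile:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER STANDBY DATABASE;
-- SQL*Plus prompts for each archived log; reply AUTO to apply the
-- suggested sequence, and CANCEL once the copied logs are exhausted.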

Data Guard has been enhanced to allow several 'levels' of similarity, from "minimum data loss" through "absolutely no loss permitted".  What used to be scripts is now a set of compiled executables with many test points and with the ability to control the database server.

And the database kernel has been modified to allow the standby server to be opened read-only while applying the redo information, which may happen under the control of the Data Guard application.  This is called Active Data Guard, and it DOES require additional licenses.
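
As a rough sketch of what that looks like on the standby (assuming managed recovery has first been cancelled and standby redo logs are configured), it is essentially:

ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;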


Also check out the Software Investment Guide at http://www.oracle.com/us/corporate/pricing/index.html

And remember: the final authority is Oracle, not me.  "I read it on the Internet" is a shoddy defense in a contract dispute and will likely NOT be accepted by the Judge in a Court of Law.
Categories: DBA Blogs

Improving data move on EXADATA V

Mathias Magnusson - Wed, 2013-06-12 07:00

Wrap-up

This is the last post in this series and I'll not introduce anything new here, but rather just summarise the changes explained in the earlier posts and talk a bit about the value the solution delivers to the organisation.

Let’s first review the situation we faced before implementing the changes.

The cost of writing the log-records to the database was that the parallel writes from many different sources introduced such severe bottlenecks that the logging feature had to be turned off for days at a time. This was not acceptable, but rather than shutting down the whole system, which would put lives in immediate danger, it was the only option available. Even if the writes had been fast enough, the moving of data was taking over twice the time available, and it was fast approaching the point where data written in 24 hours would take more than 24 hours to move to the historical store for log-data. That would of course have resulted in an ever growing backlog even if the data move ran 24×7. On top of that, the data took up 1.5 TB of disk space, costing a lot of money and raising concerns about our ability to move it to EXADATA.

To resolve the contention that was crippling the overall system during business hours, we changed the table setup to have no primary keys, no foreign keys and no indexes. We also made the tables partitioned such that we get one partition per day.
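
As a rough illustration (not the project's actual DDL, and with hypothetical names), a daily-partitioned log table of that kind could look something like the following, using interval partitioning so that a new partition appears automatically each day:

-- No primary key, no foreign keys and no indexes: exactly what removed
-- the write contention described above.
CREATE TABLE log_operational (
  log_time  TIMESTAMP NOT NULL,
  source_id NUMBER,
  message   VARCHAR2(4000)
)
PARTITION BY RANGE (log_time)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2013-01-01 00:00:00'));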

To make the move from operational tables to historical tables faster, we opted to have both in the same instance on EXADATA. This allowed us to use partition exchange to swap the partition out of the operational table and into the historical table. This took just a second, as all we did was update some metadata recording which table each partition belongs to. Note that this ultra-fast operation replaced a process that used to take around 16 hours in a window of only 6.5, and the time it took was expanding as the business grew.
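
The mechanics of that swap are a plain partition exchange. A minimal sketch, again with hypothetical names, assuming an empty staging table with a matching column layout:

-- Swap the day's partition out of the operational table via an empty
-- staging table, then into the historical table; only data dictionary
-- metadata is updated, no rows are moved.
ALTER TABLE log_operational
  EXCHANGE PARTITION p_20130611 WITH TABLE log_stage WITHOUT VALIDATION;
ALTER TABLE log_history
  EXCHANGE PARTITION p_20130611 WITH TABLE log_stage WITHOUT VALIDATION;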

Finally, to reduce the space consumed on disk we used HCC – Hybrid Columnar Compression. This is an EXADATA-only feature for compressing data such that columns with repeating values get a very good compression ratio. We went from 1.5 TB to just over 100 GB. This means that even with no purging of data it would take us over five years just to get back to the amount of storage this used to require.

So in summary

  • During business hours we now use 20% of the computing power it used to take, and even less of the wall clock time.
  • The time to move data to the historical store was reduced from around 16 hours to less than one second.
  • Disk space requirement was reduced from 1.5 TB to just over 100 GB.

And all of this was done without changing one line of code. In fact, there was no rebuild, no configuration change, nor anything else needed to make this drastic improvement work with all the different systems that were writing to these log-tables.

One more thing to point out here is that all these changes were done without using traditional SQL. The fact that it is an RDBMS does not mean that we have to use SQL to resolve every problem. In fact, SQL is often not the best tool for the job. It is also worth noting that these kinds of optimisations cannot be done by an ORM; that is not what they do. This is what your performance or database architect needs to do for you.

For easy lookup, here are links to the posts in this series.

  1. Introduction
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up (this post)

Custom OSB Reporting Provider

Edwin Biemond - Tue, 2013-06-11 15:53
With the OSB Report Action we can add some tracing and logging to an OSB Proxy. This works OK, especially when you add some Report keys, for single-Proxy projects, but when you have projects with many Proxies that invoke other JMS or Local Proxies, the default reporting tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) in the SOA Suite soainfra schema are not so handy. I want to

New OTN Interface

Michael Armstrong-Smith - Mon, 2013-06-10 17:32
If you are a user of OTN (Oracle Technology Network) you should have noticed that there is a new interface. I think it's pretty cool. What do you think?

Windows Surface RT: There is a desktop?!

Dietrich Schroff - Mon, 2013-06-10 14:21
Last week I had the opportunity to use a Windows Surface RT tablet. For several months I have been using a Windows 8 laptop, which is equipped with a touchscreen. So I am used to the new tiles interface, and I knew that, at least on laptops, you have the desktop applications and the "tiles" applications.
I was curious how it feels to work without having "two worlds" of applications on one device...
But it was a big surprise for me: if you install Microsoft Office on an RT device, you get the desktop back:
On a tablet with a display size of 10.6 inches (27 cm)? I tried to write a document, and it wasn't easy to hit the right icons...
When saving the document, I was astonished to get a file chooser. From my Nexus 7, I was used to seeing no folder structures or similar things:
There are two problems with these desktop applications: if not in full-screen mode, you have to work with really small windows and resizing is very difficult, and
the applications are not shown in the application bar:
They are all summarized under Desktop... There is no way to switch directly to your Word application or PowerPoint.  You have to go to the desktop and then choose your Office application....

What's new in EBS 12.2?

Famy Rasheed - Mon, 2013-06-10 02:24

Measuring the time left

Rob Baillie - Sun, 2013-06-09 08:30
Burn-down (and burn-up, for that matter) charts are great for those that are inclined to read them, but some people don't want to have to interpret a pretty graph; they just want a simple answer to the question "How much will it cost?"

That is: if, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days and that gives us a good idea of where we are.

The basis

It starts with certain assumptions:

You are using stories.
OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that, you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work, as early as possible, so you can account for them in your numbers.

Your customer team is used to writing stories of the same size.
When your customer team add stories to the mix, you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.

You estimate using a numeric value.
It doesn't matter if you use days of work, story points or function points, as long as it is expressed as a number, and something estimated to take 2 of your unit is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have, and it'll make your life much harder.

Your developers quickly estimate the bulk of the work before anything is started.
This is not to say that the whole project has a Gandalf-like startup: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.

Your developers produce consistent estimates.
Not that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the purposes of this calculation.

You keep track of time spent on your project.
Seriously, you do this, right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that spent fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.

If you have all these bits then you've got something that you can work with...

The calculation

The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 
The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are still to be done.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Using these we can work out:

Time remaining on work that has been estimated by the development team

For this we use a simple calculation based on the previous accuracy of the estimates.
This includes taking into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.


estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )

Time remaining on work that has not been estimated by the development team

In order to calculate this, we rely on the assumption that the customer team have got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate the work though.

averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost


averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 

Total predicted time remaining

The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate against the things the development team haven't yet estimated themselves.

totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects 
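
As a purely hypothetical worked example (all numbers invented for illustration): suppose 200 days have been spent in total, 40 of them on defects; 25 stories originally estimated at 160 days have been completed and released; 120 days of estimated work remain; and there are 10 un-estimated stories and 5 un-estimated defects. Then:

estimateAccuracy = 200 / 160 = 1.25
predictedTimeRemainingOnEstimatedWork = 120 * 1.25 = 150
averageStoryCost = 200 / 25 = 8
predictedTimeRemainingOnUnEstimatedStories = 10 * 8 = 80
averageDefectCost = 40 / 25 = 1.6
predictedTimeRemainingOnUnEstimatedDefects = 5 * 1.6 = 8
totalPredictedTimeRemaining = 150 + 80 + 8 = 238 days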

The limitations

I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.

Further work

I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, so here are some slight alterations I might suggest, if you think they'd work for you.

Estimating number of defects not yet found.
It seems reasonable for you to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found.  These could then be included in your calculation based on the average cost of defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team.  I'd call anything over about 20% of your time being spent fixing defects high.
Using the estimate accuracy of previous projects at the start of the new one.
As I pointed out earlier, a limitation of this method is the fact that you have limited information at the start of the project, and so you can't rely on the numbers being generated for some time.  A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this one.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.

*1 Semi-agile: I'd define this as an environment where the development of software is performed in a fully agile manner, but the senior decision makers still rely on business case documentation, project managers and meeting once a month for updates.

Oracle A-Team Chronicles Live!

Ramkumar Menon - Fri, 2013-06-07 13:58
Oracle Fusion Middleware A-Team is Oracle's SWAT Team of Solution Architects with extensive expertise in Oracle's Integration Product Stack. Their new Chronicles Portal is live at http://www.ateam-oracle.com/integration/. It contains several articles describing Best Practices, Tips & Tricks, and Architectural Guidance for SOA, BPM, and several other subjects. It's a Must-Bookmark Portal. Happy Reading!

Looking Forward to Kscope13

Look Smarter Than You Are - Thu, 2013-06-06 23:17
On June 9, the rates for Kscope13 go up $300 per person (basically, you're now up to the last-minute, I-don't-know-why-I-waited-but-now-it-costs-a-lot-more price).  If you haven't registered yet for what is by far the best Oracle EPM, BI, Hyperion, Business Analytics, Essbase, etc. conference in the world, go right now to kscope13.com and register.  It'll be the best training experience of the year: you're basically getting 4.5 days of training that you won't see anywhere else the entire year... for the price of 2 days of training at an Oracle training center.

And when you register, don't forget to use promo code IRC to save $100 off whatever the current rate is.

The conference is June 23-27 in New Orleans, though my favorite day is always the opening Sunday, so make sure you fly in Saturday night.  On Sunday, they turn the sessions over to the Oracle Development team to talk about everything they have planned for the next 1-3 years.  It's the one time each year that you can hear right from the people who are building it what you're going to be seeing in the future.  There's generally an hour on each major product line (an hour on Essbase, an hour on Hyperion Planning, an hour on mobile BI, etc.).  The keynote this year is Balaji Yelamanchili, the head of BI and EPM development at Oracle.  My only semi-complaint about this year's BI/EPM Symposium is that there's so much content that they're splitting it into three concurrent symposiums: Business Intelligence, EPM, and a special symposium for the EPM business users.

This year will be somewhat bittersweet for me since I am no longer actively involved with the chairing of the conference.  This means that I get to focus on going to sessions, learning things, playing/leading Werewolf games, and of course, presenting a few sessions.  Here are the ones I'm personally teaching:


  • Using OBIEE to Retrieve Essbase Data:  The 7 Steps You Won’t Find Written Down.  This is in the BI track and it's basically all the quirks about connecting OBIEE to Essbase in a way that uses the strengths of each product.
  • What’s New in OBIEE 11.1.1.7: Oracle on Your iPhone & Other Cool Things.  This is also in the BI track and it's an overview of all the things that people will like in 11.1.1.7 (for both Hyperion and relational audiences).
  • Everything You Know About Essbase Optimization  is Incomplete, Outdated, Or Just Plain Wrong.  This is in the Essbase track and it's the one I'm most looking forward to delivering, because I get to break all of the optimization rules we all have been accepting as gospel for close to 20 years.
  • Learn From Common Mistakes: Pitfalls to Avoid In Your Hyperion Planning Implementation.  This is a vendor presentation hosted by interRel.  I get to sit on the panel and answer Planning questions from the audience while talking about blunders I've seen during Planning implementations.  It should be fun/rousing.  Since it's all interRel, I wouldn't be surprised if a few punches were thrown or at minimum, a few HR violations were issued.
  • Innovations in BI:  Oracle Business Intelligence against Essbase & Relational (parts 1 and 2).  This is also in the BI track (somehow I became a BI speaker???) and I'm co-presenting this session with Stewart Bryson from Rittman Mead.  We'll be going over OBIEE on Essbase on relational and compare it to OBIEE on relational directly.  Stewart is a long-time friend and Oracle ACE for OBIEE, so it should let us each showcase our respective experiences with Essbase and OBIEE in a completely non-marketing way.
  • CRUX (CRUD meets UX): Oracle Fusion Applications Functional UI Design Patterns in Oracle ADF.  This is in the Fusion track and I'll be talking about how to make a good user interface as part of the user experience of ADF.  No, this doesn't have a thing to do with Hyperion.
I am looking forward to all the wacky, new things Mike Riley (my replacement as Conference Chair for Kscope) has in store.  My first Kscope conference was in New Orleans in 2008 (back when they called it Kaleidoscope and no one was quite sure why it wasn't "i before e"), so this is a homecoming of sorts, albeit with 8 times as many sessions on Oracle BI/EPM.  If you're there (and let's face it, all the cool kids will be), stop by the interRel booth and say "hi."  It's the only 400-square-foot booth, so it shouldn't be hard to find.
Categories: BI & Warehousing

EM12c Disk Busy Alert for Oracle Database Appliance V1 & X3-2

Fuad Arshad - Wed, 2013-06-05 10:03
Oracle just published Document ID 1558550.1, which talks about an issue that I've had an SR open about for 6 months now.
It is due to a Linux iostat bug, BUG: 1672511 (unpublished) - oda - disk device sdau1 & sdau2 are 100% busy due to avgqu-sz value.
This forces host-level monitoring to report Critical Disk Busy alerts. This bug will be fixed in an upcoming release of the Oracle Database Appliance software.
The workaround is to disable the Disk Activity Busy alert in EM12c. After the issue is resolved, the user then has the responsibility of remembering to re-enable this alert.

The alert in the document makes me laugh though:

Note:  Once you apply the iostat fix through an upcoming ODA release, make sure that you re-enable this metric by adding the Warning and Critical threshold values and applying the changes.
 

Improving data move on EXADATA IV

Mathias Magnusson - Wed, 2013-06-05 07:00

Reducing storage requirements

In the last post in this series I talked about how we sped up the move of data from operational to historical tables from around 16 hours down to just seconds. You find that post here.

The last area of concern was the amount of storage this took and would take in the future. As it was currently taking 1.5 TB it would be a fairly large chunk of the available storage and that raised concerns for capacity planning and for availability of space on the EXADATA for other systems we had plans to move there.

We set out to see what we could do both to estimate the maximum disk utilisation this data would reach and to minimise the disk space needed. There were two considerations: minimise disk utilisation while at the same time not making query times any worse. Both of these were of course to be achieved without adding a large load to the system, especially not during business hours.

The first attempt was to just compress one of the tables with traditional table compression. After running the test across the set of tables we worked with, we saw a compression ratio of 57%. Not bad, not bad at all. However, we were now going to be using an EXADATA. One of the technologies that is EXADATA-only (to be more technically correct, only available with Oracle-branded storage) is HCC. HCC stands for Hybrid Columnar Compression. I will not explain how it differs from normal compression in this post, but as the name indicates, the compression is organised around columns rather than around rows as in traditional compression. This can achieve even better results; at least that is the theory, and the marketing for EXADATA says that this is part of its magic sauce. Time to take it out for a spin.

After having set it up for our tables, with exactly the same content as we had with the normal compression, we got a compression rate of 90%. That is, 90% of the needed storage was eliminated by using HCC. I tested the different options available for the compression (query high and low as well as archive high and low), and ended up choosing query high. My reasoning was that the improvement in compression rate of query high over query low was well worth the extra processing power needed. I got identical results for query high and archive low: they took the same time, resulted in the same size dataset, and querying took the same time. I could not tell that they were different in any way. Archive high, however, is a different beast. It took about four times the processing power to compress, and querying took longer and used more resources too. As this is a dataset I expect the users to want to run more and more queries against once they see that it can be done in a matter of seconds, my choice was easy: query high was easily the best for us.

How do we implement it then? Setting a table to compress query high and then running normal inserts against it does not achieve a lot. There are some savings, but they are marginal compared to what can be achieved. For HCC to kick in, we need direct-path writes to occur. As this data is written once and never updated, we can compress everything once the processing day is over. Thus, we set up a job to run thirty minutes past midnight that compresses the previous day's partition. This is just one line added to the job that does the move of the partitions described in the previous post in this series.
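
As a rough sketch of what that one line can look like (table and partition names are hypothetical, not the project's actual ones), the nightly job rebuilds the previous day's partition with a direct-path move:

-- Rebuild the previous day's partition compressed with HCC query high;
-- the MOVE is a direct-path operation, which is what lets HCC kick in.
ALTER TABLE log_history
  MOVE PARTITION p_20130611
  COMPRESS FOR QUERY HIGH;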

Compressing one very active day takes less than two minutes. In fact, the whole job to move and compress has run in less than 15 seconds for each day's compression since we took this solution live a while back. That is time well worth spending for the 90% saving in disk consumption we achieve.

It is worth noting that while HCC is an EXADATA feature not available in most Oracle databases, traditional compression is available. Some forms of it require licensing, but it is there, so while you may not get the same ratio as described in this post, you can still get a big reduction in disk space consumption using the compression method available to you.

With this part, the last piece of the puzzle fell into place and there were no concerns left with the plan for fixing the issues the organisation had with managing this log data. The next post in this series will summarise and wrap up what was achieved with the changes described in the series.


Salesforce.com Real Time integration with Oracle using Informatica PowerCenter 9.5

Kubilay Çilkara - Tue, 2013-06-04 17:03
In this post I will describe how you can integrate your Salesforce.com org with a relational database like Oracle in real time, or rather 'near' real time!

Many times I come across the requirement of quickly propagating changes from cloud platforms like Salesforce.com to on-premise data stores. You can do this with webservices, but that is not middleware and it requires coding.

How about doing it with a data integration tool?

Informatica Corporation's Informatica PowerCenter can achieve this by using the CDC (Change Data Capture) feature of the Informatica PowerCenter Salesforce connector, when Salesforce is the source in a mapping.

The configuration is simple. All you really have to set up is 2 properties in the Mapping Tab of a Session Task in Informatica Workflow Manager.

These are the properties:
  • Time Limit property to -1
  • Flush interval property to 60 seconds (minimum 60 seconds)
See a picture from one of my settings

And here is what these two settings mean from the PowerCenter PowerExchange for Salesforce.com User Guide:

CDC Time Limit

Time period (in seconds) that the Integration Service reads changed Salesforce data. When you set the CDC Time Limit to a non-zero value, the Integration Service performs a full initial read of the source data and then captures changes to the Salesforce data for the time period you specify. Set the value to -1 to capture changed data for an infinite period of time. Default is 0. 

Flush Interval

Interval (in seconds) at which the Integration Service captures changed Salesforce data. Default is 300. If you set the CDC Time Limit to a non-zero value, the Integration Service captures changed data from the source every 300 seconds. Otherwise, the Integration Service ignores this value.


That's it, you don't have to configure anything else!

Once you set up these properties in the mapping tab of a session, and save and restart the task in the workflow, the task will run continuously, non-stop. The connector will poll the Salesforce org continuously and propagate any changes you make in Salesforce downstream to the on-premise database system, including INSERT, UPDATE and DELETE operations.

Enjoy!

More reading:

SFDC CDC implementation in Informatica PowerCenter




Categories: DBA Blogs

Webcast Series - What's New in EPM 11.1.2.3 and OBIEE 11.1.1.7

Look Smarter Than You Are - Tue, 2013-06-04 10:56
Today I'm giving the first presentation in a 9-week long series on all the new things in Oracle EPM Hyperion 11.1.2.3 and OBIEE 11.1.1.7.  The session today (and again on Thursday) is an overview of everything new in all the products.  It's 108 slides which goes to show you that there's a lot new in 11.1.2.3.  I won't make it through all 108 slides but I will cover the highlights.

I'm actually doing 4 of the 9 weeks (and maybe 5, if I can swing it).  Here's the complete lineup in case you're interested in joining:

  • June 4 & 6 - Overview
  • June 11 & 13 - HFM
  • June 18 & 20 - Financial Close Suite
  • July 9 & 11 - Essbase and OBIEE
  • July 16 & 18 - Planning
  • July 23 & 25 - Smart View and Financial Reporting
  • July 30 & Aug 1 - Data & Metadata Tools (FDM, DRM, etc.)
  • Aug 6 & 8 - Free Supporting Tools (LCM, Calc Mgr, etc.)
  • Aug 13 & 15 - Documentation

If you want to sign up, visit http://www.interrel.com/educations/webcasts.  There's no charge and I don't do marketing during the sessions (seriously, I generally forget to explain what company I work for).  It's a lot of information, but we do spread it out over 9 weeks, so it's not information overload.

And bonus: you get to hear my monotone muppet voice for an hour each week. #WorstBonusEver
Categories: BI & Warehousing

TROUG 2013 DW/BI SIG

H.Tonguç Yılmaz - Tue, 2013-06-04 04:21
Hello, our second Turkish Oracle Users Group BI/DW special interest group meeting will take place on 21 June at İTÜ Maslak. The draft plan is as follows: 09:00 – 09:30 Registration and opening; 09:30 – 10:15 Ersin İhsan Ünkar / Oracle Big Data Appliance & Oracle Big Data Connectors – Hadoop Introduction; 10:30 – 11:15 Ferhat Şengönül / Exadata TBD […]

Free Course on ADF Mobile

Bex Huff - Mon, 2013-06-03 15:30

Oracle came out with a clever new online course on Developing Applications with ADF Mobile. I really like the format: it's kind of like a presentation, but with video of the key points and code samples. There's also an easy-to-navigate table of contents on the side so you can jump to the topic of interest.

I like it... I hope the ADF team continues with this format. It's a lot better than a jumble of YouTube videos ;-)

Categories: Fusion Middleware

e_howdidIdeservethis

Oracle WTF - Sat, 2013-06-01 02:51

A friend has found himself supporting a stack of code written in this style:

DECLARE
   e_dupe_flag EXCEPTION;
   PRAGMA EXCEPTION_INIT(e_dupe_flag, -1);

BEGIN
   ...

EXCEPTION
   WHEN e_dupe_flag THEN
      RAISE e_duplicate_err;

  etc...

Because, as he says, coding is not hard enough.

This reminded me of one that was sent in a while ago:

others EXCEPTION;

"I didn't know you could do that" adds our correspondent.
