Feed aggregator

June 2013 Critical Patch Update for Java SE Released

Oracle Security Team - Tue, 2013-06-18 14:51

Hello, this is Eric Maurice again.

Oracle today released the June 2013 Critical Patch Update for Java SE.  This Critical Patch Update provides 40 new security fixes.  37 of these vulnerabilities are remotely exploitable without authentication.

34 of the fixes included in this Critical Patch Update address vulnerabilities that only affect client deployments.  The highest CVSS Base Score for these client-only fixes is 10.0. 

4 of the vulnerabilities fixed in this Critical Patch Update can affect both client and server deployments.  The most severe of these vulnerabilities has received a CVSS Base Score of 7.5. 

One of the vulnerabilities fixed in this Critical Patch Update affects the Java installer and can only be exploited locally. 

Finally, one of the fixes included in this Critical Patch Update affects the Javadoc tool and the documents it creates.  Some HTML pages that were created by any 1.5 or later versions of the Javadoc tool are vulnerable to frame injection.  This means that this vulnerability (CVE-2013-1571, also known as CERT/CC VU#225657) can only be exploited through Javadoc-generated HTML files hosted on a web server.  If exploited, this vulnerability can result in granting a malicious attacker the ability to inject frames into a vulnerable web page, thus allowing the attacker to direct unsuspecting users to malicious web pages through their web browsers.  This vulnerability has received a CVSS Base Score of 4.3.  With the release of this Critical Patch Update, Oracle has fixed the Javadoc tool so that it doesn’t produce vulnerable pages anymore, and additionally produced a utility, the “Java API Documentation Updater Tool,” to fix previously produced (and vulnerable) HTML files.  More information about this vulnerability is available on the CERT/CC web site at http://www.kb.cert.org/vuls/id/225657. 

Oracle recommends that this Critical Patch Update be applied as soon as possible because it includes fixes for a number of severe vulnerabilities.  Note that the vulnerabilities fixed in this Critical Patch Update affect various components and, as a result, may not affect the security posture of all Java users in the same way. 

Desktop users can leverage the Java Autoupdate or visit Java.com to ensure that they are running the most recent version.  As a reminder, security fixes delivered through the Critical Patch Update for Java SE are cumulative: in other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.


For More Information:

The Advisory for the June 2013 Critical Patch Update for Java is located at http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html

More information about the Javadoc tool is available at http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html

ODTUG KScope 2013 : I'm Speaking

Luc Bors - Tue, 2013-06-18 03:52

error on line 1 at column 1: Document is empty

Dave Best - Mon, 2013-06-17 15:38
I'm not sure why this happens but every now and then the invalidator password gets corrupted.  When that happens, the following error will be seen when you try to access portal: This page contains the following errors:

agent deployment error in EM 12c

Amardeep Sidhu - Sun, 2013-06-16 12:04

Yesterday I was configuring EM 12c for a Sun Super Cluster system. There were a total of 4 LDOMs where I needed to deploy the agent (Setup –> Add targets –> Add targets manually). Out of these 4, everything went fine for 2 LDOMs but for the other two it failed with an error message. It didn't give many details on the EM screen but rather gave a message to try to secure/start the agent manually. When I tried to do that manually, the secure agent part worked fine but the start agent command failed with the following error message:

oracle@app1:~$emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Starting agent ………………………………………………………. failed.
HTTP Listener failed at Startup
Possible port conflict on port(3872): Retrying the operation…
Failed to start the agent after 1 attempts.  Please check that the port(3872) is available.

I thought that there was something wrong with the port, so I cleaned the agent installation, made sure that the port wasn't being used and did the agent deployment again. This time it failed again with the same message, but it reported a different port number, i.e. 1830, the agent port:

oracle@app1:~$emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Starting agent ……………………………………………. failed.
HTTP Listener failed at Startup
Possible port conflict on port(1830): Retrying the operation…
Failed to start the agent after 1 attempts.  Please check that the port(1830) is available.

Again I checked a few things but found nothing wrong. All the LDOMs had a similar configuration, so what worked for the other two should have worked for these two as well.
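One quick sanity check when an agent start fails like this is to probe the reported port directly. A minimal sketch in Python (ports 3872 and 1830 are taken from the error messages above; the host is assumed to be the local machine):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

# The two agent ports reported in the error messages.
for port in (3872, 1830):
    print(port, "in use" if port_in_use(port) else "free")
```

In this case the ports were actually free, which suggested the "port conflict" message was a red herring.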

Before starting the installation I had noted the LDOM hostnames and IPs in a notepad file and had accidentally swapped the IPs of two LDOMs (these two, in fact). I later spotted that and corrected it. While looking at the notepad file it occurred to me that the same mistake could have made it into /etc/hosts on the server where EM is deployed. Sure enough, that is what it was: while making the entries in /etc/hosts on the EM server, I had copied them from the notepad, wrong entries and all. The IPs for these two LDOMs were swapped with each other, and that was causing the whole problem.

I deinstalled the agent, corrected /etc/hosts and tried to deploy again… all worked well!
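Since the root cause was swapped /etc/hosts entries, a small cross-check of the hosts file against an authoritative hostname-to-IP list would have caught it before deployment. A sketch, with hypothetical LDOM names and addresses:

```python
def parse_hosts(text):
    """Map hostname -> IP from /etc/hosts-style text, ignoring comments and blanks."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip
    return mapping

def find_mismatches(hosts_map, authoritative):
    """Hostnames whose IP in the hosts file disagrees with the authoritative list."""
    return {name: (ip, authoritative[name])
            for name, ip in hosts_map.items()
            if name in authoritative and authoritative[name] != ip}

# Hypothetical example: the EM server's hosts file has ldom1 and ldom2 swapped.
em_hosts = parse_hosts("""
10.0.0.12  ldom1
10.0.0.11  ldom2
10.0.0.13  ldom3
""")
authoritative = {"ldom1": "10.0.0.11", "ldom2": "10.0.0.12", "ldom3": "10.0.0.13"}
print(find_mismatches(em_hosts, authoritative))
# -> {'ldom1': ('10.0.0.12', '10.0.0.11'), 'ldom2': ('10.0.0.11', '10.0.0.12')}
```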

Categories: BI & Warehousing

Oracle NoSQL Database Contest Announced

Charles Lamb - Fri, 2013-06-14 10:16
As many of you know, the Oracle NoSQL Database was advanced into the pole position of the NoSQL landscape through its underlying use of BerkeleyDB as its core storage engine.  Oracle's NoSQL Database is an advanced cluster management, load balancing, parallelization and replication layer that turns BerkeleyDB into a superior scale-out solution.  An Oracle partner has now launched a contest to highlight some of the coolest applications built on the solution.

Contest - Show off your NoSQL, win an iPad!!

A partner of ours, OpalSoft Inc, is running a contest to select the Coolest "Oracle NoSQL Database Application".  Entering the contest is simple: just go to http://www.nosqlcontest.com/ and submit information about an application you've built on the Oracle NoSQL Database.  If you haven't built one already, you still have time, so go ahead and download it, create an application or integrate with some cool open source project, then submit your entry.  You have until July 8th, 2013 to complete your submission. The chosen winner will receive a new iPad with one of those retina displays, perfect for hanging out by the pool this summer!  
For complete details, visit the contest website.

Oracle Unified Directory 11.1.2.1.0: TNS and EUS - Part 2: Enterprise User Security

Frank van Bortel - Fri, 2013-06-14 06:17
Enterprise User Security: Step by Step

I want to set OUD up in the way I've done it with OID 10.1.4.3:

  • Use a Shared Schema in every database
  • Map this shared schema within the security domain in OUD
  • Create enterprise users in OUD
  • Use a group in OUD to assign the enterprise roles to
  • Assign Enterprise Users (defined in OUD) to these groups

Planning
Implementing Enterprise User Security

APEX conditions and performance - part 2

Tony Andrews - Fri, 2013-06-14 01:56
This is a follow-up to my previous post APEX conditions and performance. Roel Hartman has followed up my post by doing the due diligence and testing the performance of different APEX condition types.  His findings back up what I just asserted. (But I wasn't just guessing; my assertions were based on facts I learned long ago from someone in the APEX team via the Oracle APEX forum - so long ago…)

Required Reading

Chet Justice - Wed, 2013-06-12 13:10
It's not often that I run across articles that really resonate with me. Last night was one of those rare occasions. What follows is a sampling of what I consider to be required reading for any IT professional with a slant towards database development.

Bad CaRMa

how NOT to design a database schema - super classic article. Every data architect should read this ! simple-talk.com/opinion/opinio… @timothyjgorman

— Kyle Hailey (@dboptimizer) June 11, 2013
That led me to Bad CaRMa by Tim Gorman. This was an entry in Oracle Insights: Tales of the Oak Table, which I have not read, yet.

A snippet:

...The basic premise was that just about all of the features of the relational database were eschewed, and instead it was used like a filing system for great big plastic bags of data. Why bother with other containers for the data—just jam it into a generic black plastic garbage bag. If all of those bags full of different types of data all look the same and are heaped into the same pile, don't worry! We'll be able to differentiate the data after we pull it off the big pile and look inside.

Amazingly, Randy and his crew thought this was incredibly clever. Database engineer after database engineer were struck dumb by the realization of what Vision was doing, but the builders of the one-table database were blissfully unaware that they were ushering in a new dawn in database design...

This is from 2006 (the book was published in 2004). Not sure how I missed that story, but I did.

Big Ball of Mud

I've read this one, and sent it out, many times over the years. I can't remember when I first encountered it, but I read it once every couple of months. I send it out to colleagues about as often. You can find the article here.

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

Read it. Remember it.

How To Ask Questions The Smart Way

Ever been in a forum? Has anyone ever given you the "RTFM" answer? Here's how you can avoid it. I read this originally about 9 or 10 years ago. I've sent it out countless times.

The first thing to understand is that hackers actually like hard problems and good, thought-provoking questions about them. If we didn't, we wouldn't be here. If you give us an interesting question to chew on we'll be grateful to you; good questions are a stimulus and a gift. Good questions help us develop our understanding, and often reveal problems we might not have noticed or thought about otherwise. Among hackers, “Good question!” is a strong and sincere compliment.

Despite this, hackers have a reputation for meeting simple questions with what looks like hostility or arrogance. It sometimes looks like we're reflexively rude to newbies and the ignorant. But this isn't really true.

What we are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions. People like that are time sinks — they take without giving back, and they waste time we could have spent on another question more interesting and another person more worthy of an answer. We call people like this “losers” (and for historical reasons we sometimes spell it “lusers”).
Business Logic - PL/SQL Vs Java - Reg

The article can be found here.

I don't believe this is the one that I would read just about every day during my first few years working with Oracle, but it's representative (I'll link up the original when I find it). I cut my teeth in the Oracle world by reading AskTom every single day for years. Some of my work at the time included working with Java Server Pages (JSP) - at least until I found APEX. I monkeyed around with BC4J for a while as well, but I believe these types of threads on AskTom kept me from going off the cliff. In fact, I got to a point where I would go to an interview and then debate the interviewer about this same topic. Fun times.

if it touches data -- plsql.

If it is computing a fourier transformation -- java.

If it is processing data -- plsql.

If it is generating a graph -- java.

If it is doing a transaction of any size, shape or form against data -- plsql.
Thinking Clearly About Performance

Cary Millsap. Most people seem to know Cary from Optimizing Oracle Performance; I didn't. I first "met" Cary virtually and he was gracious enough to help me understand my questions around Logging, Debugging, Instrumentation and Profiling. Anyway, what I've learned over that time is that Cary doesn't think of himself as a DBA, he's a Developer. That was shocking for me to hear... I wonder how many others know that. So I've read this paper about 20 times over the last couple of years (mostly because I'm a little slow). I organize events around this topic (instrumentation, writing better software, etc.) and this fits in perfectly. My goal is to one day co-present with Cary, while playing catch, on this topic (I don't think he knows that, so don't tell him). The link to his paper can be found here. Enjoy!

The Complicator's Gloves

One of my all-time favorite articles from The Daily WTF. Find the article here. The gist of the story is this: on an internal forum, people were discussing how to warm a given individual's hands on his bike ride to work. The engineers then proceeded to come up with all kinds of solutions... they spent all day doing this. Finally, someone posts, "wear gloves." End of discussion. Love it. I wrote about it years ago in Keeping it Simple. For a few years I considered buying up thecomplicatorsgloves.com and trying to gather related stories, but I got lazy. You should read this often, or better, send it out to colleagues on a regular basis to remind them of their craziness.

I'll continue to add to this list as time goes on. If you have any suggestions, leave a comment and I'll add them to the list.
Categories: BI & Warehousing

Musings on Standby Database

Hans Forbrich - Wed, 2013-06-12 13:02
It seems that every few months there is a renewed discussion about whether you need to license your standby database, whether standby is Data Guard, whether Data Guard can be used in Oracle Database Server Standard Edition, whether we have to pay if we just apply redo at night, and similar.

Here is my response to that question:

-----

Standby is standby.  It is a technique to support disaster recovery.  And it is still called Standby Database, not Data Guard, even now.

For a long time, people have written scripts to automate the disaster recovery technique.  For Oracle Database Server, Laurence To and the Oracle SPG group assembled a number of these scripts back with Oracle7 and Oracle8 and released them as a feature of the Enterprise Edition called Data Guard, which initially consisted only of the 'best practices' scripts.  The core feature was, and still is, available at no additional cost.

Data Guard has since progressed significantly and become more of a set of executables, rather than scripts.  But the primary purpose still is to automate the steps of synchronizing the standby and automating the switchover/failover.

Standby is standby.  With Oracle Database Server, it consists of two databases: the first or primary actively handling transactions and query requests; the 'standby' being available to take over the load if the primary fails.

Over the years, we in the industry have refined the term to distinguish between Cold and Hot standby, the difference being in how much effort is involved, and how quickly the standby environment is available for use.

A Cold Standby environment may have the software installed, but the environment does not use any CPU cycles to keep the data in sync.  In general, that will require some sort of restore from backups.  Since the Cold Standby does not use CPU cycles, Oracle has not traditionally charged for it.

A Hot Standby environment keeps the data in sync fairly closely to the primary.  The more similar the standby environment needs to be to the primary at the data and configuration level, the more it will cost to do that, and the more complicated the sync method needs to be.  The Hot Standby does use CPU cycles, and therefore must be licensed the same way as the primary unless you have an exception within YOUR Oracle license contract.

Oracle database server - whether Express Edition, Standard Edition or Enterprise Edition - has the ability to perform crash and media recovery from intact redo log files.  Oracle's hot standby capability is simply continuous media recovery.  However, that requires the redo information from the primary to be sent to the standby when it is available, and it requires the standby to apply the redo once it has arrived.

The Enterprise Edition feature called Data Guard is simply a 'guardian application' that detects when the redo information is available, extracts it, transmits it, and controls the application at the standby system(s).  What it does can also be done manually, or through your own scripts.  Indeed, in Standard Edition, DbVisit (http://www.dbvisit.com) has created their own commercially available executable that does the same thing and more.

Data Guard has been enhanced to allow several 'levels' of similarity, from "minimum data loss" through "absolutely no loss permitted".  What used to be scripts is now compiled executables with many test points and with the ability to control the database server.

And the database kernel has been modified to allow the standby server to be opened in read-only while applying the redo information which may happen under the control of the Data Guard application.  This is called Active Data Guard, and it DOES require additional licenses.


Also check out the Software Investment Guide at http://www.oracle.com/us/corporate/pricing/index.html

And remember: the final authority is Oracle, not me.  "I read it on the Internet" is a shoddy defense in a contract dispute and will likely NOT be accepted by the Judge in a Court of Law.
Categories: DBA Blogs

Improving data move on EXADATA V

Mathias Magnusson - Wed, 2013-06-12 07:00

Wrap-up

This is the last post in this series and I’ll not introduce anything new here, but rather just summarise the changes explained and talk a bit about the value the solution delivers to the organisation.

Let’s first review the situation we faced before implementing the changes.

The cost of writing the log records to the database was that the parallel writing from many different sources introduced severe bottlenecks, to the point that the logging feature had to be turned off for days at a time. This was not acceptable, but short of shutting down the whole system, which would put lives in immediate danger, it was the only option available. Even if the writing had been fast enough, the moving of data was taking over twice the time available, and it was fast approaching the point where data written in 24 hours would take more than 24 hours to move to the historical store for log data. That would of course have resulted in an ever-growing backlog even if the data move ran 24×7. On top of that, the data took up 1.5 TB of disk space, costing a lot of money and raising concerns about our ability to move it to EXADATA.

To resolve the contention that was crippling the overall system during business hours, we changed the table setup to have no primary keys, no foreign keys and no indexes. We also partitioned the tables so that we get one partition per day.

To make the move from operational tables to historical tables faster, we opted to have both in the same instance on EXADATA. This allowed us to use partition exchange to swap the partition out of the operational table and into the historical table. This took just a second, as all we did was update some metadata recording which table the partition belongs to. Note that this ultra-fast operation replaced a process that used to take around 16 hours, for which we had 6.5, and the time it took was expanding as the business was growing.
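The post doesn't show the actual DDL, but a partition exchange boils down to a single ALTER TABLE statement that swaps a partition's segment with a non-partitioned table's segment; no rows are copied, which is why it completes in about a second regardless of data volume. A sketch that just assembles the statement, with hypothetical table and partition names (LOG_HISTORY, LOG_STAGE, P20130611):

```python
def exchange_partition_sql(partitioned_table, partition, plain_table):
    """Assemble the metadata-only DDL that swaps a daily partition's segment
    with a non-partitioned staging table's segment."""
    return (f"ALTER TABLE {partitioned_table} "
            f"EXCHANGE PARTITION {partition} "
            f"WITH TABLE {plain_table} "
            f"WITHOUT VALIDATION")

print(exchange_partition_sql("LOG_HISTORY", "P20130611", "LOG_STAGE"))
```

Because the tables here carry no indexes or constraints, no index maintenance or row validation is needed, which keeps the swap a pure dictionary update.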

Finally, to reduce the space consumed on disk we used HCC – Hybrid Columnar Compression. This is an EXADATA-only feature for compressing data such that columns with repeating values get a very good compression ratio. We went from 1.5 TB to just over 100 GB. This means that even with no purging of data it would take us over five years to even get back to the amount of storage this used to require.

So in summary

  • During business hours we use 20% of the computing power and even less of the wall clock time it used to take.
  • The time to move data to the historical store was reduced from around 16 hours to less than one second.
  • Disk space requirement was reduced from 1.5 TB to just over 100 GB.

And all of this was done without changing one line of code; in fact there was no rebuild, no configuration change or anything else needed to allow this drastic improvement to work with all the different systems that were writing to these log tables.

One more thing to point out here is that all these changes were done without using traditional SQL. The fact that it is an RDBMS does not mean that we have to use SQL to resolve every problem. In fact, SQL is often not the best tool for the job. It is also worth noting that these kinds of optimisations cannot be done by an ORM; that is not what they do. This is what your performance or database architect needs to do for you.

For easy lookup, here are links to the posts in this series.

  1. Introduction
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up (this post)

Custom OSB Reporting Provider

Edwin Biemond - Tue, 2013-06-11 15:53
With the OSB Report Action we can add some tracing and logging to an OSB Proxy. This works OK, especially when you add some Report keys, for single-Proxy projects, but when you have projects with many Proxies that invoke other JMS or Local Proxies, the default reporting tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) in the SOA Suite soainfra schema are not so handy. I want to

New OTN Interface

Michael Armstrong-Smith - Mon, 2013-06-10 17:32
If you are a user of OTN (Oracle Technology Network) you should have noticed that there is a new interface. I think it's pretty cool. What do you think?

Windows Surface RT: There is a desktop?!

Dietrich Schroff - Mon, 2013-06-10 14:21
Last week I had the opportunity to use a Windows Surface RT tablet. For several months I have been using a Windows 8 laptop, which is equipped with a touchscreen. So I am used to the new tiles interface, and I knew that, at least on laptops, you have the desktop applications and the "tiles" applications.
I was curious how it feels to work without having "two worlds" of applications on a device...
But it was a big surprise for me: if you install Microsoft Office on an RT device, you get the desktop back:
On a tablet with a display size of 10.6 inches (27 cm)? I tried to write a document and it wasn't easy to hit the right icons...
When saving the document I was astonished to get a file chooser. From my Nexus 7 I was used to having no folder structures or similar things:
There are two problems with these desktop applications: if not in full-screen mode, you have to work with really small windows and resizing is very difficult, and
the applications are not shown in the application bar:
They are all summarized under desktop... There is no way to switch directly to your Word application or PowerPoint. You have to go to the desktop and then choose your Office application...

What's new in EBS 12.2?

Famy Rasheed - Mon, 2013-06-10 02:24

Measuring the time left

Rob Baillie - Sun, 2013-06-09 08:30
Burn-down (and burn-up, for that matter) charts are great for those who are inclined to read them, but some people don't want to have to interpret a pretty graph; they just want a simple answer to the question "How much will it cost?"

That is if, like me, you work in what might be termed a semi-agile*1 arena then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days that gives us a good idea of where we are.
The basis

It starts with certain assumptions:
You are using stories

OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that, you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work as early as possible, so you can account for them in your numbers.
Your customer team is used to writing stories of the same size

When your customer team add stories to the mix you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.
You estimate using a numeric value

It doesn't matter if you use days of work, story points or function points, as long as it is expressed as a number, and something estimated to take 2 of your unit is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have, and it'll make your life much harder.
Your developers quickly estimate the bulk of the work before anything is started

This is not to say that the whole project has a Gandalf-like startup: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.
Your developers produce consistent estimates.
Not that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the purposes of this calculation.
You keep track of time spent on your project

Seriously, you do this, right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that on fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.

If you have all these bits then you've got something that you can work with...
The calculation

The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 
The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to be done.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Using these we can work out:
Time remaining on work that has been estimated by the development team

For this we use a simple calculation based on the previous accuracy of the estimates.
This includes taking into account the defects that will be found, and need to be fixed, against the new functionality that will be built.


estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )
Time remaining on work that has not been estimated by the development team

In order to calculate this, we rely on the assumption that the customer team has got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate work though.

averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost


averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 
Total predicted time remaining

The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate to things the development team haven't yet put their own estimate against.

totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects 
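The pieces above fit together in a few lines. A sketch of the whole calculation (variable names follow the definitions above; the sample figures are made up):

```python
def predicted_time_remaining(total_time_spent,
                             total_time_spent_on_defects,
                             stories_completed_estimate,
                             number_of_stories_completed,
                             total_estimated_work,
                             number_of_unestimated_stories,
                             number_of_unestimated_defects):
    """Sum of: re-scaled estimated work, a guess for unestimated stories,
    and a guess for defects not yet estimated."""
    estimate_accuracy = total_time_spent / stories_completed_estimate
    on_estimated_work = total_estimated_work * estimate_accuracy

    average_story_cost = total_time_spent / number_of_stories_completed
    on_unestimated_stories = number_of_unestimated_stories * average_story_cost

    average_defect_cost = total_time_spent_on_defects / number_of_stories_completed
    on_unestimated_defects = number_of_unestimated_defects * average_defect_cost

    return on_estimated_work + on_unestimated_stories + on_unestimated_defects

# Made-up figures: 120 days spent (20 of them on defects), 40 stories done
# that were estimated at 100 days, 50 days of estimated work left,
# 10 unestimated stories and 4 unestimated defects.
print(predicted_time_remaining(120, 20, 100, 40, 50, 10, 4))  # -> 92.0
```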
The limitations

I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.
Further Work

I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, and I might suggest some slight alterations if you think they'd work for you.
Estimating number of defects not yet found.
It seems reasonable for you to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found.  These could then be included in your calculation based on the average cost of defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team.  I'd define high as anything over about 20% of your time being spent fixing defects.
Using the estimate accuracy of previous projects at the start of the new.
As I pointed out earlier, a limitation of this method is the fact that you have limited information at the start of the project and so you can't rely on the numbers being generated for some time.  A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.

*1 Semi-agile: I'd define this is where the development of software is performed in a full agile manner, but the senior decision makers still rely on business case documentation, project managers and meeting once a month for updates.

Oracle A-Team Chronicles Live!

Ramkumar Menon - Fri, 2013-06-07 13:58
Oracle Fusion Middleware A-Team is Oracle's SWAT team of Solution Architects with extensive expertise in Oracle's Integration Product Stack. Their new Chronicles Portal is live at http://www.ateam-oracle.com/integration/. It contains several articles describing Best Practices, Tips & Tricks, and Architectural Guidance for SOA, BPM, and several other subjects. It's a must-bookmark portal. Happy Reading!

