Feed aggregator

Replacef

Adrian Billington - Mon, 2007-01-22 02:00
A function to simplify string building and debugging by replacing multiple placeholders from a collection of inputs. January 2007
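The original code is on Adrian's site; as a rough illustration of the idea only, a minimal PL/SQL sketch might look like the following (the signature, '%s' placeholder syntax and collection type are assumptions here, not Adrian's actual implementation):

-- Hypothetical sketch of a "replacef"-style helper, not the original code.
-- Replaces each '%s' in p_template with the next value from the collection.
create or replace function replacef (
   p_template in varchar2,
   p_values   in sys.odcivarchar2list
) return varchar2
is
   l_result varchar2(32767) := p_template;
begin
   for i in 1 .. p_values.count loop
      l_result := regexp_replace(l_result, '%s', p_values(i), 1, 1);
   end loop;
   return l_result;
end replacef;
/

-- Example usage:
-- select replacef('Employee %s hired on %s',
--                 sys.odcivarchar2list('SCOTT', '01-JAN-2007')) from dual;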

Presenting at ODTUG Kaleidoscope 2007 in Orlando

Clemens Utschig - Sat, 2007-01-20 16:15
As fellow blogger Wilfred van der Deijl wrote on his blog, I got an email too - an invitation to speak there. Last year I got the slots from my manager (thx Dave :D), this year they are my own - awesome.

I'll present on three topics: two for beginners and one intermediate.
  • Managing successful SOA projects, a view beyond agile science
    The session will give an inside view into the methodologies applied to govern successful SOA projects. We will discover some of the common challenges by examining the different phases of a project and by splitting the project into different categories - each representing its own set of issues - and see how these can be successfully mastered by applying the right tools and the right skills. An insider's view - to make your SOA projects successful.

  • Oracle 11g — Oracle's Next Generation SOA Infrastructure
    which will give a sneak preview of 11g - the fully integrated, SCA-based SOA platform

  • Advanced Concepts of the BPEL Language
    which is one of the sessions from last year that was very well attended, and which I also presented at Oracle Open World
Last year the conference was really well attended and Washington DC was great - although insanely hot and humid, which made every walk outside a bit of a sauna trip.

I hope to see you there - this conference is definitely worth the trip, and hey, Daytona is not that bad :D

Heading for Europe - to support one of our key SOA Suite customers

Clemens Utschig - Sat, 2007-01-20 15:47
After a bad abdominal virus this week - one that seems to be going around the Bay Area and got me a sleepless night in the ER at UCSF Med - I am somewhat well again, and on my way to Europe.

I got the chance to give a keynote at OOP 2007 in Munich (Germany), speaking about managing successful SOA projects - and the importance of the human/organizational side in SOA projects.

If you happen to be at the OOP or around Munich, I think the entry to the demo pods / show floor is free - so come by and visit me - I'll be there to answer questions around SOA in general, demo the SOA Suite and meet customers.

Also during the week I'll be visiting one of our customers in Germany to make sure their large, full-fledged SOA implementation hits the target.
With this in mind, and all the feedback we receive from the field in our OTN forums, I am very happy to see the industry and the market adopting our SOA Suite - the latest edition being a fully integrated platform spanning governance, development and security, BPEL as well as ESB.

Logical Reads and Orange Trees

Eric S. Emrick - Thu, 2007-01-18 23:36
My previous post was a riddle aimed to challenge us to really think about logical I/O (session logical reads). Usually we think of I/O in terms of OS block(s), memory pages, Oracle blocks, Oracle buffer cache buffers, etc. In Oracle, a logical I/O is neither a measure of the number of buffers visited, nor the number of distinct buffers visited. We could of course craft scenarios yielding these results, but these would be contrived special cases - like an episode of Law and Order only better. Instead, logical I/O is the number of buffer visits required to satisfy your SQL statement. There is clearly a distinction between the number of buffers visited and the number of buffer visits. The distinction lies in the target of the operation being measured: the visits not the buffers. As evidenced in the previous post we can issue a full table scan and perform far more logical I/O operations than there are blocks in the table that precede the high water mark. In this case I was visiting each buffer more than one time gathering up ARRAYSIZE rows per visit.

If I had to gather up 313 oranges from an orchard using a basket that could only hold 25 oranges, then it would take me at least 13 visits to one or more trees to complete the task. Don't count the trees. Count the visits.
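You can see the visit counting for yourself from SQL*Plus. A sketch only - the exact numbers depend on block size and row length, but each fetch revisits the current block, so a smaller arraysize means more visits:

set autot traceonly statistics

set arraysize 15
select * from mytable;   -- consistent gets roughly blocks + (rows / 15)

set arraysize 500
select * from mytable;   -- far fewer gets: roughly blocks + (rows / 500)

set autot off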

Oracle Riddles: What's Missing From This Code?

Eric S. Emrick - Mon, 2007-01-15 18:56
The SQL script below has one line intentionally omitted. The missing statement had a material impact on the performance of the targeted query. I have put diagnostic bookends around the targeted query to show that no DML or DDL has been issued to alter the result. In short, the script inserts 32K rows into a test table. I issue a query requiring a full table scan, run a single statement and rerun the same query - also a full table scan. While the second query returns the same number of rows, it performs far fewer logical I/O operations to achieve the same result set. Review the output from the script. Can you fill in the missing statement? Fictitious bonus points will be awarded to the Oracle scholar who can deduce the precise statement :)

/* Script blog.sql


spool blog.out
set feed on echo on;
select * from v$version;
drop table mytable;
create table mytable (col1 number) tablespace users;
insert into mytable values (3);
commit;
begin
for i in 1..15 loop
insert into mytable select * from mytable;
commit;
end loop;
end;
/
analyze table mytable compute statistics;
select count(*) from mytable;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
set autot traceonly;
select * from mytable;
set autot off;
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
... missing statement
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
set autot traceonly;
select * from mytable;
set autot off;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
select count(*) from mytable;
spool off;


End Script blog.sql */

/* Output

oracle@eemrick:SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Solaris: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
5 rows selected.
oracle@eemrick:SQL> drop table mytable;
Table dropped.
oracle@eemrick:SQL> create table mytable (col1 number) tablespace users;
Table created.
oracle@eemrick:SQL> insert into mytable values (3);
1 row created.
oracle@eemrick:SQL> commit;
Commit complete.
oracle@eemrick:SQL> begin
2 for i in 1..15 loop
3 insert into mytable select * from mytable;
4 commit;
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
oracle@eemrick:SQL> analyze table mytable compute statistics;
Table analyzed.
oracle@eemrick:SQL> select count(*) from mytable;
COUNT(*)
----------
32768
1 row selected.
oracle@eemrick:SQL> select blocks from dba_tables where table_name =
'MYTABLE';
BLOCKS
----------
61
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name =
'MYTABLE';
BLOCKS
----------
64
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name =
'MYTABLE';
no rows selected
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1229213413
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         | 32768 | 65536 |    26   (4)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| MYTABLE | 32768 | 65536 |    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2248 consistent gets
0 physical reads
0 redo size
668925 bytes sent via SQL*Net to client
24492 bytes received via SQL*Net from client
2186 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been
executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC# VALUE
---------- ----------
4 18 <-- Statistic #4 is user commits

134 461920 <-- Statistic #134 is redo size
2 rows selected.
oracle@eemrick:SQL> ... missing echo of statement
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been
executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC# VALUE
---------- ----------
4 18
134 461920
2 rows selected.
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1229213413
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         | 32768 | 65536 |    26   (4)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| MYTABLE | 32768 | 65536 |    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
173 consistent gets
0 physical reads
0 redo size
282975 bytes sent via SQL*Net to client
1667 bytes received via SQL*Net from client
111 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> select blocks from dba_tables where table_name =
'MYTABLE';
BLOCKS
----------
61
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name =
'MYTABLE';
BLOCKS
----------
64
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name =
'MYTABLE';
no rows selected
oracle@eemrick:SQL> select count(*) from mytable;
COUNT(*)
----------
32768
1 row selected.
oracle@eemrick:SQL> spool off;


End Output */

Clue: The missing statement is not "alter system set do_less_work = true;"

Second Life client running on Solaris x64 - contd

Siva Doe - Thu, 2007-01-11 20:38

As promised, here is the update.

For Mads and whomever is interested in building the Second Life client on Solaris (x64), this is what I did.
Please do remember, these steps are just to get the client built on Solaris. I haven't completely run the client yet (nothing beyond the login screen). So, buyer beware.
Basically, I followed the Linux instructions in Second Life Twiki page.
Downloaded and built all the libraries mentioned, and copied the libraries and headers into a directory named 'i686-sunos5' instead of 'i686-linux'.
Modified SConstruct  (for scons building).

  • Look for 'linux' and introduce the code for 'sunos5' (replace -DLL_LINUX with -DLL_SOLARIS)
  • Remove the 'db-4.2' entry under libs line.
  • Replace 'yacc' with 'bison -y'; 'lex' with 'flex'; 'g++-3.4' with 'g++' (under /usr/sfw/bin); 'strip' with 'gstrip'

Whichever subdirectory contains 'files.linux.lst', make a copy of it called 'files.sunos5.lst'.
Then comes the code changes. Basically, search for files containing 'LL_LINUX' and add "|| LL_SOLARIS" or "&& ! LL_SOLARIS" as appropriate.
In llcommon/llpreprocessor.h:38, I added  "|| (defined(LL_SOLARIS) && !defined(__sparc))" to the line to set the ENDIAN correctly.
In llcommon/llsys.cpp, I added code to use the output of 'psrinfo -v' instead of reading from '/proc/cpuinfo' for SOLARIS. Similarly, used 'getpagesize() * sysconf(_SC_PHYS_PAGES)' to get the "Physical kb". I know there are better ways, but I just wanted to get the build completed.
In llmath/llmath.h, I was running into some problems regarding 'isfinite'. I replaced with this.
#define llfinite(val) (val <= std::numeric_limits<double>::max())
The other significant work is in 'llvfs/llvfs.cpp' and 'newview/viewer.cpp'. Replaced the code for 'flock' with the appropriate 'fcntl' code.
In files 'newview/lldrawpoolsky.h' and 'newview/llvosky.cpp', replace the variable 'sun' with 'Sun'. 'sun' in a SunOS is defined already, of course.
In 'newview/viewer.cpp', rewrote the 'do_basic_glibc_backtrace()' to use 'printstacktrace()' instead.
Well, you can follow the above instructions, or send me a mail, I will send the diff output. :-)

For runtime, you will have to set your LD_LIBRARY_PATH as mentioned in the Twiki page.
That's all I have for now. Will update later if there is anything new.

Regards
Siva

PS: It is indeed a pity that I can't log in to a server named after me ;-) (userserver.siva.lindenlab.com)

Increasing the Longevity of Your CPU

Eric S. Emrick - Thu, 2007-01-11 18:42

One of my current assignments is to evaluate the potential to increase CPU headroom on a server running a large OLTP Oracle database. Of course, any project such as this is typically motivated by the desire to save money by foregoing a seemingly imminent hardware upgrade. More CPU headroom for your Oracle database server can be achieved by, but is not limited to, the following approaches:

1. Add more same-speed CPUs to your existing server.
2. Replace your existing CPUs with faster CPUs.
3. Replace your existing CPUs with a greater number of faster CPUs.
4. Commission a new server platform with more same-speed and/or faster CPUs.
5. Commission a new server platform with a greater number of slower CPUs.
6. Chronologically distribute the load on the system to avoid spikes in CPU.
7. Reduce the work required of the system to satisfy the business.


More often than not I suspect approaches 1-5 are chosen. I am of the opinion that 1) and 3) are more predictable when trying to evaluate the expected CPU headroom yield. Propositions 2), 4) and 5) can be a little less predictable. For example, if I double the speed of my current CPUs, will I gain the upgraded CPU cycles as headroom? That is, if I am running 10x500MHz and upgrade to 10x1GHz, will I now have the additional 5GHz as headroom? It has been my experience that upgrades such as these do not produce such predictable results, especially if your current box approximates 100% utilization. Certainly, moving to a new server with a greater number of same-speed and/or faster CPUs is a tricky proposition. New servers need to be tested using Production volume with great rigor. While at face value 500MHz would appear to be universally “portable” to any server, there are many other factors that can influence your CPU horsepower: memory architecture, amount of processor cache, processor crosstalk, etc. Options 1-5 can all be very costly and in some cases yield undesirable and unpredictable results.

If you have the luxury of distributing the load on your system to avoid spikes in CPU then that is a great first option. It could buy you more time to evaluate a longer-term solution. For example, shifting any batch jobs to off-peak OLTP hours might give you immediate relief.

What if we can simply “do less” to satisfy the needs of the business? This concept is not new to most Database Administrators and rings a rather cliché tone. After all, aren’t we brow-beaten by the dozens of books and countless articles that speak to database and SQL optimization? The “do less” principle is very sound, but it can be intractable. Reducing the work required of an application often requires management support and can run into political obstacles at every turn. Getting Application Developers and Database Administrators to work in lockstep can require a significant effort. If Management, Developers and Database Administrators buy into a synergistic endeavor the benefits can be amazing – and can save the company a large sum of money.

If you are lucky enough to be working on a project where the common goal of all parties is to reduce the CPU load on your system then I have learned a few things that I hope can help you.

Identify the Targets for Optimization

Identify the SQL statements that contribute the most to the CPU load on your database server. These are usually the statements that produce the most logical I/O on your database. Caution needs to be taken when trying to identify them. You shouldn't focus solely on the statements with the highest logical I/O (LIO) to execution ratio. Often you will find statements that are well optimized but are executed with extremely high frequency. Look at the aggregate LIO footprint of a SQL statement. Without Statspack or AWR this analysis might be very difficult. However, if you collect this diagnostic data you can use the LEAD analytic function to craft a nice SQL statement to identify the top CPU-consuming statements on your system (join stats$sql_summary and stats$snapshot).
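As a rough sketch of that approach (the Statspack column names below - old_hash_value, buffer_gets, executions, snap_time - are from memory and may differ by release, so verify against your stats$ tables before relying on it):

-- Hedged sketch: per-interval buffer gets and executions per statement,
-- using LEAD to difference Statspack's cumulative counters.
select snap_time,
       old_hash_value,
       lead(buffer_gets) over (partition by old_hash_value
                               order by snap_id) - buffer_gets as gets_in_interval,
       lead(executions)  over (partition by old_hash_value
                               order by snap_id) - executions  as execs_in_interval
from  (select s.snap_id, s.snap_time,
              q.old_hash_value, q.buffer_gets, q.executions
       from   stats$snapshot    s,
              stats$sql_summary q
       where  q.snap_id         = s.snap_id
       and    q.dbid            = s.dbid
       and    q.instance_number = s.instance_number)
order by gets_in_interval desc nulls last;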

Don’t limit your SQL statement identification to just those statements flagged by your analysis as top CPU consumers. Go another step and identify the most frequently executed statements. Some of the most frequently executed statements are the most optimized on your system. These statements, if executed by many programs concurrently, can influence concurrency and thus CPU load. One approach I took recently identified the top 20 CPU-consuming statements during a 12-hour window of each weekday. I then ran the same analysis against the most frequently executed statements on the system. The results yielded only 31 distinct statements, as 9 were on both lists. The amazing thing is that, on average, these 31 statements contributed 58% of all logical reads on the system and 59% of all executions. Keep in mind that there were over 20 thousand distinct statements cached in the Shared Pool. It is rather amazing that such a small subset of the application footprint contributed so greatly to the aggregate load.

Ask The Right Questions

The identification phase is crucial as you want to optimize that which will yield the greatest benefit. Subsequent to the identification phase the Database Administrators and Developers can sit and discuss approaches to reduce the load incurred by these SQL statements. Here are some of the key points I have taken away during such collaborative efforts.

1. Is there a better execution plan for the statement? Optimization is often achieved by rewriting the query to get at a better execution plan. While I don’t like hinting code, hints can relieve pressure in a pinch.

2. Does the statement need to be executed? If you see SQL statements that seldom or never return rows (rows/exec approaches 0), there is a possibility they can be eliminated from your application.

3. Does the statement need to be executed so frequently? You might be surprised: Developers often have application-side caching techniques that can dramatically reduce the frequency of a statement’s execution against the database. Or the application might simply call the statement needlessly. It doesn’t hurt to ask!

4. Are the requirements of the business immutable? Sometimes you can work an optimization by simply redefining what is required. This is not the tail wagging the dog here. It is possible that the business would be completely happy with a proposed optimization. For example, can the query return just the first 100 rows found instead of all rows?

5. Do the rows returned to the application need to be sorted? Highly efficient SQL statements can easily have their CPU profile doubled by sorting the output.

6. Are all columns being projected by a query needed? If your application retrieves the entire row but only needs a very small subset of the attributes, it is possible you could satisfy the query using index access alone.

7. Is the most suitable SQL statement being executed to meet the retrieval requirements of the application? Suitability is rather vague but could apply to the number of rows fetched, any misplaced aggregation, insufficient WHERE clause conditions, etc.

8. Are tables being joined needlessly? I have encountered statements that Developers have determined are joining a table, projecting some of its attributes, without using its data upon retrieval. The inclusion of another table in such a manner can dramatically increase the logical I/O required. This is extremely difficult for a DBA to discern without intimate application code knowledge.

9. How well are your indexes clustered with your table(s)? Sometimes data reorganization techniques can greatly reduce the logical I/O required of a SQL statement. Sometimes IOTs prove to be very feasible solutions to poor performing queries.

10. Can I add a better index or compress/rebuild an existing index to reduce logical I/O? Better indexing and/or index compression could take a query that required 10 logical I/O operations down to 5 or 6. This might feel like a trivial optimization. But if this statement is executed 300 times each second, that could save your system 1,500 logical I/Os per second. Never discount the benefit of a 50% reduction of an already seemingly optimized statement (a short sketch of the mechanics follows this list).

11. Can I reorganize a table to reduce logical I/O?
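As promised above, a short sketch of the mechanics behind items 10 and 11. Object names are placeholders, and whether compression or a move actually helps depends on your data:

-- Item 10: rebuild an existing index with key compression, then check the effect.
alter index orders_cust_ix rebuild compress 1;
analyze index orders_cust_ix validate structure;
select height, lf_blks, pct_used from index_stats;

-- Item 11: reorganize the table (this marks its indexes UNUSABLE, so rebuild them).
alter table orders move;
alter index orders_cust_ix rebuild;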


I suspect most of us have read that 80% of optimization is application centric (I tend to feel that the percentage is higher). Usually the implication is that the SQL being generated and sent to the database is 80% of the target for optimization. More specifically, optimization requires the tuning of SQL 80% of the time. However, don’t limit your efforts to optimize your application to “tuning the SQL.” Sometimes a portion of your optimization will include “tuning the algorithms” used by the application. Needless execution and improper execution of SQL statements can be equally destructive. Hardware resources, in particular CPUs, can be very expensive to purchase and license for production Oracle databases. It is well worth the effort to at least investigate the possibilities of increasing CPU headroom by decreasing CPU utilization.

Update: An astute reader suggested I mention Cary Millsap's Optimizing Oracle Performance with regard to this topic. I highly recommend reading this book, as it weighs in heavily on Oracle optimization and Method R. Trust me, if you have optimization on the brain, don't miss this read.

9i bug using Oracle Text to search XML data

Vidya Bala - Thu, 2007-01-11 13:05
Using Oracle Text to Search XML Data: XML Data inserted in DATA column in TEST table

Create table test (id integer, DATA CLOB);


<Department name="CS">
  <Employee>Vidya Bala</Employee>
</Department>


Step1:
--------

Create an auto section group

begin
ctx_ddl.create_section_group('myautosectiongroup', 'AUTO_SECTION_GROUP');
end;
/

Step2:
-------
create index test_index on test(DATA)
indextype is ctxsys.context
parameters ('SECTION GROUP myautosectiongroup');

Step3:
---------
SELECT DATA FROM TEST
WHERE CONTAINS(DATA, 'Vidya WITHIN Employee') > 0;
1 Row Returned

SELECT DATA FROM TEST
WHERE CONTAINS(DATA, 'CS WITHIN Department@name') > 0;

0 rows returned (10g returns 1 row; 9i returns no rows. Support is working on a bug fix for the issue.)
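Until the fix arrives, one possible workaround on 9i (offered as an untested sketch only) is to declare the attribute section explicitly with an XML_SECTION_GROUP rather than relying on AUTO_SECTION_GROUP:

-- Possible 9i workaround (sketch): explicit attribute section.
begin
  ctx_ddl.create_section_group('myxmlgroup', 'XML_SECTION_GROUP');
  ctx_ddl.add_attr_section('myxmlgroup', 'deptname', 'Department@name');
end;
/

-- Only one index is allowed on the column, so drop and recreate it.
drop index test_index;

create index test_index on test(DATA)
indextype is ctxsys.context
parameters ('SECTION GROUP myxmlgroup');

SELECT DATA FROM TEST
WHERE CONTAINS(DATA, 'CS WITHIN deptname') > 0;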
Categories: Development

Business Integration Journal changes its name - and the last BIJ contains my new article

Clemens Utschig - Wed, 2007-01-10 15:06
"Business Integration Journal (BIJ) is changing to Business Transformation & Innovation (BTIJ) to better reflect the current editorial content being published and new topics readers want covered.

Targeted at the intersection of business and IT, BTIJ carries on the “how-to” and “what-works” tradition of Business Integration Journal helping IT and business managers quickly adapt to changing business needs, continuously innovate successful new products and services, and consistently gain and maintain a competitive advantage.
"

In the last edition of the BIJ - as we know it - my article on SOA projects got published - and I am very proud of it.

"A World of Assumptions That Make an SOA Project the Perfect Storm—and What You Should Know About Them by Clemens Utschig-Utschig. This article shows that, while Service-Oriented Architecture (SOA) brings many advantages, it doesn’t alleviate the need for good planning, organization, and training."

The whole article can be found here
http://www.bijonline.com/index.cfm?section=article&aid=802

Second Life client running on Solaris x64

Siva Doe - Wed, 2007-01-10 14:31

Yes. I was able to build the recently open-sourced Second Life client. It took me a couple of days to get to this stage. Getting it running took another half a day - the reason being that ENDIAN was set to BIG for non-Linux boxes, and that had me stumped for a long time. Of course, being behind the SWAN firewall doesn't let the client connect to the server :-( Anyone know a way to achieve this?

Anyway, here is the obligatory screenshot. Will keep you posted on updates.

Siva

Remote automated install of Oracle 10g client

Stephen Booth - Wed, 2007-01-10 06:04
We have a situation where we need to rationalise the range of installed Oracle clients (i.e. the bit that sits between the app and the network stack) we have installed. We currently have versions from 7.x through to 10.2 installed across approximately 12,000 desktops (across various locations in an area of around 26 square miles) running various apps on Windows versions from NT4 to XP (mostly ...

Auditing on FND_FLEX_VALUES: How to see Audit History

Jo Davis - Tue, 2007-01-09 21:23
To keep an audit trail of the mapping of segment values to external values (such as for extracts to other systems), and of changes to it:
1) Define a DFF on FND_FLEX_VALUES to contain the mapping values
2) Enable auditing on FND_FLEX_VALUES
3) To query the audit data, start with this:
select *
from FND_FLEX_VALUES_AC1
where attribute1 is not null;

FND_FLEX_VALUES_AC1 is a view of the audit table.

How to Migrate Personalizations from TEST to PROD on Release 11.5.10.2

Jo Davis - Tue, 2007-01-09 20:35
This applies to iProcurement (11.5.10 only), iExpenses 2nd Generation (i.e. above 11.5.7 and the white screens, not the blue ones) and anything else that uses the new self-service architecture (i.e. NOT iTime or anything else with those blue screens and no bouncing balls across the top.... you know what I mean)

It's all covered in Metalink Note 370734.1, but suffice it to say:

- it can be done, you don't need to redo all the personalizations

- you can only use this on release 11.5.10.2

- you will need the appltest and applprod passwords and a passing familiarity with Unix (or a tame developer or DBA to help you)

Have a great day

Jo

Producing Estimates

Rob Baillie - Sun, 2007-01-07 07:52
OK, so it's about time I got back into writing about software development, rather than software (or running, or travelling, or feeds) and the hot topic for me at the moment is the estimation process.

This topic's probably a bit big to tackle in a single post, so it's post series time. Once all the posts are up I'll slap another up with all the text combined.

So – Producing good medium term estimates...

I'm not going to talk about the process for deriving a short term estimate for a small piece of work, that's already covered beautifully by Mike Cohn in User Stories Applied, and I blogged on that topic some time ago. Rather I'm going to talk about producing an overall estimate for a release iteration or module.

I've read an awful lot on this topic over the last couple of years, so I'm sorry if all I'm doing is plagiarising things said by Kent Beck, Mike Cohn or Martin Fowler (OK, the book's Kent as well, but you get the point), or any of those many people out there that blog, and that I read. Truly, I'm sorry.

I'm not intending to infringe copyright or take credit for other people's work, it's just that my thinking has been heavily guided by these people. This writing is how I feel about estimating, having soaked up those texts and put many of their ideas into practice.

Many people think that XP is against producing longer term estimates, but it isn't. It's just that it's not about tying people down to a particular date 2 and a half years in the future and beating them with a 400 page functional design document at the end of it; it's about being able to say with some degree of certainty when some useful software will arrive, the general scope of that software, and ultimately to allow those people in power to be able to predict some kind of cost for the project.

Any business that allows the development team to go off and do a job without providing some kind of prediction back to the upper management team is being irresponsible at best, grossly negligent at worst.

Providing good medium term estimates is a skill, and a highly desirable one. By giving good estimates and delivering to them you are effectively managing the expectations of upper management and allowing them to plan the larger scale future of the software in their business. When they feel they can trust the numbers coming out of the development teams it means they are less likely to throw seemingly arbitrary deadlines at their IT departments and then get angry when they're not met. Taking responsibility for, and producing good estimates will ultimately allow you to take control of the time-scales within your department. If you continually provide bad estimates or suggest a level of accuracy that simply isn't there by saying "it'll take 1,203 ½ man days to complete this project", then you've only got yourself to blame when that prediction is used to beat you down when your project isn't complete in exactly 1,203 ½ man days.

Whilst being written within the scope of XP, there's no reason why these practices won't work when you're not using an agile process. All you need to do is to split your work into small enough chunks so that each one can be estimated in the region of a few hours up to 5 days work before you start.

In summary, the process is this:
  • Produce a rough estimate for the whole of the release iteration.
  • Iteratively produce more detailed estimates for the work most likely to be done over the next couple of weeks.
  • Feed the detailed estimates into the overall cost.
  • Do some development.
  • Feed the actual time taken back into the estimates and revise the overall cost.
  • Don't bury your head in the sand if things aren't going to plan.


1 – Produce a rough estimate for the whole of the release iteration.

Before you can start work developing a new release you should have some idea of what functionality is going into that release. You should have a fairly large collection of loosely defined pieces of functionality. You can't absolutely guarantee that these stories will completely form the release that will go out in 3 months time, but what you can say is: "Today we believe that it would be most profitable for us to work on these areas of functionality in order to deliver a meaningful release to the business within a reasonable time-frame".

Things may change over the course of the next few months, and the later steps in this process will help you to respond to those changes and provide the relevant feedback to the business.

Kent Beck states (I paraphrase): You don't drive a car by pointing it at the end of the road and driving in a straight line with your eyes closed. You stay alert and make corrections moment by moment.
I state: When you get in a car and start to drive you should at least have some idea where you're going.

The two statements don't oppose each other.

The basic idea is this:
  • In broad strokes, group the stories in terms of cost in relation to each other. If X is medium and Y is huge then Z is tiny.
  • Assign a numbered cost to each group of stories.
  • Inform that decision with past experience.
  • Produce a total.


So, get together the stories that you're putting into this release iteration, grab a room with a big desk, some of your most knowledgeable customers and respected developers and off you go.

Ideally you want to have around 5 to 8 people in the room; more than this and you'll find you end up arguing over details, fewer and you may find you don't have enough buy-in from the customer and development teams.

As a golden rule: developers that are not working on the project should NOT be allowed to estimate. There's nothing worse for developer buy-in than having an estimate or deadline over which they feel that have no control.

Try to keep the number of stories down, I find around 50 stories is good. If you have more you may be able to group similar ones together into a bigger story, or you may find you have to cull a few. That's OK, you can go through this process several times. You can try to go through the process with more, but I find that you can't keep more than 50 stories in your head at one time. You're less likely to get the relative costs right. And it'll get boring.

Quickly estimate each story:
  • The customer explains the functionality required without worrying too much about the detail;
  • The developers place each story into one of 4 piles: small, medium, large and wooooaaaah man that's big.


People shouldn't be afraid to discuss their thinking, though you should be worried if it's taking 10 minutes to produce an estimate for each story.

You may find that developers will want to split stories down into smaller ones and drive out the details. Don't, if you can avoid it. At this point we want a general idea of the relative size of each story. We don't need to understand the exact nature of each individual story, we don't need enough to be able to sit down and start developing. All we need is a good understanding of the general functionality needed and a good idea of the relative cost. Stories WILL be re-estimated before they're implemented and many of them will be split into several smaller ones to make their development simpler. But we'll worry about that later.

As the stories are being added to piles be critical of past decisions. Allow stories to be moved between piles. Allow piles to be completely rebuilt. You may start off with 10 stories that spread across all 4 piles, then the 11th 12th and 13th stories come along and have nowhere to go – they're all an order of magnitude bigger than the ones that have gone before. That's OK, look at the stories you've already grouped in light of this new knowledge and recalibrate the piles.

Once all the stories are done, spread the piles out so you can see every story in each one. Appraise your groupings. Move stories around. It's important that you're comfortable that you've got them in the right pile relative to each other.

If the developers involved in this round of estimating recently worked on another project or a previous release of this project then add some recently completed stories from that work. Have the developers assign those stories into the piles. You know how long those stories took and you can use that to help guide your new estimates. There's nothing more useful in estimating than past experience... don't be afraid to use that knowledge formally.

Once you're sure the piles are accurate, put a number of days against each pile. The idea is to get 'on average, a single story in this pile will take x days'.

If you have to err, then err on the side of bigger. Bear in mind that people always estimate in a perfect world where nothing ever goes wrong. The stories you added from the last project can be used to guide the number. You know exactly how long it took to develop those stories and you'd expect the new numbers to be similar.

The resulting base estimate is then just the sum over the piles of (number of stories in the pile * cost of that pile). For example, 20 small stories at 1 day, 20 medium at 3 days and 10 large at 8 days gives a base estimate of 160 days. This number will give you a general estimate of the overall cost.

You'll probably want to add a contingency, and past experience on other projects will help you guide that. If you've gone through this process before, pull out the numbers from the last release... when you last performed this process and compare the starting estimate with the actual amount of time taken. Use that comparison for the contingency.

The gut feeling is to say, "Hang on, the last time we did this we estimated these 50 stories, but only built these 20 and added these other 30. That means that the starting estimate was nothing like what we actually did". But the point is that if every release has the same kind of people working on it, working for the same business with the same kinds of pressures then this release is likely to be the same as the last. Odds are that THIS release will only consist of 20 of these stories plus 30 other unknown ones... and the effect on the estimate is likely to be similar. Each release is NOT unique, each release is likely to go ahead in exactly the same way as the last one. Use that knowledge, and use the numbers you so painstakingly recorded last time.

Aside: Project managers tend to think that I have a problem with them collecting numbers (as they love to do). They're wrong... I have a problem with numbers being collected and never being used. The estimation process is the place where this is most apparent. Learn from what happened yesterday to inform your estimate for tomorrow.


Once the job's done you should have enough information to allow you to have the planning game. For those not familiar with XP, that is the point at which the customer team (with some advice from the development team) prioritises and orders the stories. It gives you a very good overall view of the direction of the project and a clear goal for the next few weeks of work.

At this point you should have a good idea of cost and scope and have a good degree of confidence in both. Now's the time to tell your boss...

Next up – using the shorter term estimates to inform the longer term one...


A new SOA year has begun - what will the SOA Santa bring?

Clemens Utschig - Thu, 2007-01-04 16:00
Wow, the new SOA year is here - and many cool things are about to arrive. But before looking into the future, let's look back into 2006 - and what happened there.

My first year living in San Francisco and working within a wonderful group - envisioning the future and creating our next-generation products - went by, and I must admit it was far better than I thought it'd be.

First, and most important - we delivered SOA Suite 10.1.3.1, the first major, integrated SOA platform. Not just integrated, but also with two new key components: the Oracle Enterprise Service Bus (ESB) and Rules. Since OOW 2006, when we presented the first publicly available release, our customers have been using it heavily to build their next-generation SOA - and according to many of them, they enjoy doing so :D

Also, we signed an OEM agreement with IDS Scheer to build the Oracle BPA Suite on top of their technology. Using Business Process Architect, for the first time business processes will make it all the way down to being executable, and all the way back to feed real-world data into simulation and continuous refinement.

Another highlight was the overwhelming number of SOA-related sessions at Oracle Open World, and the great feedback we received. It was a blast - for one week, more than 40,000 people turned the city red, turned it Oracle. Elton John played during the big Oracle party, and so on.

Open SOA - www.osoa.org - went live, and with it the big players in the SOA market space joined up to drive next-generation, easy-to-use standards.

So after all what's in for 2007?

While we all work hard on delivering 11g - our next generation SOA infrastructure, based on Service Component Architecture (SCA) and supporting Service Data Objects (SDO) - we expect a patchset for 10.1.3.1 (which will offer a ton of new features, and little fixes here and there).

BPEL 2.0 will see the light of day - after more than two years, and with the last public review finished, some more polishing needs to happen and then we are ready to go. With it, BPEL will gain a number of long-awaited cool features, such as the new variable layout ($inputVariable) and scoped partner links.

Evangelism on SOA will continue to drive the spirit of SOA and help customers adopt it - I really look forward to even more presence around the globe, and to many customers going live on SOA Suite.

Personally, for me it's moving on with my SOA best practices series on OTN, supporting people around the globe on the OTN forums, doing evangelism on SOA and helping create 11g. More than enough.

10G SQL Access Advisor and SQL Tuning Advisor

Vidya Bala - Thu, 2006-12-28 15:45
I was able to take advantage of the holidays to complete a 9i to 10g cross-platform migration. While I was quite happy with the migration process (especially with how Data Pump has made this effort much easier), I was a little disappointed in what SQL Access Advisor and SQL Tuning Advisor had to offer.

There were a bunch of queries that we had been meaning to tune for quite some time, so I thought I could create a tuning task with the 10g SQL Tuning Advisor and see if I could get some valuable recommendations.

The recommendations I got were far from anything of significance (e.g. add an index to a small lookup table).

I couldn't help but wonder whether there is much success with Oracle 10g SQL Access/Tuning Advisor in the industry.
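For anyone wanting to repeat the exercise, this is roughly how a tuning task is driven in 10g - a minimal sketch, with the sql_id and task name below as placeholders (take the sql_id from v$sql for the statement in question):

-- Sketch: create, execute and report a SQL Tuning Advisor task.
declare
  l_task varchar2(64);
begin
  l_task := dbms_sqltune.create_tuning_task(
              sql_id     => 'f9u1xbqmd9gzz',     -- placeholder sql_id
              time_limit => 600,
              task_name  => 'tune_slow_query');
  dbms_sqltune.execute_tuning_task(task_name => 'tune_slow_query');
end;
/

set long 100000 linesize 200
select dbms_sqltune.report_tuning_task('tune_slow_query') from dual;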
Categories: Development

Merry Christmas, Happy New Year, and a Poll

Marcos Campos - Sun, 2006-12-24 04:30
It has been a great year. My daughter was born, as was this blog. I launched the blog at the beginning of the year (January first, to be more precise) and the readership has been great. Amongst the posts, Time Series and Automatic Pivoting were probably the most viewed. I am on vacation in Brazil right now enjoying a family reunion. I have a big family and it is hard to get everyone ...
Categories: BI & Warehousing

ASM and NetApp Filer

Vidya Bala - Thu, 2006-12-21 15:37
Link to ASM and NetApp

I have been spending the last few days looking into what advantages we would have using ASM on an NFS mount as opposed to having the database files directly on NFS. If you're on RAC then ASM is mandatory, but for non-RAC 10g instances on NetApp, ASM is not mandatory.

The biggest benefit I see is volume management features with ASM.

Does this mean I can change volume sizes online? Maybe I/O balancing across different volumes is an added feature. I am walking into a totally new area (for ASM on any kind of direct-attached storage I can certainly see the benefit; on NFS I am not so sure).

Anybody with success stories?
Categories: Development

Cross Platform Migration 9i to 10g

Vidya Bala - Thu, 2006-12-21 15:05
Migration Procedure Implemented.

Cross-platform migration: 9.2.0.6 (SUSE SLES 8) Standard Edition to 10g Release 2 (Solaris 10)

This is a quick overview of a migration procedure I have just finished implementing in a test environment. If you see anything in the procedure that should be added or noted, please feel free to post comments - as always, there has been a lot of mutual learning and help from my blog readers.

Step 1: Server A:
Clone the Production database to the pre-production environment (datafiles, redo log files and control files all on a shared NFS file system).
Database release: 9.2.0.6
SUSE Linux version: SLES 8
Database size: 50.89 GB


Step 2: Server B:
Install 10g Release 2 on a new SLES 9 server. Note: 10g Release 2 is not supported on SLES 8.
Make sure the shared file systems on Server A are mounted on Server B.
Copy the parameter file from Server A to Server B. Make the appropriate path changes to the parameter file on Server B.
Database release: 10g Release 2
SUSE Linux version: SLES 9


Step 3: Upgrade the 9i database to 10g
On Server B, upgrade the 9.2.0.6 database to 10g.

sqlplus /nolog
SQL> connect / as sysdba
SQL> startup upgrade

CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Set the system to spool results to a log file for later verification of success:

SQL> SPOOL upgrade.log

Run catupgrd.sql:

SQL> @catupgrd.sql

Run utlu102s.sql to display the results of the upgrade:

SQL> @utlu102s.sql

Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Shut down and restart the instance to reinitialize the system parameters for normal operation.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql

Verify that all expected packages and classes are valid:

SQL> SELECT count(*) FROM dba_objects WHERE status='INVALID';
SQL> SELECT distinct object_name FROM dba_objects WHERE status='INVALID';

Exit SQL*Plus.

Total Time to Upgrade Database : 38 minutes

Step 4 – Export the upgraded 10g database (using expdp)

Now that we have the database migrated to 10g Release 2 on Server B (SLES 9), we can export the database using 10g Data Pump. We will export only the application-related tablespaces. The tablespaces excluded are listed below:

PERFSTAT
SYSAUX
SYSTEM
UNDOTBS1
TEMP
DRSYS
INDX --- no application related objects in this tablespace
TOOLS
USERS
XDB
UNDOTBS2

Before running the export, the OWM and OLAP options need to be de-installed (if they are not being used) to avoid export errors.

If the Oracle Workspace Manager feature is not used in this database: de-install the Workspace Manager:
SQL> CONNECT / AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/owmuinst.plb

Clean up the AW procedural objects:
SQL> conn / as SYSDBA
SQL> delete from sys.exppkgact$ where package = 'DBMS_AW_EXP';

Afterwards, run the export.

CREATE OR REPLACE DIRECTORY pump_dir AS 'xxxxxxxxxxxxxxxx';

Export only application related tablespaces

$ORACLE_HOME/bin/expdp system/manager tablespaces=\(t1,t2 \) directory=pump_dir dumpfile=pump.dmp logfile=pump.log

The export of the 50+ GB database took about 80 minutes.

Step 5 – Prepare the target environment – Server C with 10g Release 2

Install 10g Release 2 on Solaris 10 Servers (SERVERC).

Installing Oracle Database 10g Products from the Companion CD
The Oracle Database 10g Companion CD contains additional products that you can install. Whether you need to install these products depends on which Oracle Database products or features you plan to use. If you plan to use the following products or features, then you must complete the Oracle Database 10g Products installation from the Companion CD:
· JPublisher
· Oracle JVM
· Oracle interMedia
· Oracle JDBC development drivers
· Oracle SQLJ
· Oracle Database Examples
· Oracle Text supplied knowledge bases
· Oracle Ultra Search
· Oracle HTML DB
· Oracle Workflow server and middle-tier components

On Server C, use DBCA to create database creation scripts. Select the configuration parameters you need for your database as you go through the DBCA wizard.

The scripts will create a standard database with no application-related objects yet. Run the create scripts to create the database.

Tablespaces created (this is assuming none of the additional components were installed)
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS

Make sure the listener is up.
http://ServerC:1158/em is the EM console for the database.

Note the below before proceeding with the em console
Oracle Enterprise Manager 10g Database Control is designed for managing a single database, which can be either a single instance or a cluster database. The following premium functionality contained within this release of Enterprise Manager 10g Database Control is available only with an Oracle license:
Database Diagnostics Pack
Automatic Workload Repository
ADDM (Automated Database Diagnostic Monitor)
Performance Monitoring (Database and Host)
Event Notifications: Notification Methods, Rules and Schedules
Event history/metric history (Database and Host)
Blackouts
Dynamic metric baselines
Memory performance monitoring
Database Tuning Pack
SQL Access Advisor
SQL Tuning Advisor
SQL Tuning Sets
Reorganize Objects
Configuration Management Pack
Database and Host Configuration
Deployments
Patch Database and View Patch Cache
Patch staging
Clone Database
Clone Oracle Home
Search configuration
Compare configuration
Policies

Step 6 – Prepare the target environment – Server C with application-related objects

IMPDP will be used to import the application-related objects into this database.
Before running IMPDP, the target database needs to be prepared with the application tablespaces and application schemas. This is also a great opportunity to reorganize objects if you need to. Scripts to create the application tablespaces and schemas are prepared in advance.
This is the most important step in preparing the target environment.
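As an illustration only (the tablespace, schema, path and size below are placeholders, not the actual application objects), the preparation amounts to something like:

-- Pre-create one application tablespace and one schema on the target before impdp.
create tablespace app_data
  datafile '/u02/oradata/PROD/app_data01.dbf' size 4g
  extent management local
  segment space management auto;

create user app_owner identified by "change_me"
  default tablespace app_data
  temporary tablespace temp
  quota unlimited on app_data;

grant create session, create table, create procedure, create sequence to app_owner;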

Once the target environment is prepared, import the dump file using the following command:
$ORACLE_HOME/bin/impdp system/manager full=y directory=pump_dir1 dumpfile=pump.dmp logfile=pump_import.log

Before opening the database for public connections:
1) Recompile invalid objects (run utlrp.sql)
2) Gather statistics for the entire database (sketched below)
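A minimal sketch of those two post-import steps (defaults shown; adjust sampling and parallelism to your environment):

Rem 1) Recompile invalid objects
SQL> @?/rdbms/admin/utlrp.sql

Rem 2) Gather statistics for the entire database
SQL> exec dbms_stats.gather_database_stats(estimate_percent => dbms_stats.auto_sample_size, cascade => true)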


A regression test will be run at the end of all this to verify application functionality, long datatypes, etc.
Categories: Development

Future Direction - 10g Forms/Reports Developer vs JDeveloper

Vidya Bala - Wed, 2006-12-20 15:15
We are on a new effort to move some legacy rbase programs to Oracle, and we are required to evaluate whether it would be better to go with Oracle 10g Forms/Reports or with JDeveloper. The data is going to reside on Oracle database servers. The concern about going with Oracle Forms/Reports was: will Oracle support it in the future? My instant reaction was that of course Oracle will... we have bigger problems if Forms/Reports goes away, considering that the E-Business Suite uses Forms/Reports technology as well.

So the final question: while migrating new apps, would it be better to use Oracle Forms/Reports or JDeveloper? My 2 cents:
1) If the application is database-centric without too much business logic involved, and if your team has a PL/SQL background as opposed to a Java background, then 10g Forms/Reports may be a better bet.
2) If the team is pretty much a J2EE development team, then JDeveloper may be the route to go.

I am not too worried about Oracle's strategy to support Forms/Reports (I think they will). The above is just my 2 cents; any input from my blog readers would be greatly appreciated.
Categories: Development
