Feed aggregator

Preventing record deletion

Stephen Booth - Sun, 2007-02-11 18:33
This entry is partly an aide-memoire for me, partly to try to get something that has been keeping me awake for the past hour or so out of my brain so I can sleep, and partly in the hope that someone can suggest a way forward. A quick bit of background: until April 06 most of our major systems were looked after by an external Facilities Management company. In April 06 IT was kind of outsourced to

Getting Ready for Windows Vista

Christian Shay - Mon, 2007-02-05 05:24
The Oracle Database on Windows Vista Statement of Direction is now up on OTN and it provides time frames for the planned releases of Oracle Database on Vista.

EDIT (5/4/07): The 32-bit Oracle Database as well as Oracle Database XE are now certified on Windows Vista. The 32-bit Oracle Client is now packaged up in a simple single install so you don't need to patch your way from, nor will you need access to Metalink. Read my recent blog entry for more details. The text in the blog entry below was an unsupported "hack" that you won't need to use anymore. If you do decide to apply the patch instead of downloading the whole client from OTN, please follow the installation instructions in the patch release instead of the instructions below.

Prior to those release dates, many developers who support Windows clients (.NET, ODBC, OLE DB, OO4O, etc.) will want to get a head start and do some testing of their applications. Unfortunately, the Oracle Installer (the one on all the CDs) will not run properly on Vista. But there is a way you can get around this. It's a bit of a convoluted hack, and none of this is officially supported, but it should help you get started testing.

The gist of it is: the installer that is included with the patchset DOES work on Vista. It must be used first to install the Oracle Client software, and then used to upgrade the Client. The upgrade is required because there are a few other Vista-related fixes in the patchset.

The following instructions explain how to do this:

1) Obtain client CD or zip file (from OTN)
2) Download the Oracle patchset for 32-bit Windows from Oracle MetaLink and unzip it. (https://metalink.oracle.com/)
3) Change directory to the Disk1\install directory, open oraparam.ini using notepad (or any editor) and comment out the line "SOURCE=../stage/products.xml" by adding a # to the beginning of this line.
4) Run setup.exe from Disk1 directory, point to the CD location's products.xml file and proceed with install of
5) The pre-requisite check will fail. This is expected. Click in the boxes next to the errors. This will change them to say "User Verified." Then click "Next" to continue.
6) Once it has been installed as above, uncomment the "SOURCE=../stage/products.xml" line in the patchset's Disk1\install\oraparam.ini file
7) If you wish to test "ODP.NET for .NET 2.0 version", you will need to download and install ODAC from OTN at this point, since only "ODP.NET for .NET 1.1" is included in the patch.
8) From the directory, run Disk1\setup.exe and upgrade the install to
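Steps 3 and 6 toggle a single line in the patchset's Disk1\install\oraparam.ini. As a sketch (surrounding file contents omitted), the SOURCE entry looks like this in each state:

```ini
# Step 3: commented out before pointing setup.exe at the CD's products.xml
#SOURCE=../stage/products.xml

# Step 6: restored before applying the patchset itself
SOURCE=../stage/products.xml
```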

That's it! Er, well of course that's just the start. Now you proceed to test your application!

I can sense a veritable storm of blogging coming up in my near future, so stay tuned!

Oracle Scene - And finally..

Neil Jarvis - Wed, 2007-01-31 03:38
I've become the technical deputy editor of Oracle Scene and we will be holding the first editorial meeting on 5th Feb 2007. If you want to contribute any articles/comments please let me know.

In the meantime I am going to include my first 'And Finally...' article here for your 'friendly' comments.

For my first ‘And finally….’ I would like to talk about the UKOUG Special Interest Groups, but first the introduction. Many of you have already met me as I have been involved in the user group for over 6 years now, but for those who haven’t, my name is Neil Jarvis and I’ve been working with the Oracle RDBMS for over 18 years, first as a programmer and then, from 1998, as a Database Administrator. In 1999 I attended my first user group conference in Birmingham and in 2000 became a deputy chair of the UNIX SIG under the auspices of David Kurtz. Since 2000 I have presented a few technical papers at the UNIX and DBMS SIGs, helped arrange the agendas and chaired some of the UNIX SIGs. I am also on the committee for the forthcoming Northern SIG which will be held in April, somewhere north of Watford (watch this space for updates).

In the last 10 years or so I have been involved with/employed by many different organisations, ranging from financial to local authority and retail, all of which held UKOUG membership. In over half of these cases I was surprised to see they were not taking full advantage of their membership. Membership of UKOUG entitles you to access to over 120 UKOUG events, all of which are free to the first person and carry a nominal charge for subsequent attendees. You also have the same access to the annual conference, currently held in Birmingham. This is a four-day event with at least 5 streams running all day. Membership also gives you access to the online resource library which holds, amongst other things, most of the presentations not just from the SIGs but also from the conference. The office also sends out a fortnightly e-bulletin with the latest news and reminders of forthcoming events. If this is not enough, you also receive 30% off Oracle books and this magazine, containing articles covering business as well as technical and non-technical areas.

With all these benefits I would like to focus on the Special Interest Groups. The agendas for the meetings have to cater for the views of a large audience, and as such you may feel you couldn’t justify taking the time out of the office to go, as some of the material may not be relevant. In my experience the opposite is true. Whilst you may not feel a presentation on ASM is relevant for your company right now, the technology will eventually catch up with you, and at that stage your company will have to pay for a course and, ironically, you’ll have to take the time out of the office anyway. But if you attended your free SIGs, that knowledge will be there, in the back of your mind, ready to be accessed. So the next time a SIG comes around don’t ask “will this SIG be relevant for my company now?” but rather “are the topics relevant for my company in the future?” Remember, your committee will be asking themselves the same questions about the appropriateness of the subjects.

And finally, I would like to personally thank David Kurtz for all the time and effort he has put into the UNIX SIG over the past 6 years. Without him the UNIX SIG wouldn’t be as successful as it has been. I wish him all the best in his directorship of UKOUG and hope the present committee of the UNIX SIG continues the good work David started.

Deployment Options with XML Publisher

Vidya Bala - Mon, 2007-01-29 16:25
This post will discuss the different ways XML Publisher can be used along with Microsoft Word Template builder to generate Reports.

XML Publisher (also called BI Publisher) has the following Deployment options

1) Oracle Applications (will not be discussed in this post)
2) XML Publisher Desktop Edition
Installs the XML Publisher Template Builder in Microsoft Word, which helps you build templates for your Reports. The templates can be stored as RTF files. The following are the source data options when using Template Builder in Word:
a) XML File
b) SQL Query (needs connection information for the source database)
c) XML Schema
d) XML generated by Siebel Analytics Answers (I have not been able to get this to work; it may be something that will be available and more easily integrated in the next few releases of Siebel Analytics)
3) XML Publisher Enterprise Edition
a) Provides a web-based console that can be used to publish multiple reports
b) Enables you to define your reports and separate the data from the layout of the reports
c) Can run on any J2EE-compliant Application Server

XML Publisher Desktop Edition:

If you have installed the Desktop Edition

Template Builder options will be available from the MS Word menu.
Template Builder – Data – Load XML Data (XML File), XML Schema, Report Wizard (lets you give database connection information and the SQL to extract the data).
Below is a quick example of how the Report Wizard can be used (connecting to the hr schema to get a list of departments).

Give the database connection information and the SQL query for the data that needs to be retrieved. We will choose the Default Template Layout in this example.
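The query behind such a report might look like the following (my sketch against the standard HR sample schema; the original post does not show the SQL):

```sql
-- Hypothetical Report Wizard query: list of departments with their cities
SELECT d.department_id,
       d.department_name,
       l.city
FROM   departments d
       JOIN locations l ON l.location_id = d.location_id
ORDER  BY d.department_name;
```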
Preview the Report and then save the RTF file (in this example we save the RTF file as hr_departments.rtf).

XML Publisher Enterprise Edition

As mentioned, XML Publisher Enterprise Edition can run on any J2EE-compliant Application Server.

The Admin and Reports directories are available under

The following files have the port numbers used by the application.
HTTP Port 15101 install_dir/default-web-site.xml
RMI Port 15111 install_dir/rmi.xml
JMS Port 15121 install_dir/jms.xml

Default URL to access XML Publisher Application http://host:15101/xmlpserver (default username/pwd admin/admin)

Create a new folder and a new Report in the corresponding folder.
Edit the Report to define the following properties:
i) Datasource for the Report (new datasources can be created in the Admin window)
ii) Data Model: define the SQL query

iii) New List of Values: if the Report uses LOVs

iv) Parameters: if any parameters are needed for the Report

v) Layouts: create a new template called hr_departments
Upload hr_departments.rtf and tie it to the hr_departments template. View the results:
You can see that the template is chosen by default, along with the different output formats available. The above is a very simple illustration of how XML Publisher lets your users design their own Reports (and manage changes to report design templates) while IT focuses on the data needed for the Reports and other important tasks.

Categories: Development

An ADF Faces and XML Publisher Success Story

Brenden Anstey - Sun, 2007-01-28 23:43
Deciding on an output mechanism for the application
After months of developing an application using ADF Faces and ADF Business components it came time to solve the last of my integration and design issues: Integrating a reporting/output mechanism. Early in the project I considered the option of using Oracle Reports or doing (redoing) printable versions of the entire entry form. With over 40 sections to the online application form, neither of these options seemed desirable or suitable.

At a recent conference XML Publisher generated quite a lot of interest, so I thought I would download it and see what the hype was about. It didn't take long to work out that XML Publisher was going to be the tool of choice for my application. The original Word documents that the application was built from could be used, with the data plugged straight in where it needed to go. One of the big gains was not having to worry about formatting; this alone saved a huge amount of time.

Integrating XML Publisher with JDeveloper
Deepak Vohra has produced an article on OTN called Integrating Oracle XML Publisher with Oracle JDeveloper 10g, which provided examples of using the XML Publisher APIs in a simple Java class. To start off with, I created a small Java project based on one class which I executed from its main method. This allowed me to get all the class paths and libraries sorted out and gave me a good understanding of the XML Publisher Java APIs. I was able to produce and merge PDF documents, including page numbering, in a very short space of time. I had to set enough memory for the XML Publisher APIs to overcome an out-of-memory error (add "-Xmx256m" to your project run properties). The next step was to integrate this into a sandbox J2EE project and start writing to the browser response stream rather than to disk. This presented a few challenges immediately, mostly around the location of the Data Template and XSL files which defined the output to be produced. In fact, any interaction with the file system would prove to be a problem, due to the relative execution path changing to the location of the OC4J rather than that of the class that is running. The other key problems were getting a pooled connection to the database and coding any file system operations to run in both the embedded OC4J (XP) and the target 10.1.3 middle tier (Linux).

Key design problems and their solutions
Getting a pooled connection for use by XML Publisher was solved by exposing a client method from the Application Module which returned a database connection from the connection pool. Getting the connection is described here in Steve Muench's blog.

The next design problem was where to actually call the XML Publisher APIs to do all of the processing. The answer, in short, was a managed bean. The bean references the binding layer to get its parameters and is called by a JSP, which sets the response stream for the XML Publisher PDFDocMerger to write its PDF output to.

As for the interaction with the file system, I used a system properties table within the database which contained the various directories for the XML Data Templates and XSL style sheets for the output. I coded an AM method which used System.getProperties().getProperty("os.name"); to work out what OS the OC4J was running under and get the property for that operating system, e.g. xmlp.windows.template.dir would get the full path on Windows for the XML Publisher template directory. (Not quite write once, run anywhere, but close enough.)

XML Publisher has been a huge win for this particular project and has successfully plugged a huge gap in the output mechanism for the application. The APIs are comprehensive, and every time I looked to see if it could do something, the answer seemed to be yes, and more! XML Publisher will definitely be my preferred reporting tool for future ADF-based projects.

REPOST: Pivot and Crosstab Queries

Robert Vollman - Thu, 2007-01-25 15:00
Here is another advanced concept that will come in useful when solving Oracle problems. Imagine you're trying to create a result set where the rows need to be columns, or vice versa. In essence, you need to "pivot" rows into columns, or vice versa. That is a very common requirement, and this is where you need to look at a pivot (or crosstab) query to get the job done. As always, when you want
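As a sketch of the pattern the post introduces (mine, not from the original): the classic Oracle crosstab combines GROUP BY with conditional aggregation, here counting employees per job across departments in the familiar SCOTT.EMP demo table:

```sql
-- Each CASE expression pivots one DEPTNO value into its own output column
SELECT job,
       COUNT(CASE WHEN deptno = 10 THEN 1 END) AS dept_10,
       COUNT(CASE WHEN deptno = 20 THEN 1 END) AS dept_20,
       COUNT(CASE WHEN deptno = 30 THEN 1 END) AS dept_30
FROM   emp
GROUP  BY job
ORDER  BY job;
```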

OracleXml putxml limitations

Vidya Bala - Wed, 2007-01-24 10:29
OracleXml putxml limitations:

Example 1: XML file (test.xml):

<ROW num="2">
  <ID>15</ID>
</ROW>

(Using the Java API for the XDK) the command below:
java OracleXML putXML -user "vidya/vidya" -ignorecase -filename "test.xml" "emp"

will load one row into table emp.

Example 2: XML file with namespaces (test1.xml):

<ns:ID>2</ns:ID>

(Using the Java API for the XDK) the command below:
java OracleXML putXML -user "vidya/vidya" -ignorecase -filename "test1.xml" "emp"

If "OracleXML" cannot identify the ROWTAG, the above command can be modified as below:

java OracleXML putXML -user "vidya/vidya" -ignorecase -rowtag "ITEM" -filename "test1.xml" "emp"

Note that the above will still error, as OracleXML does not seem to work on a file with namespaces. Probably the only way to get around this is by applying a stylesheet.
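One such stylesheet (my sketch; the original post does not show one) is a namespace-stripping identity transform: running the document through it before putXML removes the prefixes that trip OracleXML up.

```xml
<?xml version="1.0"?>
<!-- Sketch: copy every element and attribute using only its local name,
     dropping namespace prefixes and declarations along the way -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="*">
    <xsl:element name="{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>
  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="."/>
    </xsl:attribute>
  </xsl:template>
</xsl:stylesheet>
```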
Categories: Development

What hardware should I buy?

Edward Whalen - Wed, 2007-01-24 10:28

I recently had one of my blog readers ask me for some advice on some new hardware that he was going purchase for running Oracle on Windows. It is difficult to give specific advice on what to purchase, but I can provide a few general guidelines.

  1. Get something that is expandable. If you don’t need 4 CPUs now you can get a system capable of supporting 4 CPUs, but only purchase one or two. Make sure that you can add sufficient memory as necessary. Start with 2 or 4 GB but make sure that there are free slots in case you need to add more.
  2. If you will be running Oracle 10g, absolutely go 64-bit. Any recent Xeon or Opteron processor supports 64-bit Windows. The 64-bit version of Windows 2003 works great and is priced similar to the 32-bit version.
  3. Get a name brand. HP, IBM, Dell, etc. Get something that is supported by the manufacturer.
  4. If possible, separate the application tier from the database tier.
  5. Get sufficient disk drives. I'm not saying that you need to start with 8, but you need enough so that IO performance is not a problem. How many do you need? I can't tell you without knowing more about the database and application, etc.
  6. Absolutely use a RAID controller and disk mirroring (RAID 1). If you lose your data, you are out of business.
  7. In larger systems I recommend separating the log drives and data drives, since the loss of one of the two is a recoverable failure. The loss of both is catastrophic.
  8. If you don't already have one, get something to back up your database with. This can be tape, DVD, etc.
  9. Get trained, get some books, etc. I'm trying to convince Oracle Press to let me do an Oracle on Windows book.

I hope that this is helpful. If you have any comments or a suggestion for a future topic, drop me an email at ewhalen@perftuning.com.

Oracle Text 9i Bug searching XML Data

Vidya Bala - Wed, 2007-01-24 10:25
In my last post I discussed a 9i ( bug when using section_group_type auto section group. This group type automatically creates a zone section for each start-tag/end-tag pair in an XML document. The section names derived from XML tags are case-sensitive, as in XML.
Searches with auto_section_group work in but not for attributes within a tag. For example, in

<Book title="A" author="B">

the attributes title and author cannot be searched using the auto_section_group section type in . The bug has been fixed in
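As a sketch of the setup being described (table and index names are mine; the original post shows no code), an automatic section group and the two kinds of search look like this:

```sql
-- Create an automatic section group: Oracle Text builds a zone section
-- for each start-tag/end-tag pair it finds in the indexed XML
BEGIN
  ctx_ddl.create_section_group('book_sg', 'AUTO_SECTION_GROUP');
END;
/

CREATE INDEX books_idx ON books (doc)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('section group book_sg');

-- Tag search: works
SELECT id FROM books WHERE CONTAINS(doc, 'A WITHIN Book') > 0;

-- Attribute search (tag@attribute form): the case hit by the bug
SELECT id FROM books WHERE CONTAINS(doc, 'A WITHIN Book@title') > 0;
```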
Categories: Development

Oracle SOA Suite review - Oracle sows the seeds for SOA

Clemens Utschig - Wed, 2007-01-24 02:47
InfoWorld published a review of our SOA Suite here today.

Here is one quote

My experience with the Oracle SOA Suite revealed a top-notch toolset well-culled from a variety of sources without much sacrifice to aptitude or usability. When it comes to message routing and services orchestration, Oracle SOA Suite meets or exceeds most needs for governance, security, insight, and optimization at a price that’s hard to beat.
James R. Borck is senior contributing editor of the InfoWorld Test Center.
-- http://www.infoworld.com/article/07/01/22/04TCoraclesoa_2.html

Enjoy reading...

OOP 2007 Keynote - Pictures

Clemens Utschig - Wed, 2007-01-24 01:37
As already announced earlier, I gave a keynote at OOP this year (on "Managing successful SOA projects") here in Munich. Aside from all the really good speakers and a very geeky crowd - snow finally caught even me :D

What a blast: Monday evening, and a full room of interested people - something you don't see too often at conferences where the exhibition starts on Tuesday.

Every time I speak at a conference on this topic I learn something new, from questions after a session as well as from how the crowd reacts. This time the biggest take-away from OOP: the waterfall is dead, at least there :D

Here are 2 pictures from my keynote.


Adrian Billington - Mon, 2007-01-22 02:00
A function to simplify string building and debugging by replacing multiple placeholders from a collection of inputs. January 2007

Presenting at ODTUG Kaleidoscope 2007 in Orlando

Clemens Utschig - Sat, 2007-01-20 16:15
As fellow blogger Wilfred van der Deijl wrote on his blog, I got an email too - an invitation to speak there. Last year I got the slots from my manager (thx Dave :D); this year they are my own - awesome.

I'll present on 3 topics, two for beginners and one intermediate:
  • Managing successful SOA projects, a view beyond agile science
    The session will give an inside view into the methodologies applied to govern successful SOA projects. We will discover some of the common challenges by examining the different phases of a project and by splitting the project into different categories - each representing its own set of issues - and how these can be successfully mastered by applying the right tools and the right skills. An insider's view - to make your SOA projects successful.

  • Oracle 11g — Oracle's Next Generation SOA Infrastructure
    which will give a sneak preview of 11g - the fully integrated, SCA-based SOA platform

  • Advanced Concepts of the BPEL Language
    which is one of the sessions from last year that was very well attended and that I also presented at Oracle Open World last year
Last year the conference was really well attended and Washington DC was great - although insanely hot and humid, which made every walk outside a little bit of a sauna bath trip.

I hope to see you there - this conference is definitely worth the trip, and hey, Daytona is not that bad :D

Heading for Europe - to support one of our key SOA Suite customers

Clemens Utschig - Sat, 2007-01-20 15:47
After a bad abdominal virus this week - one that seems to be plaguing the Bay Area, and that got me a sleepless night in the ER at UCSF Med - I am somewhat well again and on my way to Europe.

I got the chance to give a keynote at OOP 2007 in Munich (Germany), speaking about managing successful SOA projects - and the importance of the human/organizational side in SOA projects.

If you happen to be at the OOP or around Munich, I think the entry to the demo pods / show floor is free - so come by and visit me - I'll be there to answer questions around SOA in general, demo the SOA Suite and meet customers.

Also during the week I'll be visiting one of our customers in Germany to make sure their large, full-fledged SOA implementation hits the target.
With this in mind, and with all the feedback we receive from the field in our OTN forums, I am very happy to see the industry and the market adopting our SOA Suite, the latest edition of, an integrated platform spanning Governance, Development and Security, BPEL, and ESB.

Logical Reads and Orange Trees

Eric S. Emrick - Thu, 2007-01-18 23:36
My previous post was a riddle aimed to challenge us to really think about logical I/O (session logical reads). Usually we think of I/O in terms of OS block(s), memory pages, Oracle blocks, Oracle buffer cache buffers, etc. In Oracle, a logical I/O is neither a measure of the number of buffers visited, nor the number of distinct buffers visited. We could of course craft scenarios yielding these results, but these would be contrived special cases - like an episode of Law and Order only better. Instead, logical I/O is the number of buffer visits required to satisfy your SQL statement. There is clearly a distinction between the number of buffers visited and the number of buffer visits. The distinction lies in the target of the operation being measured: the visits not the buffers. As evidenced in the previous post we can issue a full table scan and perform far more logical I/O operations than there are blocks in the table that precede the high water mark. In this case I was visiting each buffer more than one time gathering up ARRAYSIZE rows per visit.

If I had to gather up 313 oranges from an orchard using a basket that could only hold 25 oranges, then it would take me at least 13 visits to one or more trees to complete the task. Don't count the trees. Count the visits.
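In SQL*Plus terms (a sketch of mine, not from the post; the statistic values are illustrative and depend on block size and row width):

```sql
set autotrace traceonly statistics

-- Small fetch size: many visits to each buffer, so "consistent gets" is high
set arraysize 15
select * from mytable;

-- Large fetch size: fewer visits per buffer, so "consistent gets" falls
-- toward the number of blocks below the high water mark
set arraysize 500
select * from mytable;
```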

Oracle Riddles: What's Missing From This Code?

Eric S. Emrick - Mon, 2007-01-15 18:56
The SQL script below has one line intentionally omitted. The missing statement had a material impact on the performance of the targeted query. I have put diagnostic bookends around the targeted query to show that no DML or DDL has been issued to alter the result. In short, the script inserts 32K rows into a test table. I issue a query requiring a full table scan, run a single statement, and rerun the same query - also a full table scan. While the second query returns the same number of rows, it performs far fewer logical I/O operations to achieve the same result set. Review the output from the script. Can you fill in the missing statement? Fictitious bonus points will be awarded to the Oracle scholar who can deduce the precise statement :)

/* Script blog.sql

spool blog.out
set feed on echo on;
select * from v$version;
drop table mytable;
create table mytable (col1 number) tablespace users;
insert into mytable values (3);
commit;
begin
for i in 1..15 loop
insert into mytable select * from mytable;
commit;
end loop;
end;
/
analyze table mytable compute statistics;
select count(*) from mytable;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
set autot traceonly;
select * from mytable;
set autot off;
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
... missing statement
REM Bookends to show no DML or DDL statement has been executed.
select statistic#, value from v$mystat where statistic# in (4,134);
set autot traceonly;
select * from mytable;
set autot off;
select blocks from dba_tables where table_name = 'MYTABLE';
select blocks from dba_segments where segment_name = 'MYTABLE';
select index_name from user_indexes where table_name = 'MYTABLE';
select count(*) from mytable;
spool off;

End Script blog.sql */

/* Output

oracle@eemrick:SQL> select * from v$version;
Oracle Database 10g Enterprise Edition Release - 64bit
PL/SQL Release - Production
CORE Production
TNS for Solaris: Version - Production
NLSRTL Version - Production
5 rows selected.
oracle@eemrick:SQL> drop table mytable;
Table dropped.
oracle@eemrick:SQL> create table mytable (col1 number) tablespace users;
Table created.
oracle@eemrick:SQL> insert into mytable values (3);
1 row created.
oracle@eemrick:SQL> commit;
Commit complete.
oracle@eemrick:SQL> begin
2 for i in 1..15 loop
3 insert into mytable select * from mytable;
4 commit;
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
oracle@eemrick:SQL> analyze table mytable compute statistics;
Table analyzed.
oracle@eemrick:SQL> select count(*) from mytable;
1 row selected.
oracle@eemrick:SQL> select blocks from dba_tables where table_name = 'MYTABLE';
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name = 'MYTABLE';
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name = 'MYTABLE';
no rows selected
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
Plan hash value: 1229213413
Id Operation Name Rows Bytes Cost (%CPU) Time

0 SELECT STATEMENT 32768 65536 26 (4) 00:00:01

1 TABLE ACCESS FULL MYTABLE 32768 65536 26 (4) 00:00:01


1 recursive calls
0 db block gets
2248 consistent gets
0 physical reads
0 redo size
668925 bytes sent via SQL*Net to client
24492 bytes received via SQL*Net from client
2186 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC#      VALUE
---------- ----------
4 18 <-- Statistic #4 is user commits

134 461920 <-- Statistic #134 is redo size
2 rows selected.
oracle@eemrick:SQL> ... missing echo of statement
oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been executed.
oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#
in (4,134);
STATISTIC#      VALUE
---------- ----------
4 18
134 461920
2 rows selected.
oracle@eemrick:SQL> set autot traceonly;
oracle@eemrick:SQL> select * from mytable;
32768 rows selected.

Execution Plan
Plan hash value: 1229213413
Id Operation Name Rows Bytes Cost (%CPU) Time

0 SELECT STATEMENT 32768 65536 26 (4) 00:00:01

1 TABLE ACCESS FULL MYTABLE 32768 65536 26 (4) 00:00:01


0 recursive calls
0 db block gets
173 consistent gets
0 physical reads
0 redo size
282975 bytes sent via SQL*Net to client
1667 bytes received via SQL*Net from client
111 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
32768 rows processed
oracle@eemrick:SQL> set autot off;
oracle@eemrick:SQL> select blocks from dba_tables where table_name = 'MYTABLE';
1 row selected.
oracle@eemrick:SQL> select blocks from dba_segments where segment_name = 'MYTABLE';
1 row selected.
oracle@eemrick:SQL> select index_name from user_indexes where table_name = 'MYTABLE';
no rows selected
oracle@eemrick:SQL> select count(*) from mytable;
1 row selected.
oracle@eemrick:SQL> spool off;

End Output */

Clue: The missing statement is not "alter system set do_less_work = true;"

Second Life client running on Solaris x64 - contd

Siva Doe - Thu, 2007-01-11 20:38

As promised, here is the update.

For Mads and whoever is interested in building the Second Life client on Solaris (x64), this is what I did.
Please do remember, these steps just get the client to build on Solaris. I haven't completely run the client yet (nothing beyond the login screen). So, buyer beware.
Basically, I followed the Linux instructions in Second Life Twiki page.
Downloaded and built all the libraries mentioned and copied the libraries and headers in the directory under, 'i686-sunos5', instead of 'i686-linux'
Modified SConstruct  (for scons building).

  • Look for 'linux' and introduce the code for 'sunos5' (replace -DLL_LINUX with -DLL_SOLARIS)
  • Remove the 'db-4.2' entry under libs line.
  • Replace 'yacc' with 'bison -y'; 'lex' with 'flex'; 'g++-3.4' with 'g++' (under /usr/sfw/bin); 'strip' with 'gstrip'

Whichever subdirectory contains 'files.linux.lst', make a copy of it called 'files.sunos5.lst'.
Then comes the code changes. Basically, search for files containing 'LL_LINUX' and add "|| LL_SOLARIS" or "&& ! LL_SOLARIS" as appropriate.
In llcommon/llpreprocessor.h:38, I added  "|| (defined(LL_SOLARIS) && !defined(__sparc))" to the line to set the ENDIAN correctly.
In llcommon/llsys.cpp, I added code to use the output of 'psrinfo -v' instead of reading from '/proc/cpuinfo' for SOLARIS. Similarly, I used 'getpagesize() * sysconf(_SC_PHYS_PAGES)' to get the "Physical kb". I know there are better ways, but I just wanted to get the build completed.
In llmath/llmath.h, I was running into some problems regarding 'isfinite'. I replaced it with this:
#define llfinite(val) (val <= std::numeric_limits<double>::max())
The other significant work is in 'llvfs/llvfs.cpp' and 'newview/viewer.cpp': I replaced the code for 'flock' with the appropriate 'fcntl' code.
In the files 'newview/lldrawpoolsky.h' and 'newview/llvosky.cpp', replace the variable 'sun' with 'Sun'; 'sun' is already defined on SunOS, of course.
In 'newview/viewer.cpp', I rewrote 'do_basic_glibc_backtrace()' to use 'printstacktrace()' instead.
Well, you can follow the above instructions, or send me a mail, I will send the diff output. :-)

For runtime, you will have to set your LD_LIBRARY_PATH as mentioned in the Twiki page.
That's all I have for now. I will update later if there is anything new.


PS: It is indeed a pity that I can't log in to a server named after me ;-) (userserver.siva.lindenlab.com)

Increasing the Longevity of Your CPU

Eric S. Emrick - Thu, 2007-01-11 18:42

One of my current assignments is to evaluate the potential to increase CPU headroom on a server running a large OLTP Oracle database. Of course, any project such as this is typically motivated by the desire to save money by foregoing a seemingly imminent hardware upgrade. More CPU headroom for your Oracle database server can be achieved by, among other things, the following approaches:

1. Add more same-speed CPUs to your existing server.
2. Replace your existing CPUs with faster CPUs.
3. Replace your existing CPUs with a greater number of faster CPUs.
4. Commission a new server platform with more same-speed and/or faster CPUs.
5. Commission a new server platform with a greater number of slower CPUs.
6. Chronologically distribute the load on the system to avoid spikes in CPU.
7. Reduce the work required of the system to satisfy the business.

More often than not, I suspect approaches 1-5 are chosen. I am of the opinion that 1) and 3) are more predictable when trying to evaluate the expected CPU headroom yield. Propositions 2), 4) and 5) can be a little less predictable. For example, if I double the speed of my current CPUs, will I yield the upgraded CPU cycles as headroom? That is, if I am running 10x500MHz and upgrade to 10x1GHz, will I now have the additional 5GHz as headroom? It has been my experience that upgrades such as these do not produce such predictable results, especially if your current box approximates 100% utilization. Certainly, moving to a new server with a greater number of same-speed and/or faster CPUs is a tricky proposition. New servers need to be tested against Production volume with great rigor. While at face value 500MHz would appear to be universally “portable” to any server, there are many other factors that can influence your CPU horsepower: memory architecture, amount of processor cache, processor crosstalk, etc. Options 1-5 can all be very costly and in some cases yield undesirable and unpredictable results.

If you have the luxury of distributing the load on your system to avoid spikes in CPU then that is a great first option. It could buy you more time to evaluate a longer-term solution. For example, shifting any batch jobs to off-peak OLTP hours might give you immediate relief.

What if we can simply “do less” to satisfy the needs of the business? This concept is not new to most Database Administrators and sounds rather cliché. After all, aren’t we brow-beaten by the dozens of books and countless articles that speak to database and SQL optimization? The “do less” principle is very sound, but it can be intractable. Reducing the work required of an application often requires management support and can run into political obstacles at every turn. Getting Application Developers and Database Administrators to work in lockstep can require a significant effort. If Management, Developers and Database Administrators buy into a synergistic endeavor, the benefits can be amazing – and can save the company a large sum of money.

If you are lucky enough to be working on a project where the common goal of all parties is to reduce the CPU load on your system then I have learned a few things that I hope can help you.

Identify the Targets for Optimization

Identify those SQL statements that contribute the most to the CPU load on your database server. These statements usually relate to those that produce the most logical I/O on your database. Caution needs to be taken when trying to identify these statements. You shouldn’t focus solely on those statements that have the highest logical I/O (LIO) to execution ratio. Often you will find statements that are well optimized but are executed with extremely high frequency. Look for the aggregate LIO footprint of a SQL statement. Without Statspack or AWR this analysis might be very difficult. However, if you collect this diagnostic data you can use the LEAD analytical function to craft a nice SQL statement to identify the top CPU consuming statements on your system (join stats$sql_summary and stats$snapshot).
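If you do collect Statspack snapshots, a query along these lines computes the per-interval deltas. This is a sketch only; the column names are assumed from a typical Statspack schema and may need adjusting to your installation:

```sql
-- Sketch: per-snapshot deltas of logical I/O and executions per statement.
SELECT hash_value, snap_time,
       next_gets  - buffer_gets AS lio_delta,
       next_execs - executions  AS exec_delta
FROM (
  SELECT s.hash_value, sn.snap_time, s.buffer_gets, s.executions,
         LEAD(s.buffer_gets) OVER (PARTITION BY s.hash_value
                                   ORDER BY sn.snap_id) AS next_gets,
         LEAD(s.executions)  OVER (PARTITION BY s.hash_value
                                   ORDER BY sn.snap_id) AS next_execs
  FROM   stats$sql_summary s, stats$snapshot sn
  WHERE  sn.snap_id         = s.snap_id
  AND    sn.dbid            = s.dbid
  AND    sn.instance_number = s.instance_number
)
WHERE next_gets IS NOT NULL
ORDER BY lio_delta DESC;
```

Ordering the deltas by lio_delta surfaces the aggregate LIO footprint per interval rather than the per-execution cost, which is exactly what you want for this analysis.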

Don’t limit your SQL statement identification to just those statements flagged by your analysis as top CPU consumers. Go another step and identify the most frequently executed statements. Some of the most frequently executed statements are the most optimized on your system. These statements, if executed by many programs concurrently, can affect concurrency and thus CPU load. One approach I took recently identified the top 20 CPU consuming statements during a 12-hour window of each weekday. I then ran the same analysis against the most frequently executed statements on the system. The results yielded only 31 distinct statements, as 9 were on both lists. The amazing thing is that, on average, these 31 statements contributed 58% of all logical reads on the system and 59% of all executions. Keep in mind that there were over 20 thousand distinct statements cached in the Shared Pool. It is rather remarkable that such a small subset of the application footprint contributed so greatly to the aggregate load.

Ask The Right Questions

The identification phase is crucial as you want to optimize that which will yield the greatest benefit. Subsequent to the identification phase the Database Administrators and Developers can sit and discuss approaches to reduce the load incurred by these SQL statements. Here are some of the key points I have taken away during such collaborative efforts.

1. Is there a better execution plan for the statement? Optimization is often achieved by rewriting the query to get at a better execution plan. While I don’t like hinting code, hints can relieve pressure in a pinch.

2. Does the statement need to be executed? If you see SQL statements that seldom or never return rows (rows/exec approaches 0), there is a possibility they can be eliminated from your application.

3. Does the statement need to be executed so frequently? You might be surprised that Developers often have other application-side caching techniques that can dramatically reduce the frequency of a statement’s execution against the database. Or, the application might simply call the statement needlessly. It doesn’t hurt to ask!

4. Are the requirements of the business immutable? Sometimes you can work an optimization by simply redefining what is required. This is not the tail wagging the dog here. It is possible that the business would be completely happy with a proposed optimization. For example, can the query return just the first 100 rows found instead of all rows?
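On the "first 100 rows" example, a simple ROWNUM cap does the trick. The table and column names here are purely illustrative:

```sql
-- Return only the first 100 rows of the ordered result.
SELECT *
FROM  (SELECT c.cust_name, c.region
       FROM   customers c
       WHERE  c.region = 'WEST'
       ORDER BY c.cust_name)
WHERE ROWNUM <= 100;
```

The subquery is needed so the ROWNUM filter is applied after the ORDER BY, not before it.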

5. Do the rows returned to the application need to be sorted? Highly efficient SQL statements can easily have their CPU profile doubled by sorting the output.

6. Are all columns being projected by a query needed? If your application retrieves the entire row and it only needed a very small subset of the attributes it is possible you could satisfy the query using index access alone.
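A sketch of that idea, with hypothetical object names: if the statement projects only columns present in an index, Oracle can answer it from the index alone and never visit the table.

```sql
-- Covering index: both the filter column and the projected column live in
-- the index, so the plan can scan the index with no table access at all.
CREATE INDEX orders_cust_status_ix ON orders (cust_id, status);

SELECT status
FROM   orders
WHERE  cust_id = :b1;
```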

7. Is the most suitable SQL statement being executed to meet the retrieval requirements of the application? Suitability is rather vague but could apply to: the number of rows fetched, any misplaced aggregation, insufficient WHERE clause conditions etc.

8. Are tables being joined needlessly? I have encountered statements that, as the Developers determined, join a table and project some of its attributes without ever using the retrieved data. The inclusion of another table in such a manner can dramatically increase the logical I/O required. This is extremely difficult for a DBA to discern without intimate application code knowledge.

9. How well are your indexes clustered with your table(s)? Sometimes data reorganization techniques can greatly reduce the logical I/O required of a SQL statement. Sometimes IOTs prove to be very feasible solutions to poor performing queries.

10. Can I add a better index or compress/rebuild an existing index to reduce logical I/O? Better indexing and/or index compression could take a query that required 10 logical I/O operations down to 5 or 6. This might feel like a trivial optimization. But, if this statement is executed 300 times each second that could save your system 1,500 logical I/Os per second. Never discount the benefit of a 50% reduction of an already seemingly optimized statement.
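For instance (index name hypothetical), compressing the repetitive leading column of a concatenated index packs more keys into each leaf block:

```sql
-- Compress the first (low-cardinality) column of the index key.
ALTER INDEX orders_cust_dt_ix REBUILD COMPRESS 1;

-- Compare leaf_blocks before and after the rebuild:
SELECT index_name, leaf_blocks
FROM   user_indexes
WHERE  index_name = 'ORDERS_CUST_DT_IX';
```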

11. Can I reorganize a table to reduce logical I/O?

I suspect most of us have read that 80% of optimization is application centric (I tend to feel that the percentage is higher). Usually the implication is that the SQL being generated and sent to the database is 80% of the target for optimization. More specifically, optimization requires the tuning of SQL 80% of the time. However, don’t limit your efforts to optimize your application to “tuning the SQL.” Sometimes a portion of your optimization will include “tuning the algorithms” used by the application. Needless execution and improper execution of SQL statements can be equally destructive. Hardware resources, in particular CPUs, can be very expensive to purchase and license for production Oracle databases. It is well worth the effort to at least investigate the possibilities of increasing CPU headroom by decreasing CPU utilization.

Update: An astute reader suggested I mention Cary Millsap's Optimizing Oracle Performance with regard to this topic. I highly recommend reading this book, as it weighs in heavily on Oracle optimization and Method R. Trust me: if you have optimization on the brain, don't miss this read.

9i bug using Oracle Text to search XML data

Vidya Bala - Thu, 2007-01-11 13:05
Using Oracle Text to Search XML Data: XML Data inserted in DATA column in TEST table

Create table test (id integer, DATA CLOB);

<Department name="CS">
  <Employee>Vidya Bala</Employee>
</Department>


Create an auto section group

exec ctx_ddl.create_section_group('myautosectiongroup', 'AUTO_SECTION_GROUP');

create index test_index on test(DATA)
indextype is ctxsys.context
parameters ('SECTION GROUP myautosectiongroup');

SELECT id FROM test WHERE CONTAINS(DATA, 'Vidya WITHIN Employee') > 0;
1 row returned

SELECT id FROM test WHERE CONTAINS(DATA, 'CS WITHIN Department@name') > 0;

0 rows returned (10g returns 1 row; 9i returns no rows; Support is working on a bug fix for the issue)
Categories: Development

Business Integration Journal changes its name - and the last BIJ contains my new article

Clemens Utschig - Wed, 2007-01-10 15:06
"Business Integration Journal (BIJ) is changing to Business Transformation & Innovation (BTIJ) to better reflect the current editorial content being published and new topics readers want covered.

Targeted at the intersection of business and IT, BTIJ carries on the “how-to” and “what-works” tradition of Business Integration Journal, helping IT and business managers quickly adapt to changing business needs, continuously innovate successful new products and services, and consistently gain and maintain a competitive advantage."

In the last edition of the BIJ - as we know it - my article on SOA projects was published, and I am very proud of it.

"A World of Assumptions That Make an SOA Project the Perfect Storm—and What You Should Know About Them by Clemens Utschig-Utschig. This article shows that, while Service-Oriented Architecture (SOA) brings many advantages, it doesn’t alleviate the need for good planning, organization, and training."

The whole article can be found here.


Subscribe to Oracle FAQ aggregator