Feed aggregator

xTuple Postbooks GUI client on OpenSolaris

Siva Doe - Sun, 2008-12-14 19:16

Postbooks from xTuple is the open source edition of their ERP and accounting software. The GUI client is available for Linux, Windows and Mac, but not for Solaris. Since the components Postbooks requires, namely Postgres and Trolltech Qt, are known to work on OpenSolaris, I took the plunge and got the GUI client working on my OpenSolaris 2008.11 system.

It was not very tough. First, set up OpenSolaris 2008.11 by installing Sun Studio and friends via the 'ss-dev' package.

pfexec pkg install ss-dev

Next in line is Postgres. I decided on installing version 8.3, even though Postbooks will also work with 8.2. Here is the list of packages required for 8.3.

pfexec pkg install -v SUNWpostgr-83-server SUNWpostgr-83-libs SUNWpostgr-83-client \
SUNWpostgr-jdbc SUNWpostgr-83-docs SUNWpostgr-83-contrib SUNWpostgr-83-devel \
SUNWpostgr-83-pl SUNWpostgr-83-tcl SUNWpgbouncer-pg83

As for Qt version 4, I took the easy way out and used spec-files-extra (SFE) to build Qt for me. Please see Building JDS on OpenSolaris for instructions on setting up the environment for building Qt out of SFE. The SFEqt4.spec file has to be tweaked to build the Postgres SQL driver plugin that Postbooks requires. In the spec files directory there is a subdirectory called 'ext-sources'. Within it, edit the file 'qt4-solaris-cc-ss12' and change the QMAKE_CXXFLAGS and QMAKE_LIBS variables as follows.

QMAKE_CXXFLAGS          = $$QMAKE_CFLAGS -library=stlport4 -I/usr/postgres/8.3/include
QMAKE_LIBS              = -L/usr/postgres/8.3/lib -R/usr/postgres/8.3/lib

Now, build and install SFEqt4.spec.

/opt/jdsbld/bin/pkgtool -v --download --autodeps build SFEqt4.spec

This installs two packages, SFEqt4 and SFEqt4-devel, on your system. Now on to the next step: building Postbooks.

The Postbooks GUI client requires the source code for OpenRPT, an open source SQL report writer. Download it from the OpenRPT SourceForge site. Since Qt4 was built with Sun Studio, a small change is required to get OpenRPT to build under Sun Studio. In the file OpenRPT/renderer/graph.cpp, change line 780 to the following two lines.

QColor cMap(_colorMap[sd->color]);
graph.setSetColor(snum, &cMap);

When you extract the OpenRPT source code, you will get a directory named something like 'openrpt-3.0.0Beta-source'. Create a symbolic link 'openrpt' pointing to it.

ln -s openrpt-3.0.0Beta-source openrpt

This is required because the GUI client looks for 'openrpt' at some level above its own source directory. Then build OpenRPT by running 'qmake' followed by 'gmake'.
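As a concrete sketch of that layout (directory names illustrative, version taken from the example above):

```shell
# sketch of the sibling layout the GUI client build expects (paths illustrative)
mkdir -p /tmp/pb-demo/openrpt-3.0.0Beta-source
cd /tmp/pb-demo
ln -s openrpt-3.0.0Beta-source openrpt   # the client later resolves ../openrpt
ls -ld openrpt
# then, inside openrpt: qmake && gmake
```

The symlink also survives source upgrades: just re-point 'openrpt' at whatever versioned directory you extract next.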


Next is the GUI client itself. Download the source from the Postbooks SourceForge site and extract it at the same level as 'openrpt'. For building under Sun Studio, one file needs to change: in 'xtuple/guiclient/postPurchaseOrdersByAgent.cpp', line 90, replace '__PRETTY_FUNCTION__' with the name of the method itself (hardcode it as ::sPost). The build procedure is the same as above: 'qmake' followed by 'gmake'.
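If you prefer not to hand-edit, the substitution can be scripted. This is only a sketch against a dummy stand-in file (the real edit is on line 90 of the file named above, and the literal replacement string is my assumption for "the name of the method itself"):

```shell
# demonstrate the __PRETTY_FUNCTION__ replacement with sed on a stand-in file
mkdir -p /tmp/xtuple-fix && cd /tmp/xtuple-fix
printf 'qDebug("%%s", __PRETTY_FUNCTION__);\n' > postPurchaseOrdersByAgent.cpp
sed 's/__PRETTY_FUNCTION__/"postPurchaseOrdersByAgent::sPost"/' \
    postPurchaseOrdersByAgent.cpp > patched.cpp
grep sPost patched.cpp
```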

Now the GUI client can be launched with 'bin/xtuple'. I ran into a missing 'libiconv.so.2' library, which I worked around by running

env LD_LIBRARY_PATH=/usr/gnu/lib bin/xtuple

I know, using LD_LIBRARY_PATH is unsafe. Perhaps adding '/usr/gnu/lib' to the link flags in the spec file would solve this problem properly.
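Until then, a small wrapper script at least keeps the unsafe setting scoped to this one binary. A sketch (the wrapper location is illustrative, and it assumes you launch it from the build directory containing bin/xtuple):

```shell
# generate a launcher that prepends /usr/gnu/lib only for the xtuple process
cat > /tmp/xtuple-wrapper <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/gnu/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec bin/xtuple "$@"
EOF
chmod +x /tmp/xtuple-wrapper
```

Baking a '-R/usr/gnu/lib' runpath into the link flags, along the lines of the spec file change suggested above, would avoid the environment variable entirely.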

Now you should see the login screen and you should be able to log in to the configured Postgres database. Here is the obligatory screenshot.

Hope this was helpful for you. Comments and feedback are most welcome.

Should I go for Oracle 10g OCA/OCP or 11g OCA/OCP Certification?

Sabdar Syed - Sat, 2008-12-13 03:39
Hello All,

Choosing between the Oracle 10g OCA/OCP and 11g OCA/OCP certifications is becoming complex for novice or newbie DBAs. I have also seen a number of posts in the OTN Forums asking similar certification questions, from people who are not really sure or are confused.

Well, it is often said that going for the latest version's certification is the good and ideal path. But what I advise is to first go for the Oracle 10g OCA and OCP, then upgrade to the Oracle 11g OCP.

Following are my reasons for starting with the Oracle 10g OCA/OCP rather than going directly for the 11g OCA/OCP.

  • As we all know, the newer version (11g) includes the older version's features, plus new features and fixes for the older version's bugs.
  • A retirement date for the Oracle 10g certification has not yet been announced by Oracle. Moreover, only the Oracle Database 11g: Administration I (OCA) exam is in production (i.e., a regular exam); the Oracle Database 11g: Administration II (OCP) exam has not yet gone into production and is still a beta exam.
  • Oracle Database 10g is still used in production in almost all organizations around the globe, while very few companies run Oracle Database 11g for their business, as 11g is still a base release, with the standard release (11.2.X.X) yet to arrive. This means Oracle 11g is not yet widely deployed for production use.
  • Oracle Database 10g Release 2 (10.2) still has Oracle premier and extended (Metalink) support for a few more years; after that, Oracle 10g will also be de-supported.
  • Both the 10g and 11g certification tracks have two exams – Administration I (OCA) and Administration II (OCP) – and each exam costs US$125, so the fees do not vary.
  • It is mandatory for OCP candidates to take one course from the approved list of Oracle University courses to satisfy the hands-on course requirement. This applies to candidates for both the Oracle 10g and 11g certifications.
  • For Oracle 10g OCP certification holders, there is a single upgrade exam, 1Z0-050 - Oracle Database 11g: New Features for Administrators, to move directly to the Oracle 11g OCP. No course or hands-on course requirement form has to be submitted to achieve the Oracle 11g OCP (upgrade) certification.
  • This way, one ends up holding both the Oracle 10g and 11g certifications and can show both on a resume or CV. It also satisfies employers looking for candidates who have solid experience with both Oracle 10g and 11g and hold multiple certifications.
One can go directly for the Oracle 11g certification under the following circumstances.

  • If you are required by your company or manager to take the Oracle 11g course and certification exams for a specific requirement on Oracle Database 11g related projects.
  • When no Oracle 10g course is offered by the training institutes in your city and only Oracle 11g courses are available.
  • When you cannot afford to take the Oracle 11g upgrade exam later.
  • If my views above don't sit well with you :-)

Note: Folks, the above are only my views, and others need not share them, so it's left up to you to decide whether to go for the Oracle 10g OCA/OCP or the 11g OCA/OCP certification. For any further information or doubts, refer to the Oracle Certification Program link and navigate to the options there to learn more about beta exams, retirements, the list of exams, upgrading your certification, the hands-on course requirement, etc.

Your comments are welcome if this article was of help to you.

Sabdar Syed,

My UKOUG 2008

Oracle Apex Notebook - Fri, 2008-12-12 11:39
Last week I went to Birmingham, UK to attend the UKOUG 2008 conference. João came along with me, and I can speak for both of us when I say that we had a great time. We found Birmingham to be a very clean and organized city, and it was also good because we managed to stay in London for the weekend, where we had time to visit some of the most famous tourist attractions. The conference itself was
Categories: Development

'Publish to APEX' from SQL Developer 1.5.3

Anthony Rayner - Fri, 2008-12-12 05:22
Since SQL Developer 1.2.1 and APEX 3.0.1, we've had some useful integration between APEX and SQL Developer: the ability to import and deploy applications, browse the APEX metadata, remotely debug PL/SQL in your applications, and more. With the latest release of SQL Developer, 1.5.3, it is now possible to create a quick and simple APEX application from within SQL Developer (thanks to a bug fix). This is possible through the 'Publish to APEX' feature, which creates a simple 1 page APEX application containing an Interactive Report Region based upon a SQL statement.

(Note: Requires APEX 3.1 or above.)

This feature allows you to take any grid of data, right-click on it and select 'Publish to APEX'. (Note that a 'grid of data' includes results from executing a SQL statement, results of pre-defined or user-defined reports from the 'Reports' tab, specific table/view properties such as columns, data, constraints, etc., and I'm sure there are more.) Upon selecting 'Publish to APEX', the following dialog is displayed:

This dialog allows you to specify 5 properties:
1) Workspace - The workspace where you want the application to be created (this list will only display workspaces that are associated with the schema of your current grid of data).
2) Application Name - The name of the application that will be created.
3) Theme - The theme for your new application, specifying look and feel.
4) Page Name - The name of the page that will be created.
5) SQL - The SQL that will be used to generate an interactive report region within the page. This defaults to the SQL used to build the original grid of data, but can be changed.

Upon clicking 'Apply', SQL Developer will create the application and show a dialog with some basic creation information, such as the application name and ID. This creates a 1 page application in the workspace specified, containing an interactive report region whose source is the SQL specified in the dialog.

A few more points to note about this application:
- The application's authentication scheme defaults to 'Database Account Credentials' meaning that you'll need to authenticate into the application using valid database account username and password. This can obviously be changed to something else if required.
- The application will be created in an application group called 'Published from SQL Developer'.
- The interactive report region only displays the first ten columns of the report by default, but again this can easily be changed via the 'Actions' (green cog) drop-down in the interactive report menu bar, then selecting 'Select Columns'.

Here is an example of the application that is generated. I selected to publish data that showed access attempts to my workspace (selecting from the APEX metadata view apex_workspace_access_log). I then used interactive report features to refine my data to show all the failed login attempts for applications within a workspace, grouped by application:

Have fun publishing to APEX!!!
Categories: Development

Celebrating Second Year Anniversary of the Blog Launch

Sabdar Syed - Thu, 2008-12-11 23:22

Dear My Blog Readers,

It's been two years today since I launched this blog; it originally started on 12-Dec-2006.

During the last year, I wrote and published some good articles. I would like to thank all of my blog readers, who made the blog popular by reading its posts.

I hope to write more and more good articles that my readers will like and benefit from.

Feel free to leave comments, suggestions, and advice to improve this blog with more articles.

Once again thank you all.


Sabdar Syed,


Oracle SQL Developer 1.5.3 released

Oracle Apex Notebook - Thu, 2008-12-11 18:52
I've just noticed that the new version of Oracle SQL Developer (1.5.3) is now available for download. Reading the release notes, I can see that the major functionality introduced, beyond the traditional list of bug fixes, is the translation to Japanese. The good news is that the next SQL Developer release, 1.5.4, will be translated to 7 other languages: Spanish, Italian, German, French, Brazilian
Categories: Development

Browse JAR, WAR and EAR files in Explorer

Brenden Anstey - Tue, 2008-12-09 04:06

I found this really neat method of getting the Windows 'Compressed (zipped) Folders' Zip shell extension to read JAR, WAR and EAR files in Windows Explorer.

Open Notepad, paste the registry entries below into a text file named ear-jar-war.reg, save it to your desktop and then run it. This will associate the three file types with the Zip shell extension, Compressed (zipped) Folders.
If any of the extensions is already associated with something else, Compressed (zipped) Folders will now appear in the recommended programs for these file types, so you can associate them manually.

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.jar]
"Content Type"="application/x-zip-compressed"

[HKEY_CLASSES_ROOT\.war]
"Content Type"="application/x-zip-compressed"

[HKEY_CLASSES_ROOT\.ear]
"Content Type"="application/x-zip-compressed"

Oh, and of course back up your registry before running it. As you can see from the above, it only adds some keys under HKEY_CLASSES_ROOT. Tested on XP, but at your own risk and all that ;)

A big 'Thank You'....

Lisa Dobson - Mon, 2008-12-08 14:12
..to everyone who voted me onto the UKOUG Board of Directors! There seems to have been a bit of a shake-up this year, with 4 new Directors being elected, which I believe is the largest number of new Directors in one year. We will officially take up our posts on 15th January, and I'm really looking forward to the next 2 years - although I've been well advised that there is a lot of work ahead! I didn't

Learn using Sub Ledger Accounting (SLA) in R12 Oracle Payables

Krishanu Bose - Sun, 2008-12-07 12:03

Sub Ledger Accounting (SLA) is a rule-based accounting engine that defines how journal entries are generated for sub-ledger transactions in Oracle sub-ledger applications. However, SLA also supports external applications that generate accounting information which ultimately needs to be transferred to Oracle General Ledger. Before we get into SLA, we need to cover a few basic concepts, such as event classes and event types.

Event Class - classifies transaction types for accounting rule purposes. E.g., in Payables, the possible event classes are Invoice, Debit Memo, Prepayments, Refunds and Payments.

Event Type - for each transaction type, defines the possible actions with accounting significance. E.g., in Payables, the possible event types for the Invoice event class are Validation, Adjustment and Cancellation. Similarly, there are event types for the other event classes.

In most cases we will not need to customize SLA, and the accounting features will work the same as in 11i. Some typical business scenarios where we would need to customize SLA in Payables are as follows:

  • To have a different Liability account based on the Operating Unit for which the invoice is entered.
  • To have a different natural account (expense) based on the Invoice Type and Invoice Line type.
  • To have a different natural account (expense) and a different liability account based on criteria like supplier type, entering currency, pay group, etc.
  • To have the cost center segment of the Invoice distribution Liability account picked up from the Invoice distribution Account, while the other segment values come from the Liability account defined at the supplier site.

Some of the above requirements can also be met with alternatives such as distribution sets, but setting up a custom SLA for these scenarios is an easier approach with lower user maintenance. I will show a simple scenario of deriving custom accounting for a business requirement using SLA in Oracle Payables.

Business Scenario: We need to define a different liability account (natural account segment) based on Supplier Type so that the business can track liability by supplier type. The other segment values will default from the supplier site. I am limiting this example to one supplier type, “Contractor”. The objective is to have a different natural account in the Liability account for invoices of supplier type “Contractor” alone, while for other supplier types the normal liability account should default.


Step 1: First, define a mapping set for the various supplier types.

Navigation: Setup > Accounting Setups >Sub Ledger Accounting Setups >Accounting Methods Builders > Journal Entry Setups > Mapping Sets

Step 2: Define an ADR (Account Derivation Rule)

Navigation: Setup > Accounting Setups >Sub Ledger Accounting Setups >Accounting Methods Builders > Journal Entry Setups > Account Derivation Rules

Step 3: Define a JLD (Journal Line Definition)

Navigation: Setup > Accounting Setups >Sub Ledger Accounting Setups > Accounting Methods Builder > Methods and Definitions > Journal Line Definitions

Always create a copy of the seeded JLD and do not modify the seeded JLD itself. We will create a copy of ACCRUAL_INVOICES_ALL for our Chart of Accounts, ‘Operations Accounting Flex’, only. Add the custom ADR created earlier to ‘Liability, Basic’ (Line Assignment).

Step 4: Set up an AAD (Application Accounting Definition)

Navigation: Setup > Accounting Setups >Sub Ledger Accounting Setups > Accounting Methods Builder > Methods and Definitions > Application Accounting Definition

Create a copy of the seeded AAD only and do not modify the existing AAD. I am creating a custom AAD called ‘TEST_AAD’ for the COA ‘Operations Accounting Flex’.

Step 5: Set up a SAM (Subledger Accounting Method)

Navigation: Setup > Accounting Setups >Sub Ledger Accounting Setups > Accounting Methods Builder > Methods and Definitions > Subledger Accounting Methods

Create a copy of a seeded SAM and do not modify the seeded SAM itself. Add the custom AAD to the Event Class ‘Payables’.

Step 6: Assign the custom SAM to Primary Ledger

Navigation: Set ups > Accounting Setups > Ledger Setup > Define > Accounting Setup

Verification of new SLA rule:

Create an invoice for supplier type ‘Contractor’ and create accounting.

Liability Account for Supplier Type “Contractor” is 01-000-2990-0000-000

Liability Account for other Supplier Types is 01-000-2210-0000-000


Mary Ann Davidson - Thu, 2008-12-04 10:49

I have been rather silent on the blog front for some time. The reason has not been a happy one. I went through something very painful this summer that all of us inevitably experience: the death of a loved one. In my case, it was my best friend of 17 years – though Kerry was more than that, truly. As my sister says of him, echoing Alec Guinness in Star Wars, “there has been a disturbance in The Force.” Someone who was larger than life leaves a void in many lives, most especially mine. For a while, I was tied up in his illness, then the funeral, and then I just could not pick up my virtual pen because it is hard to live through this, much less write about it. The blog entry I really meant to write (about force multipliers) kept being crowded into the back of my mind, because I really needed to write about my friend Kerry and the meaning of legacy.

The occasion of death is a forced milestone in that it is a logical time to assess what matters in life. And what seems to matter to us during our lives is often not what matters after one’s course has run. There’s a story about a man reading that John D. Rockefeller had died and asking a companion, “How much money did he leave?” The answer was, “All of it.” All the things we think matter in terms of accomplishments, press clippings, portfolio, and so on, are dust in the wind once we are gone. Even having a building named after you is not all that permanent. The sages of Israel taught of Herod’s temple in Jerusalem: "Whoever has never seen the building constructed by Herod, has never seen a beautiful building in his life." Herod’s temple took over eighty years to build, yet the Romans utterly destroyed it in a matter of days. All we have left is a retaining wall of the temple structure and a holy day to mourn its destruction: Tisha B’Av.

No wonder that Jesus wept over Jerusalem and advised his disciples, "Do not store up for yourselves treasures on earth, where moth and rust destroy, and where thieves break in and steal. But store up for yourselves treasures in heaven, where moth and rust do not destroy, and where thieves do not break in and steal. For where your treasure is, there your heart will be also.”

Kerry did not leave a legacy in things the world values. He left no assets. No property. No portfolio. No bank account. No buildings named after him. No children. Yet he left a very rich legacy in many hearts. People who loved him. Lives he changed, mine in particular. To name two things near and dear to my heart, Kerry taught me to surf, and talked me into buying my house in Idaho. Oh, and committed me to buying a dog without asking (said dog, Thunder, is howling for a treat as I write this). How can you thank someone for giving you a life, or for helping make you who you are? I can’t really imagine what my life would have been like without him, except that it would have been so much poorer.

The number of calls, cards, emails, and so on I have gotten from people who knew him and cannot believe he is gone astonishes and humbles me. And the way they talk about him is a reminder of what really matters to people. One friend said that Kerry was the only person he could ever trust with money – after years of being burned by partners in business. Kerry not only made my friend good money, but was giving away his “trade secrets” by teaching my friend to do what he did in the markets. The financial institution he cleared paper through called up (to a person) to tell me how much Kerry had meant to them though none of them had met him personally: “We talked every day for four years and we didn’t just talk about the stock market; we talked about life.”

Particularly as we watch the recent economic meltdown caused – if I may be indulged here – by a number of people at all levels of society engaging in financial deception or delusion (such as buying a house one knows very well one cannot afford and that is bigger than one needs, or taking equity out to finance a lifestyle one cannot afford) – Kerry stood out. He always “paid cash or did without.” An old-fashioned value that the world needs more of.

He also had the most honest business model that I know of, one in which he took part of the risk that he assumed for his clients. I get lots of cold calls from money managers. I tell them that if they are willing to work on the same basis as Kerry did, I will consider it. Kerry made 25% of closed-out net capital gains, which means that if he lost clients’ money, he had to make it back for them and be in the black before he made any money for himself. None of these MMLs (money management leeches) take my offer, and typically stutter that my counter-offer is not reasonable. I reply that they want to get paid regardless of whether they earn me money or lose it all. The risk, in other words, is all mine, and none of it is theirs. What, one asks, is fair about that? Kerry only did well if his clients did well. A “square deal meal” kind of guy, an increasing rarity in a world where so many are without honor or integrity, and where many are happy to take the reward on the upside but want a bailout on the downside of risk (which economists rightly call a “moral hazard”).

Back to my point: a legacy of changed lives is all that we can really leave behind us that matters. Yet for some reason, in the software industry, “legacy” as a term seems only to be uttered with a sneer. “That is a legacy system” is almost always said with disdain. Why? What’s wrong with old code? Actual users (remember them?) think “legacy” means “something that works, that meets my business needs, that is paid for, and that I am happy with, so I want to keep using it if possible.” Software kahunas think “legacy” is a pejorative term, that new is always better, old is always bad, and we all need to upgrade “just because.” (The last software upgrade I went through required me to install all new client software with really poor instructions – uh, is there some reason I should have to magically know to rename a file to BLAHBLAH.exe? – and I absolutely lost it. It was the weekend Kerry died, and the thing that caused me to break down and “lose it” was the software upgrade, not Kerry’s death. The three most dreaded words in the English language, as all parents learn to their dismay on Christmas Eve at 3AM, are “some assembly required.”)

Merriam Webster defines “legacy” as follows:

1 : a gift by will especially of money or other personal property : bequest 2 : something transmitted by or received from an ancestor or predecessor or from the past

I’ve talked about the first meaning of “legacy.” Now, to the second. Granted, not everything in the past is worth pulling forward and celebrating, but many things are. At the very least, the passage of time allows us to pan through historical dust to find nuggets of permanent value. The second meaning of legacy reminds us that not everything new is wonderful simply because it is new. In particular, the belief that “new and improved” equates to progress is almost a de facto religious belief among many technologists. Yet never has the half-life of technological progress been shorter. Who among us really remembers (or cares) who invented the FOOBAR protocol? Especially when the FOOBAR protocol will be overtaken by something else within a few short years.

Many of the things that historically matter to us now were not obvious to the citizenry of the time (does anybody remember the number one tentmaker in Jerusalem circa 30 AD? Yet most of us have at least heard of an obscure carpenter/rabbi named Yeshua). Western civilization, for example, has percolated along quite happily on the strength of the ideas and writings of (if I may be forgiven) innumerable dead white males. Has anyone in the 21st century approached the stature of Rabbi Yeshua or other dead white males (Aristotle? Plato?) We may only know in hindsight. Despite the compressed lifecycle of so much we work with and work on, we should resist the temptation to engage in hagiography on the strength of anything short-lived or of recent occurrence, because we will probably be wrong about who and what really mattered.

An example of near religious ecstasy around technology is all the hoopla around cloud computing, if anyone can decide what it actually is. If by cloud computing, someone really means “software as a service,” that’s not actually a “new” idea at all. It’s been around for eons (remember Compuserve?). And many software vendors offer hosted applications and make a nice business out of it, too. Software as a service makes sense in some scenarios (is that alliterative, or what?). I personally outsource buying anything electronic to my brother-in-law, who does extensive market research and then tells me what to get. “Gizmo-buying-as-a-service” works for me.

If cloud computing is the idea that all your “stuff” will magically be “out there somewhere, in the cloud,” well, that is looney tunes for obvious reasons. Just think basics. I still have cookbooks if for no other reason than I can “fire them up” without waiting for software to load, and I would really hate to have to access recipes in the cloud. Open book, read recipe, book does not need rebooting, ever. Sometimes I do look for recipes online when I realize (in Idaho) that the cookbook I had with my pecan bar recipes is in San Francisco. So, “recipes as a service” might be useful sometimes – but I sure do not want the recipe server to be down when I am in the middle of cooking Thanksgiving dinner.

More to the point, the “it’s stored wherever, and you don’t need to know where” hype around “everything will be in the cloud” is technogobbledygook. There are many things you aren’t going to want to store “somewhere out there,” for good reasons, especially if you have no idea how secure it is and it is something you find valuable. Imagine someone saying, “Mrs. Smith, we can’t actually tell you where your daughter Janie – who you dropped off at day care this morning – will be during the day; she is out there in the daycare cloud someplace, running around, we are not really sure where. But trust us, when you stop by at 5 to pick her up, we’ll have her at the right place.” Yeah, right. Not surprisingly, security people are not buying the “somewhere, out there” model of cloud computing. Nobody should. At the very least, instead of having somewhat defensible enclaves of security, you’d have to make everything secure, which is simply not possible.

I was reminded in a frightening way recently that people worship new technology without in many cases either analyzing what problem it solves or whether the benefits are worth the risks. Specifically, I recently heard a highly placed official in the Department of Defense opine about the fact that DoD wants to embrace Web 2.0 because (to paraphrase), “We need to attract and keep all these young people and they won’t work here if we don’t let them use Facebook in the workplace.” What are people going to use Facebook for in the Defense Department, one wants to know? <”Hi, my name is Achmed and I am an Al Qaeda operative. I like long walks on the beach and IEDs. Will you be my friend?” I don’t think so.>

The official went on to say that industry really needed to secure all these Web 2.0 technologies. At that point, I could not contain myself. I asked the gentleman if the Department of Defense was planning on taking container ships and retrofitting them to be aircraft carriers, or buying Lear jets and making them into F-22 Raptors? No, he said. Then why, I offered, does DoD think that the IT industry can take technologies that were never designed with security in mind and “secure them?” Why is IT somehow different, such that we can, ex post facto, make things secure that were never designed for the threat environment in which they are now deployed? People don’t use a road bike to mountain bike, and I don’t use my short board to surf big waves (if I surfed big waves, that is, which I don’t. But if I did, I’d get a really expensive blank and get someone to shape me a Big Wave Board, aka “rhino chaser”).

Your “tools” need to be designed for the environment in which they are going to operate. If they aren’t, you are going to have trouble my friend, right here in River City (with apologies to Meredith Willson). To put it even more succinctly (more apologies to Meredith Willson): “You gotta know the territory.” Meredith Willson was not writing about security when he wrote The Music Man, but “you gotta know the territory” is as succinct a description of a security weenie’s responsibilities as ever there was.

Mind you, I understand that the idea of collaboration is a powerful one and, if it is appropriately secure, can be a powerful construct. We read, for example, that the intelligence community has created an internal Web 2.0 construct called Intellipedia (along the same lines as Wikipedia). It makes sense that, instead of having one expert on, say, Syrian antiaircraft defense, that that person’s knowledge can be written down and accessed by others. In a way, that kind of collaboration facilitates “legacy” because someone who knows something valuable can share it with others far more easily than through one-on-one oral transmission. But there is a big difference between “let’s embrace collaborative constructs” and “let’s allow insecure and unsecurable Web 2.0 technologies into a classified environment.”

The key to the new is remembering the universal truths of old – legacies. This is particularly true in security in that, while the attack vectors may change as the technology does, there are principles of security that do not change (“trust, but verify” works just as well for IT security as for arms control). Remembering and applying “legacy truths” will help us avoid getting wrapped up in the latest technical fads as something “new and different” when really, it is just the same security issues wrapped in shiny new code.

There’s a great story from Jewish lore about King Solomon challenging a servant to find a magic ring for him, magic in that a happy man wearing it would become sad, and a sad man would become happy. After a long search, the servant brought to King Solomon a ring engraved, “This, too shall pass.” Technologists would do well to remember that story.

I admit to being more backward looking than forward looking. But this much I know: the “old legacy” values that Kerry lived by are still timeless ones. “Honor thy father and mother.” “I am the Lord thy God, you will have no other gods before me.” “For where your treasure is, there your heart will be also.” Kerry died penniless, but richer than anybody else I know. That he gave of himself to so many people is the legacy he leaves us, and I for one feel so blessed to have known him and to have been cherished by him. As for grief, “this too, shall pass,” and someday I will only remember the happy memories.

The only accolade – the only legacy - that matters at the end of your life is the one that I know Kerry heard from his Creator in the early hours of August 17: “Well done, thou good and faithful servant.”

E Keli, ‘o ku’u pu’uwai ‘oe, mau loa.

Ua lawa.

Book of the Month: Tried By War: Abraham Lincoln as Commander in Chief by James M. McPherson

A really fascinating look at how Abraham Lincoln influenced the military course of the Civil War, devising strategies that (once he could find generals who would adopt them) made a critical difference, such as attacking the Confederate lines at two different places at the same time. You also gain a new appreciation (and frustration) of what Lincoln went through to find generals who understood how to win. And lastly, I have a new appreciation for the moral courage Lincoln displayed in prevailing against long odds. In 1862, the Democratic-controlled Congress was whining that the war was taking too long, costing too many lives, and that the North should sue for peace at any price, including taking the issue of slavery off the table. Had it not been for Lincoln’s moral courage in staying the course, the world would look very different indeed. Leadership is, among other things, taking the long moral view and not merely the expedient political one.

McPherson also wrote the Pulitzer Prize-winning Battle Cry of Freedom.



Switched from Oracle BEA BPM Enterprise Version (on Weblogic) to the Standalone Version for Evaluation Purposes.

Arvind Jain - Wed, 2008-12-03 19:03
Last week was a very short week, during which I tried to install Enterprise BEA BPM on WebLogic. A lot of configuration was needed for the Enterprise WebLogic edition (directory server, database, deployment within the WebLogic JVM, etc.). I have listed the steps below.
It was taking too much time and was not very straightforward: I had to ensure that the BEA WebLogic application server was installed and configured properly even before I could debug and play with the BPM engine.

At the end of last Tuesday I made the call to switch to Enterprise Standalone; the effort put in was good learning and useful for the standalone installation as well. So, to proceed with the evaluation, I have shifted to the Enterprise Standalone version, as my focus is BPM.

Some learnings and observations: on the Oracle website they refer to downloading "Oracle BPM Enterprise Administration Guide.pdf", but in reality there was no such file name. I realized that it was the same as "Oracle BPM Admin Guide.pdf", and the same goes for the configuration guide. So I will not get confused in the future :)

OK, so with the ultimate aim of deploying and publishing a new BPM project, I had to go through a series of steps. (For standalone I needed a much smaller set, but the practice and drill were worthwhile learning in terms of infrastructure and operationalization of the product.)

The whole list of steps:

  1. Creating Directory Service ( need to configure Directory Database Schema)
  2. Creating a Process Execution Engine ( need to configure a separate Execution Engine Database Schema)
  3. Configuring Weblogic Server
  4. Creating Weblogic Server Domain
  5. Create Oracle BPM Deploy User
  6. Installing Oracle BPM Deployer
  7. Creating JDBC Data Sources on BEA Weblogic Server
  8. Creating JMS Server, Module & Resources
  9. Configuring the Deployer and Deployment Targets
  10. Enabling Clustering
  11. Building and Deploying Application EAR Files
  12. Deploying and Publishing a New BPM Project

As of now I have Standalone Enterprise BEA BPM configured with Directory (Oracle 10g DB). Engine DB configuration has some issues due to privileges. Make sure you have a friendly DBA to help out.

I am trying to come up with a set of use cases to test out different features.

More next week as I try to put together a list of features, duly prioritized, that I would like to test out.

If you have a challenge for me ...Bring it ON :)

Oracle Approved Training Centers and Certification Test Sites & Address - Kingdom of Saudi Arabia

Sabdar Syed - Wed, 2008-12-03 02:40
Hello All,

Here is the list of Oracle University Approved Educational Training Partners/Centers, and the certification test taking sites in Saudi Arabia.

Oracle University Approved Education Partners - Kingdom of Saudi Arabia

New Horizons Jeddah - Oracle Approved Education Center

P.O. Box : 52171
Telephone : 026642277
Fax : 026642454
Address : Al Rawda St, Jeddah ,KSA

New Horizons Khobar - Oracle Approved Education Center

P.O. Box : 2060
Telephone : 038588882
Fax : 038584014
Address : AL Khaleej Blg, Khobar, KSA

Al-Khaleej Training and Education Co.(New Horizons Computer Learning Center,KSA)

Al Wallan Building,
Takhasusi Street,
Riyadh,Kingdom of Saudi Arabia

Contact: Louai Al-Amir Salem, PMP
Title: Platinum Center Manager
Tel: 009661 416 0123 Ext. 400

Prometric Test Sites and Address for taking Oracle Certifications OCA/OCP/OCE



RIYADH 11351
Phone: 14160123 Site Code: SU7


Olya Main Street
Behind Jareer Bookstore
Al Khaleej Ladies Center
Newhorizons P O Box 295300
RIYADH 11351
Phone: 1462 8393 Site Code: SU7M1


PO BOX 10202
RIYADH 11433
Phone: 14552444 Site Code: SU60

ExecuTrain of Riyadh

Zero Floor, South Bldg.
Khaledyah Business Centre ,
Olaya St .
Riyadh 11351
Phone: 14621118 Site Code: SU74



PO BOX 52171
JEDDAH 21563
Phone: 966 2 6642277 Site Code: SU5


Ibrahim Juffali Road
(P.O.Box 14730)
Jeddah 21434
Phone: 966 26678411 Site Code: SU59

Others Cities:

New Horizons Computer Learning Centre

Al-Khaleej Training & Education Company
PO Box 10968
Phone: 966 3 348 1166 Site Code: SU52

New Horizons Computer Learning Center

Al Khaleej Training and Education
Dhahran Street
Al Ahsa 31982
Phone: 966 3 5305007 Site Code: SU55

New Horizons Computer Learning Center

Khalid Bin Walid Street
Circle Masjid Qiblatein
Madinah 1875
Phone: 48223333 Site Code: SU48

Philippine International School Ruraydah

Information Technology Dept.
Faiziyah District
P.O. Box 27089
Buraydah, Al Qasim 51331
Phone: 966 63841975 Site Code: su120

Al-Khaleej Training and Edu. Mens Branch

New Horizons CLC
KSA - King Saud Street
Buridah, Al Qasim 51432
Phone: 966 6 3827999 Site Code: SU54


AL-AHSA 31982
PO BOX 5822
Phone: 966 530 5007 444 Site Code: SU17


P.O BOX 50991
Phone: 72375051 Site Code: SU57


Phone: 966 4424 8613 Site Code: SU39

Note: Check the below URLs, if the details given above are found incorrect.


Sabdar Syed.


NTLM Windows domain authentication for Rails application

Raimonds Simanovskis - Mon, 2008-12-01 16:00

In one “enterprise” Ruby on Rails project we had an idea to integrate Windows domain user authentication with the Rails application: as the majority of users were using Windows and Internet Explorer, and were always logged in to the Windows domain, it would be very convenient if they could log in to the new Rails application automatically, without entering their username and password.

Windows uses the NTLM protocol to provide such functionality: basically, it uses additional HTTP headers to negotiate authentication information between the web server and the browser. NTLM is tightly integrated into Microsoft Internet Information Server, and if you live in a pure Windows world then implementing NTLM authentication is just a checkbox in IIS.
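To make that header exchange concrete, here is a small Ruby sketch of the framing the protocol uses. This is only an illustration of the message envelope, not part of the original post and not a working NTLM client (a real Type 1 message also carries negotiate flags and further fields):

```ruby
require 'base64'

# The NTLM handshake rides in ordinary HTTP headers, roughly:
#   1. browser -> server:  Authorization: NTLM <base64 Type 1 "negotiate">
#   2. server -> browser:  401 + WWW-Authenticate: NTLM <base64 Type 2 challenge>
#   3. browser -> server:  Authorization: NTLM <base64 Type 3 response (domain\user)>
# Every NTLM message starts with the "NTLMSSP\0" signature followed by a
# little-endian message-type field; this builds only that minimal envelope.
type1 = "NTLMSSP\0" + [1].pack('V')
authorization = 'NTLM ' + Base64.strict_encode64(type1)
puts authorization  # => NTLM TlRMTVNTUAABAAAA
```

Note that the challenge/response pair is tied to a single TCP connection, which is part of what makes fronting NTLM with additional proxy layers tricky.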

But if you are using Ruby on Rails with an Apache web server in front of it, running everything on Linux or another Unix, then it is not so simple. Therefore I wanted to share how I solved this problem.

mod_ntlm Apache module installation

The first step is to get NTLM protocol support into the Apache web server, so that it can handle Windows domain user authentication with the web browser.

The first thing I found was mod_ntlm, but unfortunately this project has been inactive for many years and does not support Apache 2.2, which I am using.

The other option I found was mod_auth_ntlm_winbind from the Samba project, but this solution requires Samba’s winbind daemon on the same server, which makes the whole configuration more complex; therefore I was not eager to do that.

Then I finally found that someone had patched mod_ntlm to work with Apache 2.2, and this looked promising. I took this version of mod_ntlm, but I needed to make some additional patches to it; as a result I published my final mod_ntlm version in my GitHub repository.

If you would like to install mod_ntlm module on Linux then at first ensure that you have Apache 2.2 installed together with Apache development utilities (check that you have either apxs or apxs2 utility in your path). Then from the source directory of mod_ntlm (that you downloaded from my GitHub repository) do:

apxs -i -a -c mod_ntlm.c

If everything goes well then it should install the mod_ntlm.so module in the directory where all the other Apache modules are installed. It also tries to add the module load directive to the Apache configuration file httpd.conf, but please check for yourself that you have

LoadModule ntlm_module ...directory.path.../mod_ntlm.so

line in your configuration file, and that the directory path is the same as for the other Apache modules. Try restarting the Apache server to see whether the module loads successfully.

I also managed to install mod_ntlm on my Mac OS X Leopard so that I could later test NTLM authentication locally. Installation on Mac OS X was a little bit trickier, as I needed to compile the module for 64-bit architecture to be able to load it with the preinstalled Apache:

sudo ln -s /usr/include/malloc/malloc.h /usr/include/malloc.h
sudo ln -s /usr/include/sys/statvfs.h /usr/include/sys/vfs.h
apxs -c -o mod_ntlm.so -Wc,"-shared -arch i386 -arch x86_64" -Wl,"-arch i386 -arch x86_64" mod_ntlm.c
sudo apxs -i -a -n 'ntlm' mod_ntlm.so

After this check /etc/apache2/httpd.conf file that it includes:

LoadModule ntlm_module        libexec/apache2/mod_ntlm.so

and try to restart Apache with

sudo apachectl -k restart
mod_ntlm Apache module configuration

The next thing is to configure mod_ntlm. Put these configuration directives in the same place where you have the virtual host configuration directives for your Rails application. Let’s assume that we have the domain “domain.com” with domain controllers “dc01.domain.com” and “dc02.domain.com”, and let’s use /winlogin as the URL which will trigger Windows domain authentication.

RewriteEngine On
<Location /winlogin>
  AuthName "My Application"
  AuthType NTLM
  NTLMAuth on
  NTLMAuthoritative on
  NTLMDomain domain.com
  NTLMServer dc01.domain.com
  NTLMBackup dc02.domain.com
  require valid-user
</Location>

mod_ntlm sets the REMOTE_USER environment variable to the authenticated Windows username. If we are using a Mongrel server cluster behind the Apache web server, then we need the following configuration lines to put REMOTE_USER into the X-Forwarded-User HTTP header of the request forwarded to the Mongrel cluster.

RewriteCond %{LA-U:REMOTE_USER} (.+)
RewriteRule . - [E=RU:%1]
RequestHeader add X-Forwarded-User %{RU}e

Please remember to put all of the previous configuration lines before any other URL-rewriting directives. In my case I have the following configuration lines, which forward all non-static requests to my Mongrel cluster (which in my case sits behind HAproxy on port 3000):

# Redirect all non-static requests to haproxy
# (proxy target reconstructed from context; adjust host/port to your HAproxy)
RewriteRule ^/(.*)$ http://127.0.0.1:3000%{REQUEST_URI} [L,P,QSA]
Rails sessions controller

Now the final part is to handle authenticated Windows users in the Rails sessions controller. Here is an example of how I do this.


# config/routes.rb
map.winlogin 'winlogin', :controller => 'sessions', :action => 'create_from_windows_login'


# app/controllers/sessions_controller.rb
def create_from_windows_login
  if !(login = forwarded_user)
    flash[:error] = "Browser did not provide Windows domain user name"
    user = nil
  elsif user = User.authenticated_by_windows_domain(login)
    # user has access rights to system
  else
    flash[:error] = "User has no access rights to application"
  end
  self.current_user = user
  if logged_in?
    # store that next time automatic login should be made
    cookies[:windows_domain] = {:value => 'true', :expires => Time.now + 1.month}
    # because of strange IE NTLM behavior we give a 401 response with a client-side redirect
    @redirect_to = redirect_back_or_default_url(root_path)
    render :status => 401, :layout => 'redirect'
  else
    render :action => 'new'
  end
end

private

def forwarded_user
  return nil unless x_forwarded_user = request.headers['X-Forwarded-User']
  users = x_forwarded_user.split(',')
  users.delete('(null)')
  users.first
end

User.authenticated_by_windows_domain is a model method that either finds an existing user or creates a new one based on the authenticated Windows username passed in, and checks that the user has access rights. The private method forwarded_user extracts the Windows username from the HTTP header; in my case the header was always formatted as “(null),username”, so I needed to remove the unnecessary “(null)” from it.
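As a standalone illustration of that parsing rule, extracted from the controller so it can be run on its own (the method name here is illustrative, not from the original post):

```ruby
# mod_ntlm/Apache may forward "X-Forwarded-User: (null),username", so drop
# the "(null)" placeholder and keep the real account name.
def extract_windows_user(header_value)
  return nil if header_value.nil? || header_value.strip.empty?
  parts = header_value.split(',').map { |part| part.strip }
  parts.delete('(null)')
  parts.first
end

puts extract_windows_user('(null),jsmith')   # => jsmith
puts extract_windows_user('jsmith')          # => jsmith
```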

In addition, I store a browser cookie recording that the user used Windows domain authentication; next time, if the user has this cookie, we can send them directly to /winlogin instead of showing the login page. We cannot send all users to /winlogin, because then the browser would prompt every user for a Windows username and password (and we are probably also using other authentication methods).
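That cookie-driven decision can be sketched as a tiny pure function (the paths, the method name, and the plain hash standing in for the Rails cookie jar are all illustrative, not from the original post):

```ruby
# Returns where to send a visitor: users whose browser carries the
# windows_domain cookie go straight to the NTLM-protected URL; everyone
# else gets the normal login form (and its other authentication methods).
def login_path_for(cookies)
  cookies[:windows_domain] == 'true' ? '/winlogin' : '/login'
end

puts login_path_for(:windows_domain => 'true')  # => /winlogin
puts login_path_for({})                         # => /login
```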

The last thing is a little hack to work around strange Internet Explorer behavior. Once Internet Explorer has authenticated with a web server using the NTLM protocol, IE assumes that this web server will require NTLM authentication for all POST requests. It therefore performs a “performance optimization” when POSTing to this server: the first POST request from the browser contains no POST data at all, just a header with the NTLM authentication message. In a Rails application we do not need NTLM authentication on every POST request, as we maintain a Rails session to identify logged-in users. So we use this trick: after successful authentication we return an HTTP 401 status, which makes IE think it is no longer authenticated with this web server. Together with the 401 status we return an HTML page that forces a client-side redirect to the home page, using either JavaScript or a meta refresh tag.


<% content_for :head do %>
  <script language="javascript">
    location.replace("<%= @redirect_to %>");
  </script>
  <noscript>
    <meta http-equiv="Refresh" content="0; URL=<%= @redirect_to %>" />
  </noscript>
<% end %>
<%= link_to 'Redirecting...', @redirect_to %>

content_for :head is used to place this additional content in the <head> part of the layout.

As a result you now have basic Windows domain NTLM authentication working. Please let me know in comments if you have any issues with this solution or if you have other suggestions how to use Windows domain NTLM authentication in Rails applications.

Additional hints

NTLM authentication can also be used in Firefox. Enter about:config in the location field and search for network.automatic-ntlm-auth.trusted-uris; there you can list the servers for which you would like automatic NTLM authentication.

Categories: Development

Old tech you'd like to see updated and rereleased

Stephen Booth - Mon, 2008-12-01 02:33
Not an Oracle post, but I know a lot of DBAs are into tech generally as well. Yesterday I found my old Psion 3mx (which I used before having to switch to Palm handhelds for work). I had a play and was reminded why I liked it so much. The key advantage it had, and still has over many more modern devices, is the size: it’s small enough to fit in a suit jacket or coat pocket whilst being large enough to…

It’s Been a While

Padraig O'Sullivan - Sun, 2008-11-30 18:46
I had removed this blog but kept getting some emails asking for links to certain posts, so I have re-posted some old posts so that they are available to anyone who is interested in them. As an update on what I’m doing: I’m currently in my second year of graduate school. I plan on taking a grad class in database systems next semester, so that should be interesting. I’ll get to learn a lot about…

Configuring Oracle as a Service in SMF

Padraig O'Sullivan - Sun, 2008-11-30 15:33
In Solaris 10, Sun introduced the Service Management Facility (SMF) to simplify management of system services. It is a component of the so-called Predictive Self Healing technology available in Solaris 10; the other component is the Fault Management Architecture. In this post, I will demonstrate how to configure an Oracle database and listener as services managed by SMF. This entails that Oracle…

Parallel Rollback

Fairlie Rego - Sun, 2008-11-30 03:13
I had a user call up and ask me to kill a session that was causing him grief, so I killed it without much thought, since it was not a production system.

A few hours later I noticed that SMON was doing parallel transaction recovery. This was validated by the view
select * from x$ktprxrt;

Unfortunately I have lost the output but it did show that it would take eons

But this was also confirmed by the pstack of SMON, which included the function ktprbeg, which I believe begins parallel rollback (snippet below).

[syd0904:oracle:OSTA1]/u01/app/oracle => pstack 11905
11905: ora_smon_OSTA1

0000000100d80868 kturax (fffffffffffffffe, 380017150, b, 380017, 105ebe510, 5) + 928
0000000100e15620 ktprbeg (106502000, 0, 1065033c8, 105400, 1056b5, 1065033c8) + 1a0 ===> Begin parallel rollback
00000001007e9238 ktmmon (ffffffffffffffff, ffffffff7fffdda8, 0, 1056b5000, 1, 106502000) + f58
000000010106e0dc ksbrdp (105f1b000, 38000e, 106505ce0, 105f1b000, 105f1b, 1007e8260) + 39c
00000001024efed8 opirip (106510000, 106400, 106518, 380007000, 106510, 1066a5650) + 338
000000010033f7b4 opidrv (106512a90, 0, 32, 10650f7c8, 32, 0) + 4b4
0000000100339c50 sou2o (ffffffff7ffff3e8, 32, 4, ffffffff7ffff410, 105de9000, 105de9) + 50
00000001002fc00c opimai_real (3, ffffffff7ffff4e8, 0, 0, 247e09c, 14800) + 10c
00000001002fbe38 main (1, ffffffff7ffff5f8, 0, ffffffff7ffff4f0, ffffffff7ffff600, ffffffff79d00140) + 98
00000001002fbd5c _start (0, 0, 0, 0, 0, 0) + 17c
----------------- lwp# 2 / thread# 2 --------------------

and also confirmed from the SMON trace file

*** 2008-11-28 12:03:16.828
Parallel Transaction recovery caught exception 30319
Parallel Transaction recovery caught error 30319
*** 2008-11-28 12:07:17.163
Parallel Transaction recovery caught exception 30319
Parallel Transaction recovery caught error 30319

So the first thing I did was to disable parallel recovery because of the issue documented in Metalink

SQL> ALTER SYSTEM SET fast_start_parallel_rollback='FALSE';

System altered.

IMHO (at least from past experience) serial recovery is faster than parallel recovery, at least in the case where the transaction causes a lot of index block splits.

After this the row from x$ktprxrt disappeared and the following appeared in the SMON trace file

*** 2008-11-28 12:08:32.763
SMON: parallel recovery restart with degree=0 (!=16)
Parallel Transaction recovery caught exception 30312
Parallel Transaction recovery caught error 30312
*** 2008-11-28 12:08:33.039
SMON: parallel recovery restart with degree=0 (!=16)
Parallel Transaction recovery caught exception 30312
Parallel Transaction recovery caught error 30312

The following views agree on how much time it is going to take to complete the rollback

SQL> select * from x$ktuxe where KTUXECFL='DEAD' and KTUXESTA='ACTIVE';

(the column output wrapped badly here; the relevant row showed the dead
transaction with KTUXECFL = 'DEAD' and KTUXESTA = 'ACTIVE', status RECOVERING,
and KTUXESIZ = 639787 undo blocks still to be applied)

So rollback will complete when KTUXESIZ in x$ktuxe drops down to 0, which looks like a lot of time!
Dumping the redo confirmed that it was the same transaction that was killed.

Surprisingly, the value of “rollback changes - undo records applied” in v$sysstat was not increasing during this timeline. I have since tested this again (killed a long-running job and watched the rollback) and can confirm that the stat normally does get incremented.

srvctl Error in Solaris 10 RAC Environment

Padraig O'Sullivan - Sat, 2008-11-29 21:48
If you install a RAC environment on Solaris 10 and set kernel parameters using resource control projects (the recommended method in Solaris 10), then you will likely encounter issues when trying to start the cluster database or an individual instance with the srvctl utility. As an example, this is likely what you will encounter: $ srvctl start instance -d orclrac -i orclrac2 PRKP-1001 : …

Building a Modified cp Binary on Solaris 10

Padraig O'Sullivan - Sat, 2008-11-29 21:48
I thought I would write a post on how I set up my Solaris 10 system to build an improved version of the stock cp(1) utility that comes with Solaris 10, in case anyone arrives here from Kevin Closson’s blog. If you are looking for more background on why I am making this modification, have a look at this post by Kevin Closson. GNU Core Utilities: we need to download the source code for…

Oracle 10gR2 RAC with Solaris 10 and NFS

Padraig O'Sullivan - Sat, 2008-11-29 21:46
Recently, I set up a 2-node RAC environment for testing using Solaris 10 and NFS. This environment consisted of 2 RAC nodes running Solaris 10 and a Solaris 10 server which served as my NFS filer. I thought it might prove useful to create a post on how this is achieved, as I found it to be a relatively quick way to set up a cheap test RAC environment. Obviously, this setup is not supported by Oracle…


Subscribe to Oracle FAQ aggregator