Feed aggregator


Chet Justice - Tue, 2013-03-19 21:08
I've been scratching my eyes out lately trying to reverse engineer lots of PL/SQL.

One thing I've seen a lot of is calls to dbms_output.put_line. Fortunately, I've seen some dbms_application_info.set_module and other system calls too. But back to that first one.

1. When I used dbms_output, I would typically only use it in development. Once done, I would remove all calls to it, test and promote to QA. It would never survive the trip to production.
2. Typically, when I used it in development, I would tire of typing out d b m s _ o u t p u t . p u t _ l i n e, so I would either create a standalone procedure or create a private procedure inside the package, something like this (standalone version):
CREATE OR REPLACE PROCEDURE p( p_text IN VARCHAR2 )
IS
BEGIN
  dbms_output.put_line( p_text );
END p;
Easy. Then, in the code, I would simply use the procedure p all over the place...like this:
DECLARE
  l_start_time date;
  l_end_time   date;
BEGIN
  l_start_time := sysdate;
  p( 'l_start_time: ' || l_start_time );

  --do some stuff here
  --maybe add some more calls to p

  l_end_time := sysdate;
  p( 'l_end_time: ' || l_end_time );
END;

Since the procedure is 84 characters long, I only have to call p 4 times to get the benefit. Yay for me...I think. Wait, I like typing.
Categories: BI & Warehousing

PeopleSoft 9.2 to be released March 22

Brent Martin - Mon, 2013-03-18 15:33

Oracle announced today at the Alliance '13 conference that PeopleSoft 9.2 will be generally available on March 22.

Here's the link to the press release:  http://www.oracle.com/us/corporate/press/1920557

What and where is your concealed talent?

TalentedApps - Mon, 2013-03-18 12:21

Now that’s an expression for a skeptic!

The global workforce is loaded with concealed talent, resulting in lost value and opportunities for both business and workers.

Why is talent concealed? Two things really:

  1. We only see what we are looking for.
  2. We aren’t using reputation effectively.

The first causes the problem. The second is why it hasn’t been solved.

Concealed talent brings no reputation. – Desiderius Erasmus 1466/7/9?-1536

Who knew that a famous Renaissance humanist had such insight into two important 21st century concepts: talent and reputation?

The dirty lens of requirements

We can spend a lot of time coming up with job requirements and descriptions that don’t perform either function very well. Worse yet, they cause us to look at people in those roles solely through the lens of those requirements.

Anything else they might be able to add value with is ignored or overlooked most of the time, leading to lost value for the business and lost opportunity for the employee. This other talent is concealed.

“I’m an excellent driver.” – Rain Man

So what’s the answer? How do we make sure we know what concealed talent they have?

Is it self-identified skills? That’s a start, but it comes with its own set of problems, e.g. the “Lake Wobegon Syndrome” where everyone is above average.

Is it endorsements? That’s slightly better in that at least it’s someone else (we hope) saying you are good at something. Recent experience on a certain professional social networking service has led many to conclude that it’s a bit devalued.

What is it that’s missing from endorsements? It’s the validity of the endorsement.

Says who?

The answer is reputation. Sounds simple. But you have to do it right.

Your reputation is built on the perceptions of a wide array of perspectives of people who have worked with you, experienced your work, or heard about it from others. That’s both good and bad because sometimes reputation can be very different from reality.

The trick is to find out whose perspectives and which perceptions lead to more valid endorsements of talent. For instance, it doesn’t count so much if your 24-hour fitness instructor endorses your carbon fiber-based fuselage design skills, but maybe it’s someone well-respected in carbon fiber-based fuselage design (or perhaps just design around carbon-fiber materials or fuselages in general) who does. And your instructor might be better suited to endorse your self-discipline and ability to stay focused on goals.

In other words, those whose reputation is strong in an area are likely to be a more valid judge of talent in that area. So use that. It’s the gift that keeps giving, because those who get high marks by valid judges are themselves likely to be valid judges of others. Furthermore, reputation backlash can put some restraint on gratuitous endorsements. This isn’t earth-shattering news, but it’s not being used enough.

If only we knew what we know…

“If only HP knew what HP knows, we would be three-times more productive.” – Lew Platt

Find out what your company knows. Use reputation as a tool to discover the concealed talent in your workforce.

Picture from Wikimedia Commons.

Hyper-V: Installing a debian linux in a virtual machine - trouble with the (non legacy) network adapter

Dietrich Schroff - Sun, 2013-03-17 15:03
The first thing to try with a virtualization solution like Hyper-V is to install a guest. So let's try Debian Linux.
The installer runs through these steps:
  1. Name & path of the virtual machine
  2. RAM
  3. Network (how to configure a virtual switch with internet connectivity or how to configure internet connectivity with NAT)
    at this point you have to choose no connection (I will explain this later)
  4. Create a hdd
  5. The summary should look like this:
Next step you have to open the configuration of this virtual machine. There you can see a network adapter with the following properties:
Bandwidth management? This sounds really good. There are two types of network adapters:

  • A network adapter requires a virtual machine driver in order to work, but offers better performance. This driver is included with some newer versions of Windows. On all other supported operating systems, install integration services in the guest operating system to install the virtual machine driver. For instructions, see Install a Guest Operating System. For more information about which operating systems are supported and which of those require you to install integration services, see About Virtual Machines and Guest Operating Systems (http://go.microsoft.com/fwlink/?LinkID=128037).
  • A legacy network adapter works without installing a virtual machine driver. The legacy network adapter emulates a physical network adapter, multiport DEC 21140 10/100TX 100 MB. A legacy network adapter also supports network-based installations because it includes the ability to boot to the Pre-Execution Environment (PXE boot). However, the legacy network adapter is not supported in the 64-bit edition of Windows Server 2003. 
  • And now think about which type of network adapter the standard kernel has a kernel module for (or for which you can get sources)... Right. Only the legacy adapter.
    So you have to delete the network adapter and add a legacy network adapter. After this step, your virtual machine should look like:
    The bandwidth management is gone, but your kernel can use the tulip module and your network is working... Here you have to choose a virtual switch, which you can create like described in these two postings:  how to configure a virtual switch with internet connectivity or how to configure internet connectivity with NAT.

    It is not really surprising that Microsoft adds, by default, a network adapter to each new virtual machine which only works on a few Linux distributions. You can download drivers from Microsoft via this page (scroll down to "integration services"). But this default is still added to every new virtual machine, so you have to delete it and add the "legacy" adapter.

    But after knowing this, it is no problem to install debian linux (or any other linux) onto your Hyper-V.

    Hyper-V: Howto configure NAT for virtual machines

    Dietrich Schroff - Sun, 2013-03-17 04:03
    In my last posting I explained how to configure a vEthernet adapter to get connectivity to the internet. But there was one "problem": you had to provide one separate IP for each virtual host that you want to connect to the internet.
    But there is a solution (NAT) for this problem and it is easy to configure this with Hyper-V on Windows:
    [If you have not configured the "bridge" solution I explained in the last posting, then skip step 1 and start with step 2]
    1. Unbridge your VSwitchExternal from Wifi
      (select both adapters in network adapters, then right click and use "remove bridge")
    2. Create a new internal virtual switch via Hyper-V's virtual switch manager (look here, how to do this) and name it VSwitchNAT
    3. Edit properties of your Wifi adapter
      (right click and then properties)
    4. Open the tab "Sharing" and enable both Checkboxes.
      Choose "VSwitchNAT" for Home networking connection
    And after that your virtual machines are using a private subnet which will be NATted by your laptop. This private subnet can be configured via VSwitchNAT:
    • Edit properties of the VSwitchNAT vEthernet adapter
    • Edit properties of ipv4 and here you can edit the subnet

    New book

    alt.oracle - Sat, 2013-03-16 14:46
    Just a quick announcement that my second book is available from Packt Publishing. OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide (again with the long title) is designed to be a different kind of certification guide. Generally, it seems to me that publishers of Oracle certification guides assume that the only people who want to become certified are those with a certain level of experience, like a working DBA with several years on the job. So, these guides make a lot of assumptions about the reader. They end up being more a collection of facts for the test than a cohesive learning experience. My book attempts to target a different kind of reader. I've observed in the last several years that many people from non-database backgrounds are setting out to get their OCA or OCP certifications. These folks don't necessarily bring a lot of knowledge or experience to this attempt, just a strong drive to learn. My two books are designed to start from the beginning, then take the reader through all of the subjects needed for the certification test. They're designed to be read straight through, completing the examples along the way. In a sense, I'm attempting to recreate the experience of one of my Oracle classes in book form.

    You'll find the book at these fine sellers of books.

    Packt Publishing
    Barnes and Noble

    Categories: DBA Blogs

    Getting it Right: 100KM, Team of 4 and 48 Hours

    TalentedApps - Thu, 2013-03-14 23:50

    This post is about an endeavor undertaken by our team of four to raise funds for charity and to walk 100KM within 48 hours, meeting the challenge set by Oxfam Trailwalker. It highlights our journey and the outcome, and re-emphasizes some well-known facts.

    We started with goal setting; success was the obvious goal so success criteria were defined at the start in consultation with all stakeholders. Key Success Indicators (KSIs) were to raise funds to qualify for the event (i.e. 50K INR) and to complete 100KM walk within 48 hours with all four members. We did identify stretch goals at the initiation phase itself and those were to raise funds of 150K+ INR for charity and to complete 100KM walk within 40 hours with all four members.

    Planning for the event went through a progressive elaboration process. As a team, we had to cross nine check points to register the entry and exit of the full team. Being a team building exercise, it was required that the team of four walk together, supporting each other, the fastest member walking with the slowest member of the team, and completing the event as a team. As activities (aka check points) were already identified and sequenced, we estimated the duration of each activity to develop a time management schedule in accordance with our team goal.

    Communication among team members was planned thoroughly. Similarly, we planned how to communicate with stakeholders (family members, well-wishers, friends who donated for the cause etc) before and during the event. We performed SWOT analysis for the risks and prepared risk response strategy accordingly. We planned and conducted procurement as per the team needs for the event.

    Finally, on the D-Day, we first-timers were at the event venue with almost a month of preparation. We started almost 10 minutes late from the starting point on a 100KM walk of energy, determination and courage. We arrived at the finish point exactly 39 hours and 38 seconds after the event starting time. It might not be an exceptional achievement from an outsider's point of view, but as our team achieved the predefined KSIs, this endeavor was a success for us.

    It was a fun-filled memorable walk where confrontation was used as a technique to overcome difference of opinions and group decision-making was practiced for team decisions.

    Four takeaways from this endeavor, which are also keys to successful project management, are:

    • Success criteria must be defined at the beginning in consultation with all stakeholders.
    • Communication breeds success. A well-planned communication strategy is vital for a project's success.
    • Change is inevitable. You need to foresee challenges, risks and always need to have a change management plan in place.
    • Working together works. Remember the best team doesn’t win as often as the team that gets along best.

    Automatic Shared Memory Management problem ?

    Bas Klaassen - Thu, 2013-03-14 05:30
    From time to time one of our 10g databases seems to 'hang'. Our monitoring shows a 'time out' on different checks, and when trying to connect using sql, the sql session hangs. No connection is possible. A few days ago, something like this happened again. Instead of bouncing the database, I decided to look for clues to find out why the database was 'hanging'.
    Categories: APPS Blogs

    OWB - Compressing Files in Parallel using Java Activity

    Antonio Romero - Wed, 2013-03-13 12:36

    Yesterday I posted a user function for compressing/decompressing files using parallel processes in ODI. You can pick up the same code and use it from an OWB process flow, invoking the Java function from a Java activity within the flow.

    The JAR used in the example below can be downloaded here. From the process flow, OWB invokes the main method within the ZipFile class, passing the parameters to the function for the input and output directories and also the number of threads. The parameters are passed as a single string in OWB, with each parameter wrapped in ?, so we have a string like ?param1?param2?param3? and so on. In the example I pass the input directory d:\inputlogs as the first parameter and d:\outputzips as the second, and the number of processes used is 4 - I have escaped my backslashes in order to get this to work on Windows.

    The classpath has the JAR file with the compiled class in it, and the classpath value can be specified on the activity, carefully escaping the path if on Windows.

    Then you can define the actual class to use;

    That's it, pretty easy. The return value from the method will use the exit code from your Java method - normally 0 is success and other values are errors (so if you exit the Java code using a specific error code value, you can return this code into a variable in OWB or perform a complex transition condition). Any standard output/error is also captured in the OWB activity log in the UI; for example, below you can see an exception that was thrown and also messages written to standard output/error;

    That's a quick insight into the Java activity in OWB.

    Connecting to Oracle Database Even if Background Processes are Killed

    Asif Momen - Wed, 2013-03-13 06:28
    Yesterday, I received an email update from MOS Hot Topics Email alert regarding a knowledge article which discusses how to connect to an Oracle database whose background processes are killed.

    I bet every DBA must have encountered this situation at least once. When I am in this situation, I normally use "shutdown abort" to stop the database and then proceed with normal startup. 

    After receiving the email, I thought of reproducing the same. My database (TGTDB) is running on RHEL-5.5. The goal is to kill all Oracle background processes and try to connect to the database.

    Of course you don't want to test this in your production databases. 

    SQL> select * from v$version;

    Oracle Database 11g Enterprise Edition Release - 64bit Production
    PL/SQL Release - Production
    CORE      Production
    TNS for Linux: Version - Production
    NLSRTL Version - Production


    Below is the list of background processes for my test database "TGTDB":

    [oracle@ogg2 ~]$ ps -ef|grep TGTDB
    oracle    8249     1  0 01:35 ?        00:00:00 ora_pmon_TGTDB
    oracle    8251     1  0 01:35 ?        00:00:00 ora_psp0_TGTDB
    oracle    8253     1  0 01:35 ?        00:00:00 ora_vktm_TGTDB
    oracle    8257     1  0 01:35 ?        00:00:00 ora_gen0_TGTDB
    oracle    8259     1  0 01:35 ?        00:00:00 ora_diag_TGTDB
    oracle    8261     1  0 01:35 ?        00:00:00 ora_dbrm_TGTDB
    oracle    8263     1  0 01:35 ?        00:00:00 ora_dia0_TGTDB
    oracle    8265     1  6 01:35 ?        00:00:02 ora_mman_TGTDB
    oracle    8267     1  0 01:35 ?        00:00:00 ora_dbw0_TGTDB
    oracle    8269     1  1 01:35 ?        00:00:00 ora_lgwr_TGTDB
    oracle    8271     1  0 01:36 ?        00:00:00 ora_ckpt_TGTDB
    oracle    8273     1  0 01:36 ?        00:00:00 ora_smon_TGTDB
    oracle    8275     1  0 01:36 ?        00:00:00 ora_reco_TGTDB
    oracle    8277     1  1 01:36 ?        00:00:00 ora_mmon_TGTDB
    oracle    8279     1  0 01:36 ?        00:00:00 ora_mmnl_TGTDB
    oracle    8281     1  0 01:36 ?        00:00:00 ora_d000_TGTDB
    oracle    8283     1  0 01:36 ?        00:00:00 ora_s000_TGTDB
    oracle    8319     1  0 01:36 ?        00:00:00 ora_p000_TGTDB
    oracle    8321     1  0 01:36 ?        00:00:00 ora_p001_TGTDB
    oracle    8333     1  0 01:36 ?        00:00:00 ora_arc0_TGTDB
    oracle    8344     1  1 01:36 ?        00:00:00 ora_arc1_TGTDB
    oracle    8346     1  0 01:36 ?        00:00:00 ora_arc2_TGTDB
    oracle    8348     1  0 01:36 ?        00:00:00 ora_arc3_TGTDB
    oracle    8351     1  0 01:36 ?        00:00:00 ora_qmnc_TGTDB
    oracle    8366     1  0 01:36 ?        00:00:00 ora_cjq0_TGTDB
    oracle    8368     1  0 01:36 ?        00:00:00 ora_vkrm_TGTDB
    oracle    8370     1  0 01:36 ?        00:00:00 ora_j000_TGTDB
    oracle    8376     1  0 01:36 ?        00:00:00 ora_q000_TGTDB
    oracle    8378     1  0 01:36 ?        00:00:00 ora_q001_TGTDB
    oracle    8402  4494  0 01:36 pts/1    00:00:00 grep TGTDB
    [oracle@ogg2 ~]$ 

    Let us kill all these processes at once as shown below: 

    [oracle@ogg2 ~]$ kill -9 `ps -ef|grep TGTDB | awk '{print ($2)}'`
    bash: kill: (8476) - No such process
    [oracle@ogg2 ~]$ 

    Make sure no processes are running for our database:

    [oracle@ogg2 ~]$ ps -ef|grep TGTDB
    oracle    8520  4494  0 01:37 pts/1    00:00:00 grep TGTDB
    [oracle@ogg2 ~]$ 

    Now, try to connect to the database using SQL*Plus:

    [oracle@ogg2 ~]$ sqlplus "/as sysdba"

    SQL*Plus: Release Production on Wed Mar 13 01:38:12 2013

    Copyright (c) 1982, 2011, Oracle.  All rights reserved.

    Connected to:
    Oracle Database 11g Enterprise Edition Release - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options


    Voila, I am connected. Not only do you get connected to the database, but you can also query V$*, DBA* and other application schema views/tables. Let's give it a try:

    SQL> select name from v$database;


    SQL> select name from v$tablespace;


    6 rows selected.

    SQL> select count(*) from dba_tables;


    SQL> select count(*) from test.emp;



    Let us try to update a record. 

    SQL> update test.emp  set ename = 'test' where eno = 2;

    1 row updated.


    Wow, one record was updated. But when you try to commit/rollback, the instance gets terminated. And it makes sense as the background processes responsible for carrying out the change have all died.

    SQL> commit;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Process ID: 8917
    Session ID: 87 Serial number: 7


    Following is the error message recorded in the database alert log:

    Wed Mar 13 01:41:44 2013
    USER (ospid: 8917): terminating the instance due to error 472
    Instance terminated by USER, pid = 8917

    The user (client) session was able to retrieve data from the database as the shared memory was still available and the client session does not need background processes for this task.

    The MOS article mentioned below discusses how to identify and kill the shared memory segment(s) allocated to the "oracle" user through UNIX/Linux commands.


    1. Successfully Connect to Database Even if Background Processes are Killed [ID 166409.1]


    Chet Justice - Tue, 2013-03-12 22:35
    Back in September, I was asked, and agreed, to become the Content Chair for "The Traditional" track at Kscope 13. Like I mentioned there, I had been involved for the past couple of years and it seemed like a natural fit. Plus, I get to play with some really fun people. If you are ready to take advantage of Early Bird Registration, go here (save $300).

    Over the past few weeks we've finalized (mostly) the Sunday Symposium schedule. We're currently working on finalizing Hands-on-Labs (HOL).

    Beginning last year, we've had the Oracle product teams running the Sunday Symposia. This gives them an opportunity to showcase their wares and (hopefully) provide a bit of a road map for the future of said wares. This year, we have three symposia: APEX; ADF and Fusion Development; and The Database and Developer's Toolbox.

    ADF and Fusion Development

    - Oracle Development Tools – Where are We and What’s Next - Bill Patakay, Oracle
    - How to Get Started with Oracle ADF – What Resources are Out There? - Shay Shmeltzer and Lynn Munsinger, Oracle
    - The Cloud and What it Means to Oracle ADF and Java Developers - Dana Singleterry, Oracle
    - Going Mobile – What to Consider Before Starting a Mobile Project - Joe Huang, Oracle
    - Understanding Fusion Middleware and ADF Integration - Frederic Desbiens, Lynn Munsinger, and Shay Shmeltzer, Oracle
    - Open Q&A with the ADF Product Management

    I love that they are opening up the floor to questions from their users. I wish more product teams would do that.

    Application Express

    - Oracle Database Tools - Mike Hichwa, Oracle
    - Technology for the Database Cloud - Rick Greenwald, Oracle
    - Developing Great User Interfaces with Application Express - Shakeeb Rahman, Oracle
    - How Do We Build the APEX Builder? - Vlad Uvarov, Oracle
    - How to Fully Utilize RESTful Web Services with Application Express - John Snyders, Oracle
    - Update from APEX Development - Joel Kallman, Oracle

    (If you see Joel Kallman out and about, make sure you mispronounce APEX.) This is a fantastic group of people (minus Joel, of course). Not mentioned above is the affable David Peake, who helps put all this together. The community surrounding APEX is second to none.

    Finally, The Database and Developer's Toolbox. I'm partial to this one because I've been involved in the database track for the past couple of years. Like last year, this one is being put together by Kris Rice of Oracle. There are no session or abstract details for this one as it will be based mainly on the upcoming 12c release of the database. However, we do have the list of speakers lined up. If you could only come for one day of this conference, Sunday would be the day and this symposium would be the one you would attend.

    This symposium will start off with Mike Hichwa (above) and then transition to the aforementioned (too many big words tonight) Mr. Rice. He'll be accompanied by Jeff Smith of SQL Developer fame, Maria Colgan from the Optimizer team and Tom Kyte.

    How'd we do? I think pretty darn good.

    Don't forget to sign up. Early Bird Registration ends on March 25, 2013. Save $300.
    Categories: BI & Warehousing

    Starbucks 1TB cube in production

    Keith Laker - Tue, 2013-03-12 14:41
    Check out the customer snapshot Oracle has published which describes the success Starbucks Coffee has achieved by moving their data warehouse to the Exadata platform, leveraging the Oracle Database OLAP Option and Oracle BIEE at the front end. 10,000 users in HQ and across thousands of store locations now have timely, accurate and calculation-rich information at their fingertips.

    Starbucks Coffee Company Delivers Daily, Actionable Information to Store Managers, Improves Business Insight with High Performance Data Warehouse
    ( http://www.oracle.com/us/corporate/customers/customersearch/starbucks-coffee-co-1-exadata-ss-1907993.html )

    By delivering extreme performance combined with the architectural simplicity and sophisticated multidimensional calculation power of the in-database analytics of the Database, Starbucks' use of OLAP has enabled some outstanding results. Together with the power of other Oracle Database and Exadata features such as Partitioning, Hybrid Columnar Compression, Storage Indexes and Flash Memory, Starbucks is able to handle the constant growth in data volumes and end-user demands with ease.

    A great example of the power of the "Disk To Dashboard" capability of Oracle Business Analytics.
    Categories: BI & Warehousing

    OER for Fusion Application

    Oracle e-Business Suite - Mon, 2013-03-11 12:47

    The replacement for ETRM/IREP in Fusion Applications is Oracle Enterprise Repository (OER). You can access it using the following link.

    What Is OER?

    Very simply, this is a standalone catalog of technical information about Oracle’s Application products. For E-Business Suite users it equates to the iRepository tool (http://irep.oracle.com/index.html), and for PeopleSoft it is similar to the PeopleSoft Interactive Services Repository.

    It contains a wealth of information, with the primary purpose of facilitating the creation of Application-to-Application integrations and the creation of extensions and customizations. With this detailed technical knowledge of the inner workings and APIs available for Oracle Applications, a better level of code reuse and overall accuracy can be achieved.

    Accessing OER

    Access is available either from Oracle’s globally shared public OER instance, or as part of your local Fusion Application instance deployment. Detail on creating a local OER installation is found in Oracle Fusion Middleware Installation Guide for Oracle Enterprise Repository (E15745-07). The URL’s for OER will be:

    An OER login may be required, although Oracle’s public instance also supports guest access at this time.

    OER catalogs technical components by various attributes, with the key ones being Name, Type, and Logical Business Area (LBA). LBA is the lowest level of the Fusion Applications Taxonomy and is used to tag each technical object with the feature and product that it is owned by and associated with.

    The general keyword search actually uses indexes of all the fields/attributes associated with an entry.

    Whilst the basic Asset Search should suffice in most cases, and has a simpler UI, the Browse feature (IE required) provides many powerful features and graphical views, including an object hierarchy and the Navigator to display objects related to each other.


    Oracle Enterprise Repository


    How To Get The Most From Oracle Enterprise Repository For Troubleshooting Fusion Applications [ID 1399910.1]

    Categories: APPS Blogs

    7 things that can go wrong with Ruby 1.9 string encodings

    Raimonds Simanovskis - Sun, 2013-03-10 17:00

    Good news, I am back in blogging :) In recent years I have spent my time primarily on eazyBI business intelligence application development where I use JRuby, Ruby on Rails, mondrian-olap and many other technologies and libraries and have gathered new experience that I wanted to share with others.

    Recently I migrated eazyBI from JRuby 1.6.8 to the latest JRuby 1.7.3 version, and finally also migrated from Ruby 1.8 mode to Ruby 1.9 mode. The initial migration was not so difficult and was done in one day (thanks to unit tests, which caught the majority of differences between Ruby 1.8 and 1.9 syntax and behavior).

    But then, when I thought that everything was working fine, I ran into quite a few issues related to Ruby 1.9 string encodings which unfortunately were not identified by the test suite, nor by my initial manual tests. Therefore I wanted to share them, to help you avoid these issues in your Ruby 1.9 applications.

    If you are new to Ruby 1.9 string encodings then first read, for example, the tutorials about Ruby 1.9 String and Ruby 1.9 Three Default Encodings; Ruby 1.9 Encodings: A Primer and the Solution for Rails is also useful.

    1. Encoding header in source files

    I will start with the easy one - if you use any Unicode characters in your Ruby source files then you need to add

    # encoding: utf-8

    magic comment line in the beginning of your source file. This was easy as it was caught by unit tests :)

    2. Nokogiri XML generation

    The next issues were with XML generation using Nokogiri gem when XML contains Unicode characters. For example,

    require "nokogiri"
    doc = Nokogiri::XML::Builder.new do |xml|
      xml.dummy :name => "āčē"
    end
    puts doc.to_xml

    will give the following result when using MRI 1.9:

    <?xml version="1.0"?>
    <dummy name="&#x101;&#x10D;&#x113;"/>

    which might not be what you expect if you would like to use UTF-8 encoding also for Unicode characters in generated XML file. If you execute the same ruby code in JRuby 1.7.3 in default Ruby 1.9 mode then you get:

    <?xml version="1.0"?>
    <dummy name="āčē"/>

    which seems OK. But actually it is not OK if you look at generated string encoding:

    doc.to_xml.encoding # => #<Encoding:US-ASCII>
    doc.to_xml.inspect  # => "<?xml version=\"1.0\"?>\n<dummy name=\"\xC4\x81\xC4\x8D\xC4\x93\"/>\n"

    In the case of JRuby you see that the doc.to_xml encoding is US-ASCII (which is a 7-bit encoding) but the actual content uses UTF-8 8-bit encoded characters. As a result you might get ArgumentError: invalid byte sequence in US-ASCII exceptions later in your code.
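The mismatch can be reproduced with plain strings, independent of Nokogiri. Here is a minimal sketch (stdlib only, my own example, not from the original post) of a string whose bytes are UTF-8 but whose encoding tag is US-ASCII, and how force_encoding relabels it:

```ruby
# A string whose bytes include the UTF-8 encoding of "ā" (0xC4 0x81),
# but which is (wrongly) tagged as 7-bit US-ASCII.
s = "<dummy name=\"\xC4\x81\"/>".force_encoding("US-ASCII")
s.valid_encoding?            # => false

# force_encoding only changes the label; the bytes stay the same.
fixed = s.force_encoding("UTF-8")
fixed.valid_encoding?        # => true
fixed.include?("ā")          # => true
```

Note that force_encoding never transcodes anything; to actually convert bytes from one encoding to another you would use String#encode instead.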

    Therefore it is better to tell Nokogiri explicitly that you would like to use UTF-8 encoding in generated XML:

    doc = Nokogiri::XML::Builder.new(:encoding => "UTF-8") do |xml|
      xml.dummy :name => "āčē"
    end
    doc.to_xml.encoding # => #<Encoding:UTF-8>
    puts doc.to_xml

    <?xml version="1.0" encoding="UTF-8"?>
    <dummy name="āčē"/>

    3. CSV parsing

    If you do CSV file parsing in your application then the first thing you have to do is replace the FasterCSV gem (which you probably used in your Ruby 1.8 application) with the standard Ruby 1.9 CSV library.

    If you process user-uploaded CSV files then a typical problem is that even if you ask for files in UTF-8 encoding, quite often you will get files in different encodings (as Excel is quite bad at producing UTF-8 encoded CSV files).

    If you used the FasterCSV library with non-UTF-8 encoded strings then you got an ugly result, but nothing blew up:

    FasterCSV.parse "\xE2"
    # => [["\342"]]

    If you do the same in Ruby 1.9 with CSV library then you will get ArgumentError exception.

    CSV.parse "\xE2"
    # => ArgumentError: invalid byte sequence in UTF-8

It means that you now need to rescue and handle ArgumentError exceptions in all places where you parse user uploaded CSV files, so that you can show user friendly error messages.
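One hedged way to handle this (my own sketch, not code from the original application): rescue the error and retry with a guessed legacy encoding such as Windows-1252, a common Excel default. The helper name and the fallback encoding are assumptions; newer versions of the csv library wrap the encoding error in CSV::MalformedCSVError, so both exception classes are rescued.

```ruby
require 'csv'

# Hypothetical helper: parse as-is, and if the bytes are not valid UTF-8,
# retry assuming the upload was Windows-1252 encoded (a guess; adjust to
# whatever encodings your users actually send).
def parse_uploaded_csv(data)
  CSV.parse(data)
rescue ArgumentError, CSV::MalformedCSVError
  CSV.parse(data.dup.force_encoding("Windows-1252").encode("UTF-8"))
end

parse_uploaded_csv("caf\xE9,1\n")  # the 0xE9 byte is "é" in Windows-1252
# => [["café", "1"]]
```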

The problem with the standard CSV library is that it does not handle ArgumentError exceptions and does not wrap them in a MalformedCSVError exception with the line number where the error happened (as is done with other CSV format errors), which makes debugging very hard. Therefore I also "monkey patched" the CSV#shift method to add ArgumentError exception handling.

    4. YAML serialized columns

ActiveRecord has a standard way to serialize more complex data types (like Array or Hash) in a database text column. You use the serialize method to declare serializable attributes in your ActiveRecord model class definition. By default the YAML format (via the YAML.dump method) is used to serialize the Ruby object to text that is stored in the database.

But you can get big problems if your data contains strings with Unicode characters, as the YAML implementation changed significantly between Ruby 1.8 and 1.9:

• Ruby 1.8 used the so-called Syck library
• JRuby in 1.8 mode used a Java based implementation that tried to act like Syck
• Ruby 1.9 and JRuby in 1.9 mode use the new Psych library

Let's see what happens with YAML serialization of a simple Hash with a string value containing Unicode characters.

    On MRI 1.8:

    YAML.dump({:name => "ace āčē"})
    # => "--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n"

    On JRuby 1.6.8 in Ruby 1.8 mode:

    YAML.dump({:name => "ace āčē"})
    # => "--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n"

    On MRI 1.9 or JRuby 1.7.3 in Ruby 1.9 mode:

    YAML.dump({:name => "ace āčē"})
    # => "---\n:name: ace āčē\n"

As we see, all three results are different. Now let's see what happens after we migrate our Rails application from Ruby 1.8 to Ruby 1.9. All our data in the database was serialized using the old YAML implementations, but when it is loaded in our application it is deserialized using the new Ruby 1.9 YAML implementation.

    When using MRI 1.9:

    YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")
    # => {:name=>"ace \xC4\x81\xC4\x8D\xC4\x93"}
    YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")[:name].encoding
    # => #<Encoding:ASCII-8BIT>

So the string that we get back from the database is no longer in UTF-8 encoding but in ASCII-8BIT encoding, and when we try to concatenate it with UTF-8 encoded strings we will get Encoding::CompatibilityError: incompatible character encodings: ASCII-8BIT and UTF-8 exceptions.

When using JRuby 1.7.3 in Ruby 1.9 mode the result will again be different:

    YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")
    # => {:name=>"ace Ä\u0081Ä\u008DÄ\u0093"}
    YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")[:name].encoding
    # => #<Encoding:UTF-8>

Now the result string has UTF-8 encoding but its content is damaged. That means we will not even get exceptions when concatenating the result with other UTF-8 strings; we will just see strange garbage instead of the Unicode characters.
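In this particular JRuby case the damage is, strictly speaking, reversible, because each UTF-8 byte was mis-decoded as a separate Latin-1 style character. A sketch of an ad-hoc repair (my own assumption; it only works when the damage is exactly this kind of mis-decode):

```ruby
# The damaged string from the example above. Encoding it back to
# ISO-8859-1 restores the original UTF-8 bytes, which can then be
# re-tagged as UTF-8.
damaged  = "ace Ä\u0081Ä\u008DÄ\u0093"
repaired = damaged.encode("ISO-8859-1").force_encoding("UTF-8")
repaired  # => "ace āčē"
```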

The problem is that there is no good way to convert your database data from the old YAML serialization to the new one. On MRI 1.9 it is at least possible to switch YAML back to the old Syck implementation, but on JRuby 1.7 in Ruby 1.9 mode switching to the old Syck implementation is not possible.

My current workaround is a modified serialization class that I use in all model class definitions (this works in Rails 3.2 and maybe in earlier Rails 3.x versions as well):

    serialize :some_column, YAMLColumn.new

The YAMLColumn implementation is a copy of the original ActiveRecord::Coders::YAMLColumn implementation. I modified the load method as follows:

def load(yaml)
  return object_class.new if object_class != Object && yaml.nil?
  return yaml unless yaml.is_a?(String) && yaml =~ /^---/
  begin
    # if yaml string contains old Syck-style encoded UTF-8 characters
    # then replace them with corresponding UTF-8 characters
    # FIXME: is there better alternative to eval?
    if yaml =~ /\\x[0-9A-F]{2}/
      yaml = yaml.gsub(/(\\x[0-9A-F]{2})+/){|m| eval "\"#{m}\""}.force_encoding("UTF-8")
    end
    obj = YAML.load(yaml)
    unless obj.is_a?(object_class) || obj.nil?
      raise SerializationTypeMismatch,
        "Attribute was supposed to be a #{object_class}, but was a #{obj.class}"
    end
    obj ||= object_class.new if object_class != Object
    obj
  rescue *RESCUE_ERRORS
    yaml
  end
end

Currently this patched version works for JRuby data, where non-ASCII characters were replaced by \xNN style fragments (a byte with hex code NN). When loading existing data from the database we check whether it has any such \xNN fragment and, if so, these fragments are replaced with the corresponding UTF-8 encoded characters. If anyone has a better suggestion for an implementation without eval then please let me know in comments :)
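For what it is worth, here is one possible eval-free alternative for the JRuby branch (my own suggestion, not code from the patched class): convert each \xNN escape to its byte value directly.

```ruby
# Hypothetical replacement for the eval-based gsub above: work on a binary
# copy of the string (so mixed content cannot raise
# Encoding::CompatibilityError), turn each "\xNN" escape into its byte with
# Integer#chr, then re-tag the result as UTF-8.
def unescape_syck_bytes(yaml)
  yaml.b.gsub(/\\x([0-9A-F]{2})/) { $1.hex.chr }.force_encoding("UTF-8")
end

unescape_syck_bytes('ace \xC4\x81\xC4\x8D\xC4\x93')  # => "ace āčē"
```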

If you need to create something similar for MRI then you would probably need to check whether the database text contains a !binary | fragment and, if so, somehow transform it into the corresponding UTF-8 string. Does anyone have a working example of this?
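As a starting point, the following sketch seems to work for the MRI case (an assumption on my part; the helper name is made up and it only handles the :name key from the examples above). Psych already decodes the !binary base64 payload, so the remaining work is re-tagging the resulting ASCII-8BIT string as UTF-8:

```ruby
require 'yaml'

# Hypothetical helper for old MRI Syck dumps: Psych decodes "!binary |"
# base64 into an ASCII-8BIT string, so force UTF-8 on it afterwards.
def load_old_syck_name(yaml)
  value = YAML.load(yaml)[:name]
  value.encoding == Encoding::ASCII_8BIT ? value.force_encoding("UTF-8") : value
end

load_old_syck_name("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")
# => "ace āčē"
```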

    5. Sending binary data with default UTF-8 encoding

I am using the spreadsheet gem to generate dynamic Excel export files. The following code was used to get the generated spreadsheet as a String:

    book = Spreadsheet::Workbook.new
    # ... generate spreadsheet ...
    buffer = StringIO.new
    book.write buffer

    And then this string was sent back to browser using controller send_data method.

The problem was that in Ruby 1.9 mode StringIO will by default generate strings with UTF-8 encoding. But the Excel format is a binary format, and as a result send_data failed with exceptions saying that the UTF-8 encoded string contains non-UTF-8 byte sequences.

The fix was to set the StringIO buffer encoding to ASCII-8BIT (or you can use the alias BINARY):

buffer = StringIO.new
buffer.set_encoding(Encoding::ASCII_8BIT)

So you need to remember that in all places where you handle binary data you cannot use strings with the default UTF-8 encoding, but need to specify ASCII-8BIT encoding.

    6. JRuby Java file.encoding property

The last two issues are JRuby and Java specific. Java has a system property file.encoding which is not related just to file encoding but determines the default character set and string encoding in many places.

If you do not specify file.encoding explicitly then the Java VM on startup will try to determine its default value based on the host operating system locale. On Linux it might be set to UTF-8, on Mac OS X by default it will be MacRoman, and on Windows it will depend on the Windows default locale setting (which will not be UTF-8). Therefore it is always better to set the file.encoding property explicitly for Java applications (e.g. using the -Dfile.encoding=UTF-8 command line flag).

file.encoding determines which default character set the java.nio.charset.Charset.defaultCharset() method call returns. Even if you change the file.encoding property at runtime it will not change the java.nio.charset.Charset.defaultCharset() result, which is cached during startup.

JRuby uses java.nio.charset.Charset.defaultCharset() in very many places to get the default system encoding and uses it when constructing Ruby strings. If java.nio.charset.Charset.defaultCharset() does not return the UTF-8 character set then it might result in problems when using Ruby strings with UTF-8 encoding. Therefore in the JRuby startup scripts (jruby, jirb and others) the file.encoding property is always set to UTF-8.

    So if you start your JRuby application in standard way using jruby script then you should have file.encoding set to UTF-8. You can check it in your application using ENV_JAVA['file.encoding'].

    But if you start your JRuby application in non-standard way (e.g. you have JRuby based plugin for some other Java application) then you might not have file.encoding set to UTF-8 and then you need to worry about it :)

    7. JRuby Java string to Ruby string conversion

I got a file.encoding related issue in the eazyBI reports and charts plugin for JIRA. In this case the eazyBI plugin is an OSGi based plugin for the JIRA issue tracking system, and JRuby is running as a scripting container inside the OSGi bundle.

JIRA startup scripts do not specify a file.encoding default value, so it typically gets the operating system default value. For example, on my Windows test environment it is set to the Windows-1252 character set.

If you call methods of Java objects from JRuby then it will automatically convert java.lang.String objects to Ruby String objects, but the Ruby strings in this case will use an encoding based on java.nio.charset.Charset.defaultCharset(). So even though a Java string (which internally uses the UTF-16 character set) can contain any Unicode character, it will be returned to Ruby not as a string with UTF-8 encoding but, in my case, with Windows-1252 encoding. As a result all Unicode characters which are not in the Windows-1252 character set will be lost.

And this is very bad, because everywhere else JIRA does not use java.nio.charset.Charset.defaultCharset() and can handle and store all Unicode characters even when file.encoding is not set to UTF-8.

Therefore I finally managed to create a workaround which forces all Java strings to be converted to Ruby strings using UTF-8 encoding.

    I created custom Java string converter based on standard one in org.jruby.javasupport.JavaUtil class:

package com.eazybi.jira.plugins;

import org.jruby.javasupport.JavaUtil;
import org.jruby.Ruby;
import org.jruby.RubyString;
import org.jruby.runtime.builtin.IRubyObject;

public class RailsPluginJavaUtil {
    public static final JavaUtil.JavaConverter JAVA_STRING_CONVERTER = new JavaUtil.JavaConverter(String.class) {
        public IRubyObject convert(Ruby runtime, Object object) {
            if (object == null) return runtime.getNil();
            // PATCH: always convert Java string to Ruby string with UTF-8 encoding
            // return RubyString.newString(runtime, (String)object);
            return RubyString.newUnicodeString(runtime, (String)object);
        }
        public IRubyObject get(Ruby runtime, Object array, int i) {
            return convert(runtime, ((String[]) array)[i]);
        }
        public void set(Ruby runtime, Object array, int i, IRubyObject value) {
            ((String[])array)[i] = (String)value.toJava(String.class);
        }
    };
}

Then in my plugin initialization Ruby code I dynamically replaced the standard Java string converter with my customized converter:

    java_converters_field = org.jruby.javasupport.JavaUtil.java_class.declared_field("JAVA_CONVERTERS")
    java_converters_field.accessible = true
    java_converters = java_converters_field.static_value.to_java
    java_converters.put(java.lang.String.java_class, com.eazybi.jira.plugins.RailsPluginJavaUtil::JAVA_STRING_CONVERTER)

As a result, all Java strings returned by Java methods are now converted to Ruby strings using UTF-8 encoding, and not using the encoding from the file.encoding Java property.

    Final thoughts

    My main conclusions from solving all these string encoding issues are the following:

• Use UTF-8 encoding as much as possible. Handling conversions between different encodings will be much harder than you expect.
• Use example strings with Unicode characters in your tests. I didn't identify all these issues initially when running tests after the migration, because not all tests used example strings with Unicode characters. So next time, instead of using a "dummy" string in your test, use "dummy āčē" everywhere :)

    And please let me know (in comments) if you have better or alternative solutions for the issues that I described here.

    Categories: Development

    Approvals in Fusion Procurement

    Oracle e-Business Suite - Fri, 2013-03-08 02:08
    Key features exist in Fusion Procurement approvals

    There are many useful features that can be used with Fusion Procurement. Here are just some of the more significant examples:

    • Both Serial and Parallel Approval for all document types.
    • Various ways to configure responses, including features like first responder wins to help avoid lengthy processing times.
    • Notification by email as well as several rich dashboard components (e.g. worklist) to show items currently awaiting action.
    • Expiration, reminder and escalation features on pending actions.
• Delegation and vacation rules to forward actions to dedicated proxies as needed.
    • Rich notification content including clickable links to go directly to document details.
    • Wide range of attribute values to use in the creation of custom approval processing rules.
    What hierarchies can be used to generate approval lists?

Fusion Applications provides standard support for the following methods to derive the list of approvers:

• Supervisor Hierarchy. This uses the HCM employee definition, leveraging the specific assignment of a supervisor to each employee. The AMX engine calls HCM to request the users in the hierarchy, passing in a starting position (normally the person submitting the purchasing document) and the maximum number of levels to climb up the hierarchy.
• Position Hierarchy. This uses the Job definitions in HCM and selects all employees tied to the positions included in the selected hierarchy. Again this accepts the starting position (person) and the top job level to climb to before completing.
• Approval Group is simply a group of predefined people. This can be a static list, or it can be generated dynamically at run-time based on approval document attribute data.
    • Job Level. This works very similarly to the Position Hierarchy whereby it uses start and end Job definitions to traverse the hierarchy and select approvers.
What is Approvals Management (AMX)?

    Approvals Management (AMX) is an independent product that comes from the Fusion Middleware SOA Server, and provides general approval services to any product in Fusion Applications. AMX can be considered the meeting point between the powerful and flexible Oracle Business Rules capability and the advanced process control capabilities of Oracle Human Workflow.
The BPEL process that controls approvals has points at which it invokes the AMX services to initiate the approval process. The BPEL process controls the procurement process around approvals; however, AMX and Human Workflow are responsible for the entire approval process.

    AMX features include:

    • The execution of rules that govern the selection and generation of approver lists.
    • The sending of notifications to the participants on the generated list.
    • The processing of responses from those approvers, and selection of appropriate next approval action.
    • The return of the completion status back to the Procurement BPEL process for actioning.

Note that eRecords and eSignatures are still not supported by AMX in version 1.0.

    For Complete Details Please refer to Document Approval in Fusion Procurement Products [ID 1319614.1]

    Categories: APPS Blogs

    OWB Repository Install on RAC using OMBPlus

    Antonio Romero - Thu, 2013-03-07 17:33

There are a few documents on the Oracle Support site http://support.oracle.com that describe how to check whether OWB is installed correctly on RAC and Exadata (Doc ID 455999.1) and how to install a Warehouse Builder repository on RAC (Doc ID 459961.1).

     This blog will just show you how to install the OWB repository on RAC using OMBPlus.

    The steps are:

    • Database preparation
    • Repository installation on the first Node
    • Copy of the rtrepos.properties File on all Nodes
    • Registration of all the other Nodes
    • Check the installation
    Step 1: Database preparation

    UNIQUE Service Name
    Make sure that EACH Node in the RAC has a UNIQUE service_name. If this is not the case, then add a unique service_name with the following command:

    srvctl add service -d dbname -s instn -r instn

The resulting service name is instn.clusterdomainname. For example, if the instance name is racsrvc1, then the service name could be racsrvc1.us.oracle.com.

    "srvctl" can be used to manage the RAC services:
    srvctl [status|stop|start] service -d <db> -s <service-name>

    Details are described in the OWB Installation Guide:
    Paragraph "Ensuring the Availability of Service Names for Oracle RAC Nodes"

    LISTENER Configuration
    Make sure that EACH Node has the LISTENER configured correctly. The listener on each Node should be able to manage connections to the unique database service of each Node.

    Step 2: Repository installation on the first Node

We assume that the RAC has 2 nodes, NODE1 and NODE2, that the database instance is set up, and that the OWB software has been installed on all nodes of the RAC.

Start the OMBPlus shell on the primary node, say Node 1, from <OWB_HOME>/owb/bin/unix/OMBPlus.sh

Execute the repository seeding command (see Doc ID 459961.1 for the exact OMBPlus syntax); on success you should see:


    OWB repository seeding completed.

    OMB+> exit

     Step 3: Copy of the rtrepos.properties File on all Nodes

During the repository seeding, a file rtrepos.properties is created/updated on Node 1 in the <OWB_HOME>/owb/bin/admin directory. This file should be copied to the same location on all RAC nodes; in this case to <OWB_HOME>/owb/bin/admin on Node 2.

    Step 4: Registration of all the other Nodes

After the repository installation, all RAC nodes should be registered. This enables the OWB Runtime Service to fail over to one of the other nodes when required (e.g. because of a node crash). The registration process updates the tables OWBRTPS and WB_RT_SERVICE_NODES with node specific details such as the Oracle home where the OWB software has been installed on the node, and the host, port and service connection details for the instance running on the node.


RAC instance has been registered.

    Step 5: Check the installation

Check that the OWB home values in the following tables are correct:

Select * from owbsys.owbrtps;

Select * from owbsys.wb_rt_service_nodes;

Connect as OWBSYS to the unique net service name on each node and execute the script located in the <OWB_HOME>/owb/rtp/sql directory

    PL/SQL procedure successfully completed. 

If the service is not available, start it using the following script:


    Your installation of the OWB repository is now complete.

You can also use the following OMBPlus commands to create an OWB workspace and a workspace owner.

    In SQL*Plus as sysdba


    create user WORKSPACE_OWNER identified by PASSWORD;

    grant resource, connect to WORKSPACE_OWNER;


    grant create session to WORKSPACE_OWNER;

    In OMBPlus


    Workspace has been created.

    OMB+> exit

    OWB - Securing your data with Transparent Data Encryption

    Antonio Romero - Thu, 2013-03-07 12:40

Oracle provides secure and convenient functionality for securing data in your data warehouse: tables can be designed in OWB utilizing the Transparent Data Encryption capability. This is done by configuring specific columns in a table to use encryption.

When users insert data, the Oracle database transparently encrypts it and stores it in the column. Similarly, when users select the column, the database automatically decrypts it. Since all this is done transparently, without any change to the application code, the feature has an appropriate name: Transparent Data Encryption.

Encryption requires users to apply an encryption algorithm and an encryption key to the clear-text input data, and to successfully decrypt an encrypted value, users must know the same algorithm and key. In the Oracle database, users can specify that an entire tablespace be encrypted, or selected columns of a table. From OWB we support column encryption that can be applied to tables and external tables.

    We secure the capture of the password for encryption in an OWB location, just like other credentials. This is then used later in the configuration of the table.

    We can configure a table and for columns define any encryption, including the encryption algorithm, integrity algorithm and the password.

Then when the table is deployed from OWB, the TDE information is incorporated into the DDL for the table.

    When data is written to this column it is encrypted on disk. Read more about this area in the Oracle Advanced Security white paper on Transparent Data Encryption Best Practices here.

    Oracle Linux 6.4 Announced

    Asif Momen - Thu, 2013-03-07 10:15
The Oracle Linux team has announced the availability of Oracle Linux (OL) 6.4. You can download OL 6.4 from Oracle's E-Delivery website (the link is below):


To learn more about OL 6.4, click the link below.


    Happy downloading!!! 

    How to Find Software Versions and Patches in an Oracle Business Intelligence Applications Environment

    Oracle e-Business Suite - Thu, 2013-03-07 01:20

This MOS Note will help consultants find the exact version and patch level for installed components. This is very useful when you are logging an Oracle Service Request.


    OBIA: How to Find Software Versions and Patches in an Oracle Business Intelligence Applications Environment? [ID 1519745.1]

    Categories: APPS Blogs

    Easy application development with Couchbase, Angular and Node

    Tugdual Grall - Wed, 2013-03-06 04:35
Note: This article was written in March 2013; since then, Couchbase and its drivers have changed a lot. I am not working with/for Couchbase anymore and have no time to update the code. A friend of mine wants to build a simple system to capture ideas, and votes. Even if you can find many online services to do that, I think it is a good opportunity to show how easy it is to develop new...


    Subscribe to Oracle FAQ aggregator