Feed aggregator

Using Virtualbox images with Hyper-V

Dietrich Schroff - Thu, 2013-03-21 15:39
In 2008 I wrote about using VMware images with VirtualBox. To migrate a host from VirtualBox to Hyper-V, you have to do nearly the same, but you have to convert the hard disk:
C:\Users\schroff>"\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd
"c:\Users\schroff\VirtualBox VMs\Debian64-DS\Debian64-DS.vdi"
"c:\Users\schroff\VirtualBox VMs\Debian64-DS-Hyper-V\Debian-DS.vhd"
-format vhd
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'vhd'. UUID: c42129a8-c145-4a50-908c-023c8ed2b711
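If several disks need converting, the clonehd call can be scripted. A minimal sketch, shown in dry-run form so it works without VirtualBox installed (the directory names and disk names here are stand-ins; remove the leading `echo` to execute for real, and note that newer VirtualBox releases call this subcommand `clonemedium`):

```shell
# Dry-run batch conversion of .vdi images to .vhd.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/Debian64-DS.vdi" "$src/Other-VM.vdi"   # stand-ins for real images

for vdi in "$src"/*.vdi; do
  base=$(basename "$vdi" .vdi)
  # Remove "echo" to actually run the conversion
  echo VBoxManage clonehd "$vdi" "$dst/$base.vhd" --format vhd
done
```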

      Hyper-V: Compile Linux Kernel with Microsoft device drivers

      Dietrich Schroff - Thu, 2013-03-21 14:41
      After learning that Linux inside Hyper-V only works with the "legacy" network adapter (look here), I tried to build a kernel with the drivers (Microsoft has added its drivers to Linux kernel versions >2.6.32).
      There is one nice tutorial out there:
      • IT FROM ALL ANGLES: Hyper-V Guests: Compile Linux Kernel 2.6.32 on Debian 
      But the menus of menuconfig have changed with kernel version 3.0. Microsoft's Hyper-V kernel modules are no longer located inside the staging section. They can now be found here:
      Device Drivers --> Network device support -->
      Device Drivers -->  Microsoft Hyper-V guest support -->
      Device Drivers --> HID Support --> Special HID Drivers -->

      All other steps work like described in IT FROM ALL ANGLES: Hyper-V Guests: Compile Linux Kernel 2.6.32 on Debian.
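      For orientation, here is a hedged sketch of the matching .config fragment for a 3.x kernel (the exact symbol names may differ slightly between kernel versions, so verify them in menuconfig):

```
# Hyper-V guest support (sketch for 3.x kernels)
CONFIG_HYPERV=m            # VMBus driver
CONFIG_HYPERV_NET=m        # synthetic network adapter
CONFIG_HYPERV_STORAGE=m    # synthetic storage controller
CONFIG_HYPERV_UTILS=m      # integration services (heartbeat, shutdown, ...)
CONFIG_HID_HYPERV_MOUSE=m  # synthetic mouse (under HID Support)
```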
      Microsoft offers an ISO image for installing the kernel modules for certain kernel versions of these Linux distributions:
      • Red Hat Enterprise Linux 5.7, 5.8, 6.0-6.3 x86 and x64
      • CentOS 5.7, 5.8, 6.0-6.3 x86 and x64

      The ISO-image can be downloaded from this location.

      Note: If you try to build the kernel in your virtual machine, you need at least 6 GB free in /usr/src for compiling your kernel...

      The Part-timer’s Dilemma

      TalentedApps - Thu, 2013-03-21 13:50

      It struck me as I was talking to a dynamite woman who has chosen to work part-time. She was looking for something that would be challenging and engaging, but not so critical that deadlines loomed large over her. She was struggling to find it.

      Truth? It does not exist.

      If it is not critical, if it is not showstopper important, if things will chug along just fine without it, …  it is probably not engaging enough for someone of her capability and intellect.

      What she needs is something engaging, something important, something critical – just in smaller chunks.

      We are no longer paid for our time – we are paid for outcomes, and if our outcome is not important, it really does not matter how long or little we worked.

      When you go part-time, lean in.

      The Internet

      Chet Justice - Wed, 2013-03-20 20:56
      Have you seen this State Farm ad?

      I think it's hilarious.

      Riding to batting practice with LC, he starts up with me...

      LC: (in response to some statement I made) "Where'd you hear that?"
      Me: "The Internet"
      LC: "And you believed it?"
      Me: "Yeah, they can't put anything on the internet that isn't true."
      LC: "Where'd you hear that?"
      Together: "The Internet"

      We also do the "And then...?" skit from Dude, Where's My Car?. He used to be able to rattle off the saying from Tommy Boy, "You can get a good look at a t-bone by sticking your head up a bull's..." - I'm pretty sure this is better than that.
      Categories: BI & Warehousing

      Paper "Professional Software Development using APEX"

      Rob van Wijk - Wed, 2013-03-20 12:58
      As announced on Twitter yesterday, my paper titled "Professional Software Development Using Oracle Application Express" has been put online. I'm copying the first two paragraphs of the summary here, so you can decide if you want to read the rest as well: Software development involves much more than just producing lines of code. It is also about version control, deployment to other environments, ...

      InfoQ : Running the Largest Hadoop DFS Cluster

      Karl Reitschuster - Wed, 2013-03-20 08:46

      Since I attended a Big Data event, the Frankfurter Datenbanktage 2013, I have started to take a look at non-relational techniques too. The RDBMS is not the correct, fitting and fulfilling answer to every data-related IT challenge.

      I have frequently wondered how Facebook could handle such a dramatic amount of users and data growth. I found an interesting presentation by Facebook's HDFS development lead Hairong Kuang on optimizing HDFS (Hadoop DFS) for scalability, storage efficiency and availability.

      An RDBMS would not scale to that amount of load; the theoretical reason is explained by the CAP theorem, which I will post about later.

      Now to the presentation on InfoQ :  http://www.infoq.com/presentations/Hadoop-HDFS-Facebook





      Chet Justice - Tue, 2013-03-19 21:08
      I've been scratching my eyes out lately trying to reverse engineer lots of PL/SQL.

      One thing I've seen a lot of is calls to dbms_output.put_line. Fortunately, I've seen some dbms_application_info.set_module and other system calls too. But back to that first one.

      1. When I used dbms_output, I would typically only use it in development. Once done, I would remove all calls to it, test, and promote to QA. It would never survive the trip to production.
      2. Typically, when I used it in development, I would tire of typing out d b m s _ o u t p u t . p u t _ l i n e, so I would either create a standalone procedure or create a private procedure inside the package, something like this (standalone version):
      PROCEDURE p( p_text IN VARCHAR2 )
      IS
      BEGIN
        dbms_output.put_line( p_text );
      END p;
      Easy. Then, in the code, I would simply use the procedure p all over the place...like this:
      DECLARE
        l_start_time DATE;
        l_end_time   DATE;
      BEGIN
        l_start_time := sysdate;
        p( 'l_start_time: ' || l_start_time );

        --do some stuff here
        --maybe add some more calls to p

        l_end_time := sysdate;
        p( 'l_end_time: ' || l_end_time );
      END;

      Since the procedure is 84 characters long, I only have to use the procedure p 4 times to get the benefit. Yay for me...I think. Wait, I like typing.
      Categories: BI & Warehousing

      PeopleSoft 9.2 to be released March 22

      Brent Martin - Mon, 2013-03-18 15:33

      Oracle announced today at the Alliance '13 conference that PeopleSoft 9.2 will be generally available on March 22.

      Here's the link to the press release:  http://www.oracle.com/us/corporate/press/1920557

      What and where is your concealed talent?

      TalentedApps - Mon, 2013-03-18 12:21

      Now that’s an expression for a skeptic!

      The global workforce is loaded with concealed talent, resulting in lost value and opportunities for both business and workers.

      Why is talent concealed? Two things really:

      1. We only see what we are looking for.
      2. We aren’t using reputation effectively.

      The first causes the problem. The second is why it hasn’t been solved.

      Concealed talent brings no reputation. – Desiderius Erasmus 1466/7/9?-1536

      Who knew that a famous Renaissance humanist had such insight into two important 21st century concepts: talent and reputation?

      The dirty lens of requirements

      We can spend a lot of time coming up with job requirements and descriptions that don’t perform either function very well. Worse yet, they cause us to look at people in those roles solely through the lens of those requirements.

      Anything else they might be able to add value with is ignored or overlooked most of the time, leading to lost value for the business and lost opportunity for the employee. This other talent is concealed.

      “I’m an excellent driver.” – Rain Man

      So what’s the answer? How do we make sure we know what concealed talent they have?

      Is it self-identified skills? That’s a start, but it comes with its own set of problems, e.g. the “Lake Wobegon Syndrome” where everyone is above average.

      Is it endorsements? That’s slightly better in that at least it’s someone else (we hope) saying you are good at something. Recent experience on a certain professional social networking service has led many to conclude that it’s a bit devalued.

      What is it that’s missing from endorsements? It’s the validity of the endorsement.

      Says who?

      The answer is reputation. Sounds simple. But you have to do it right.

      Your reputation is built on the perceptions of a wide array of perspectives of people who have worked with you, experienced your work, or heard about it from others. That’s both good and bad because sometimes reputation can be very different from reality.

      The trick is to find out whose perspectives and which perceptions lead to more valid endorsements of talent. For instance, it doesn’t count so much if your 24-hour fitness instructor endorses your carbon fiber-based fuselage design skills, but maybe it’s someone well-respected in carbon fiber-based fuselage design (or perhaps just design around carbon-fiber materials or fuselages in general) who does. And your instructor might be better suited to endorse your self-discipline and ability to stay focused on goals.

      In other words, those whose reputation is strong in an area are likely to be a more valid judge of talent in that area. So use that. It’s the gift that keeps giving, because those who get high marks by valid judges are themselves likely to be valid judges of others. Furthermore, reputation backlash can put some restraint on gratuitous endorsements. This isn’t earth-shattering news, but it’s not being used enough.

      If only we knew what we know…

      “If only HP knew what HP knows, we would be three-times more productive.” – Lew Platt

      Find out what your company knows. Use reputation as a tool to discover the concealed talent in your workforce.

      Picture from Wikimedia Commons.

      Hyper-V: Installing a debian linux in a virtual machine - trouble with the (non legacy) network adapter

      Dietrich Schroff - Sun, 2013-03-17 15:03
      The first thing to try with a virtualization solution like Hyper-V is to install a guest. So let's try Debian Linux.
      The installer walks through these points:
      1. Name & path of the virtual machine
      2. RAM
      3. Network (how to configure a virtual switch with internet connectivity or how to configure internet connectivity with NAT)
        at this point you have to choose no connection (I will explain this later)
      4. Create a hdd
      5. The summary should look like this:
      Next step you have to open the configuration of this virtual machine. There you can see a network adapter with the following properties:
      Bandwidth management? This sounds really good. There are two types of network adapters:

    1. A network adapter requires a virtual machine driver in order to work, but offers better performance. This driver is included with some newer versions of Windows. On all other supported operating systems, install integration services in the guest operating system to install the virtual machine driver. For instructions, see Install a Guest Operating System. For more information about which operating systems are supported and which of those require you to install integration services, see About Virtual Machines and Guest Operating Systems (http://go.microsoft.com/fwlink/?LinkID=128037).
    2. A legacy network adapter works without installing a virtual machine driver. The legacy network adapter emulates a physical network adapter, multiport DEC 21140 10/100TX 100 MB. A legacy network adapter also supports network-based installations because it includes the ability to boot to the Pre-Execution Environment (PXE boot). However, the legacy network adapter is not supported in the 64-bit edition of Windows Server 2003. 
    3. And now think about which type of network adapter the standard kernel has a kernel module for (or for which you can get sources)... Right. Only the legacy adapter.
      So you have to delete the network adapter and add a legacy network adapter. After this step, your virtual machine should look like this:
      The bandwidth management is gone, but your kernel can use the tulip module and your network is working... Here you have to choose a virtual switch, which you can create as described in these two postings: how to configure a virtual switch with internet connectivity or how to configure internet connectivity with NAT.

      It is not really surprising that Microsoft adds to each new virtual machine, as the default, a network adapter which only works on a few Linux distributions. You can download drivers from Microsoft via this page (scroll down to "Integration Services"). But the default is added to each and every new virtual machine, so you have to delete it and add the "legacy" adapter every time.

      But after knowing this, it is no problem to install Debian Linux (or any other Linux) on your Hyper-V.

      Hyper-V: How to configure NAT for virtual machines

      Dietrich Schroff - Sun, 2013-03-17 04:03
      In my last posting I explained how to configure a vEthernet adapter to get connectivity to the internet. But there was one "problem": you had to provide one separate IP for each virtual host you want to connect to the internet.
      But there is a solution (NAT) for this problem, and it is easy to configure with Hyper-V on Windows:
      [If you have not configured the "bridge" solution I explained in the last posting, then skip step 1 and start with step number 2.]
      1. Unbridge your VSwitchExternal from Wifi
        (select both adapters in network adapters, right-click and use "remove bridge")
      2. Create a new internal virtual switch via Hyper-V's virtual switch manager (look here, how to do this) and name it VSwitchNAT
      3. Edit properties of your Wifi adapter
        (right click and then properties)
      4. Open the "Sharing" tab and enable both checkboxes.
        Choose "VSwitchNAT" as the home networking connection
      And after that, your virtual machines use a private subnet which is NATted by your laptop. This private subnet can be configured via VSwitchNAT:
      • Edit the properties of the VSwitchNAT vEthernet adapter
      • Edit the properties of IPv4; here you can edit the subnet

      New book

      alt.oracle - Sat, 2013-03-16 14:46
      Just a quick announcement that my second book is available from Packt Publishing.  OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide (again with the long title) is designed to be a different kind of certification guide.  Generally, it seems to me that publishers of Oracle certification guides assume that the only people who want to become certified are those with a certain level of experience, like a working DBA with several years on the job.  So, these guides make a lot of assumptions about the reader.  They end up being more a collection of facts for the test than a cohesive learning experience.  My book attempts to target a different kind of reader.  I've observed in the last several years that many people from non-database backgrounds are setting out to get their OCA or OCP certifications.  These folks don't necessarily bring a lot of knowledge or experience to this attempt, just a strong drive to learn.  My two books are designed to start from the beginning, then take the reader through all of the subjects needed for the certification test.  They're designed to be read straight through, completing the examples along the way.  In a sense, I'm attempting to recreate the experience of one of my Oracle classes in book form.

      You'll find the book at these fine sellers of books.

      Packt Publishing
      Barnes and Noble

      Categories: DBA Blogs

      Getting it Right: 100KM, Team of 4 and 48 Hours

      TalentedApps - Thu, 2013-03-14 23:50

      This post is about an endeavor undertaken by our team of four to raise funds for charity and to walk 100KM within 48 hours, meeting the challenge set by Oxfam Trailwalker.  It highlights our journey and the outcome, and re-emphasizes some well-known facts.

      We started with goal setting; success was the obvious goal, so success criteria were defined at the start in consultation with all stakeholders. Key Success Indicators (KSIs) were to raise enough funds to qualify for the event (i.e. 50K INR) and to complete the 100KM walk within 48 hours with all four members. We also identified stretch goals at the initiation phase itself: to raise funds of 150K+ INR for charity and to complete the 100KM walk within 40 hours with all four members.

      Planning for the event went through a progressive elaboration process. As a team, we had to cross nine checkpoints, registering the entry and exit of the full team. Being a team-building exercise, it was required that the team of four walk together, supporting each other, the fastest member walking with the slowest member, and completing the event as a team. As the activities (aka checkpoints) were already identified and sequenced, we estimated the duration of each activity to develop a schedule in accordance with our team goal.

      Communication among team members was planned thoroughly. Similarly, we planned how to communicate with stakeholders (family members, well-wishers, friends who donated to the cause, etc.) before and during the event. We performed a SWOT analysis for the risks and prepared a risk response strategy accordingly. We planned and conducted procurement as per the team's needs for the event.

      Finally, on the D-Day, we first-timers were at the event venue with almost a month of preparation. We started almost 10 minutes late from the starting point of the 100KM walk of energy, determination and courage. We arrived at the finish point exactly 39 hours and 38 seconds after the event starting time. It might not be an exceptional achievement from an outsider's point of view, but as our team achieved the predefined KSIs, this endeavor was a success for us.

      It was a fun-filled memorable walk where confrontation was used as a technique to overcome difference of opinions and group decision-making was practiced for team decisions.

      Four takeaways from this endeavor, which are also keys to successful project management, are:

      • Success criteria must be defined at the beginning in consultation with all stakeholders.
      • Communication breeds success. A well-planned communication strategy is vital for a project's success.
      • Change is inevitable. You need to foresee challenges, risks and always need to have a change management plan in place.
      • Working together works. Remember the best team doesn’t win as often as the team that gets along best.

      Automatic Shared Memory Management problem ?

      Bas Klaassen - Thu, 2013-03-14 05:30
      From time to time one of our 10g databases seems to 'hang'. Our monitoring shows a 'time out' on different checks, and when trying to connect using SQL, the session hangs. No connection is possible. A few days ago something like this happened again. Instead of bouncing the database, I decided to look for clues to find out why the database was 'hanging'. The server itself did ...
      Categories: APPS Blogs

      OWB - Compressing Files in Parallel using Java Activity

      Antonio Romero - Wed, 2013-03-13 12:36

      Yesterday I posted a user function for compressing/decompressing files using parallel processes in ODI. You can pick up the same code and use it from an OWB process flow, invoking the Java function from a Java activity within the flow.

      The JAR used in the example below can be downloaded here. From the process flow, OWB invokes the main method within the ZipFile class, passing the parameters to the function for the input and output directories and also the number of threads. The parameters are passed as a single string in OWB, each parameter wrapped in ?, so we have a string like ?param1?param2?param3? and so on. In the example I pass the input directory d:\inputlogs as the first parameter and d:\outputzips as the second; the number of processes used is 4. I have escaped my backslashes in order to get this to work on Windows.
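      The ?-wrapped parameter convention can be confusing at first, so here is a quick illustration of how such a string splits (plain shell, not OWB; the parameter values are the hypothetical ones from the example above):

```shell
# Hypothetical parameter string as passed by OWB, each value wrapped in "?"
params='?d:\inputlogs?d:\outputzips?4?'

# Splitting on "?" leaves empty fields at both ends, so the real
# parameters are awk fields 2 .. NF-1:
printf '%s\n' "$params" | awk -F'?' '{ for (i = 2; i < NF; i++) print $i }'
```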

       The classpath has the JAR file with the compiled class in it, and the classpath value can be specified on the activity, carefully escaping the path if on Windows.

      Then you can define the actual class to use:

      That's it, pretty easy. The return value from the method will use the exit code of your Java method. Normally 0 is success and other values are errors (so if you exit the Java code with a specific error code value, you can return this code into a variable in OWB or perform a complex transition condition). Any standard output/error is also captured in the OWB activity log in the UI; for example, below you can see an exception that was thrown and also messages written to standard output/error:

       That's a quick insight into the Java activity in OWB.

      Connecting to Oracle Database Even if Background Processes are Killed

      Asif Momen - Wed, 2013-03-13 06:28
      Yesterday, I received an email update from the MOS Hot Topics email alert regarding a knowledge article which discusses how to connect to an Oracle database whose background processes have been killed.

      I bet every DBA has encountered this situation at least once. When I am in this situation, I normally use "shutdown abort" to stop the database and then proceed with a normal startup.

      After receiving the email, I thought of reproducing the same. My database (TGTDB) is running on RHEL 5.5. The goal is to kill all Oracle background processes and then try to connect to the database.

      Of course you don't want to test this in your production databases. 

      SQL> select * from v$version;

      Oracle Database 11g Enterprise Edition Release - 64bit Production
      PL/SQL Release - Production
      CORE      Production
      TNS for Linux: Version - Production
      NLSRTL Version - Production


      Below is the list of background processes for my test database "TGTDB":

      [oracle@ogg2 ~]$ ps -ef|grep TGTDB
      oracle    8249     1  0 01:35 ?        00:00:00 ora_pmon_TGTDB
      oracle    8251     1  0 01:35 ?        00:00:00 ora_psp0_TGTDB
      oracle    8253     1  0 01:35 ?        00:00:00 ora_vktm_TGTDB
      oracle    8257     1  0 01:35 ?        00:00:00 ora_gen0_TGTDB
      oracle    8259     1  0 01:35 ?        00:00:00 ora_diag_TGTDB
      oracle    8261     1  0 01:35 ?        00:00:00 ora_dbrm_TGTDB
      oracle    8263     1  0 01:35 ?        00:00:00 ora_dia0_TGTDB
      oracle    8265     1  6 01:35 ?        00:00:02 ora_mman_TGTDB
      oracle    8267     1  0 01:35 ?        00:00:00 ora_dbw0_TGTDB
      oracle    8269     1  1 01:35 ?        00:00:00 ora_lgwr_TGTDB
      oracle    8271     1  0 01:36 ?        00:00:00 ora_ckpt_TGTDB
      oracle    8273     1  0 01:36 ?        00:00:00 ora_smon_TGTDB
      oracle    8275     1  0 01:36 ?        00:00:00 ora_reco_TGTDB
      oracle    8277     1  1 01:36 ?        00:00:00 ora_mmon_TGTDB
      oracle    8279     1  0 01:36 ?        00:00:00 ora_mmnl_TGTDB
      oracle    8281     1  0 01:36 ?        00:00:00 ora_d000_TGTDB
      oracle    8283     1  0 01:36 ?        00:00:00 ora_s000_TGTDB
      oracle    8319     1  0 01:36 ?        00:00:00 ora_p000_TGTDB
      oracle    8321     1  0 01:36 ?        00:00:00 ora_p001_TGTDB
      oracle    8333     1  0 01:36 ?        00:00:00 ora_arc0_TGTDB
      oracle    8344     1  1 01:36 ?        00:00:00 ora_arc1_TGTDB
      oracle    8346     1  0 01:36 ?        00:00:00 ora_arc2_TGTDB
      oracle    8348     1  0 01:36 ?        00:00:00 ora_arc3_TGTDB
      oracle    8351     1  0 01:36 ?        00:00:00 ora_qmnc_TGTDB
      oracle    8366     1  0 01:36 ?        00:00:00 ora_cjq0_TGTDB
      oracle    8368     1  0 01:36 ?        00:00:00 ora_vkrm_TGTDB
      oracle    8370     1  0 01:36 ?        00:00:00 ora_j000_TGTDB
      oracle    8376     1  0 01:36 ?        00:00:00 ora_q000_TGTDB
      oracle    8378     1  0 01:36 ?        00:00:00 ora_q001_TGTDB
      oracle    8402  4494  0 01:36 pts/1    00:00:00 grep TGTDB
      [oracle@ogg2 ~]$ 

      Let us kill all these processes at once as shown below: 

      [oracle@ogg2 ~]$ kill -9 `ps -ef|grep TGTDB | awk '{print ($2)}'`
      bash: kill: (8476) - No such process
      [oracle@ogg2 ~]$ 

      Make sure no processes are running for our database:

      [oracle@ogg2 ~]$ ps -ef|grep TGTDB
      oracle    8520  4494  0 01:37 pts/1    00:00:00 grep TGTDB
      [oracle@ogg2 ~]$ 

      Now, try to connect to the database using SQL*Plus:

      [oracle@ogg2 ~]$ sqlplus "/as sysdba"

      SQL*Plus: Release Production on Wed Mar 13 01:38:12 2013

      Copyright (c) 1982, 2011, Oracle.  All rights reserved.

      Connected to:
      Oracle Database 11g Enterprise Edition Release - 64bit Production
      With the Partitioning, OLAP, Data Mining and Real Application Testing options


      Voila, I am connected. Not only do you get connected to the database, you can also query V$*, DBA_* and other application schema views/tables. Let's give it a try:

      SQL> select name from v$database;


      SQL> select name from v$tablespace;


      6 rows selected.

      SQL> select count(*) from dba_tables;


      SQL> select count(*) from test.emp;



      Let us try to update a record. 

      SQL> update test.emp  set ename = 'test' where eno = 2;

      1 row updated.


      Wow, one record was updated. But when you try to commit/rollback, the instance gets terminated. And it makes sense, as the background processes responsible for carrying out the change have all died.

      SQL> commit;
      ERROR at line 1:
      ORA-03113: end-of-file on communication channel
      Process ID: 8917
      Session ID: 87 Serial number: 7


      Following is the error message recorded in the database alert log:

      Wed Mar 13 01:41:44 2013
      USER (ospid: 8917): terminating the instance due to error 472
      Instance terminated by USER, pid = 8917

      The user (client) session was able to retrieve data from the database because the shared memory was still available, and the client session does not need the background processes for this task.

      The MOS article mentioned below discusses how to identify and kill the shared memory segment(s) allocated to the "oracle" user using UNIX/Linux commands.


      1. Successfully Connect to Database Even if Background Processes are Killed [ID 166409.1]
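      The usual approach, which I believe the note describes, is to find the orphaned segments with ipcs and remove them with ipcrm. A hedged sketch, demonstrated here on canned ipcs output (the ids and columns are illustrative; on a real server you would pipe ipcs -m directly):

```shell
# Canned "ipcs -m" output so the filter can be shown anywhere
ipcs_output='------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 32768      oracle     640        4096       0
0x00000000 65537      root       600        1024       2'

# Column 3 is the owner, column 2 the shmid: print oracle-owned segment ids
printf '%s\n' "$ipcs_output" | awk '$3 == "oracle" { print $2 }'

# On the real server (destructive - this wipes the SGA!):
# ipcs -m | awk '$3 == "oracle" { print $2 }' | xargs -r -n1 ipcrm -m
```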


      Chet Justice - Tue, 2013-03-12 22:35
      Back in September, I was asked, and agreed, to become the Content Chair for "The Traditional" track at Kscope 13. Like I mentioned there, I had been involved for the past couple of years and it seemed like a natural fit. Plus, I get to play with some really fun people. If you are ready to take advantage of Early Bird Registration, go here. (save $300)

      Over the past few weeks we've finalized (mostly) the Sunday Symposium schedule. We're currently working on finalizing Hands-on-Labs (HOL).

      Beginning last year, we've had the Oracle product teams run the Sunday Symposia. This gives them an opportunity to showcase their wares and (hopefully) provide a bit of a road map for the future of said wares. This year, we have three symposia: APEX; ADF and Fusion Development; and The Database and Developer's Toolbox.

      ADF and Fusion Development

      - Oracle Development Tools – Where are We and What’s Next - Bill Patakay, Oracle
      - How to Get Started with Oracle ADF – What Resources are Out There? - Shay Shmeltzer and Lynn Munsinger, Oracle
      - The Cloud and What it Means to Oracle ADF and Java Developers - Dana Singleterry, Oracle
      - Going Mobile – What to Consider Before Starting a Mobile Project - Joe Huang, Oracle
      - Understanding Fusion Middleware and ADF Integration - Frederic Desbiens, Lynn Munsinger, and Shay Shmeltzer, Oracle
      - Open Q&A with the ADF Product Management

      I love that they are opening up the floor to questions from their users. I wish more product teams would do that.

      Application Express

      - Oracle Database Tools - Mike Hichwa, Oracle
      - Technology for the Database Cloud - Rick Greenwald, Oracle
      - Developing Great User Interfaces with Application Express - Shakeeb Rahman, Oracle
      - How Do We Build the APEX Builder? - Vlad Uvarov, Oracle
      - How to Fully Utilize RESTful Web Services with Application Express - John Snyders, Oracle
      - Update from APEX Development - Joel Kallman, Oracle

      (If you see Joel Kallman out and about, make sure you mispronounce APEX). This is a fantastic group of people (minus Joel, of course). Not mentioned above is the affable David Peake, who helps put all this together. The community surrounding APEX is second to none.

      Finally, The Database and Developer's Toolbox. I'm partial to this one because I've been involved in the database track for the past couple of years. Like last year, this one is being put together by Kris Rice of Oracle. There are no session or abstract details for this one, as it will be based mainly on the upcoming 12c release of the database. However, we do have the list of speakers lined up. If you could only come for one day of this conference, Sunday would be the day, and this symposium would be the one to attend.

      This symposium will start off with Mike Hichwa (above) and then transition to the aforementioned (too many big words tonight) Mr. Rice. He'll be accompanied by Jeff Smith of SQL Developer fame, Maria Colgan from the Optimizer team, and Tom Kyte.

      How'd we do? I think pretty darn good.

      Don't forget to sign up. Early Bird Registration ends on March 25, 2013. Save $300.
      Categories: BI & Warehousing

      Starbucks 1TB cube in production

      Keith Laker - Tue, 2013-03-12 14:41
      Check out the customer snapshot Oracle has published, which describes the success Starbucks Coffee has achieved by moving their data warehouse to the Exadata platform, leveraging the Oracle Database OLAP Option and Oracle BIEE at the front end. 10,000 users in HQ and across thousands of store locations now have timely, accurate and calculation-rich information at their fingertips.

      Starbucks Coffee Company Delivers Daily, Actionable Information to Store Managers, Improves Business Insight with High Performance Data Warehouse
      ( http://www.oracle.com/us/corporate/customers/customersearch/starbucks-coffee-co-1-exadata-ss-1907993.html )

      By delivering extreme performance combined with the architectural simplicity and sophisticated multidimensional calculation power of the Database's in-database analytics, Starbucks' use of OLAP has enabled some outstanding results. Together with the power of other Oracle Database and Exadata features such as Partitioning, Hybrid Columnar Compression, Storage Indexes and Flash Memory, Starbucks is able to handle the constant growth in data volumes and end-user demands with ease.

      A great example of the power of the "Disk To Dashboard" capability of Oracle Business Analytics.
      Categories: BI & Warehousing

      OER for Fusion Application

      Oracle e-Business Suite - Mon, 2013-03-11 12:47

      The replacement for ETRM/iRep for Fusion Applications is Oracle Enterprise Repository (OER). You can access it using the following link.

      What Is OER?

      Very simply, this is a standalone catalog of technical information about Oracle's Applications products. For E-Business Suite users it equates to the iRepository tool (http://irep.oracle.com/index.html); for PeopleSoft it is similar to the PeopleSoft Interactive Services Repository.

      It contains a wealth of information, with the primary purpose of facilitating the creation of Application-to-Application integrations and of creating extensions and customizations. With this detailed technical knowledge of the inner workings and APIs available for Oracle Applications, a better level of code reuse and overall accuracy can be achieved.

      Accessing OER

      Access is available either from Oracle's globally shared public OER instance, or as part of your local Fusion Applications instance deployment. Details on creating a local OER installation can be found in the Oracle Fusion Middleware Installation Guide for Oracle Enterprise Repository (E15745-07). The URLs for OER will be:

      An OER login may be required, although Oracle’s public instance also supports guest access at this time.

      OER catalogs technical components by various attributes, with the key ones being Name, Type, and Logical Business Area (LBA). LBA is the lowest level of the Fusion Applications Taxonomy and is used to tag each technical object with the feature and product that owns it and that it is associated with.

      The general keyword search actually uses indexes of all the fields/attributes associated with an entry.

      Whilst the basic Asset Search should suffice in most cases, and has a simpler UI, the Browse feature (IE required) provides many powerful features and graphical views, including an object hierarchy and the Navigator to display objects related to each other.


      Oracle Enterprise Repository

      How To Get The Most From Oracle Enterprise Repository For Troubleshooting Fusion Applications [ID 1399910.1]

      Categories: APPS Blogs

      7 things that can go wrong with Ruby 1.9 string encodings

      Raimonds Simanovskis - Sun, 2013-03-10 17:00

      Good news, I am back to blogging :) In recent years I have spent my time primarily on eazyBI business intelligence application development, where I use JRuby, Ruby on Rails, mondrian-olap and many other technologies and libraries, and I have gathered new experience that I wanted to share with others.

      Recently I migrated eazyBI from JRuby 1.6.8 to the latest JRuby 1.7.3 and finally switched from Ruby 1.8 mode to Ruby 1.9 mode. The initial migration was not too difficult and was done in one day (thanks to unit tests, which caught the majority of differences between Ruby 1.8 and 1.9 syntax and behavior).

      But just when I thought everything was working fine, I ran into quite a few issues related to Ruby 1.9 string encodings which, unfortunately, were caught neither by the test suite nor by my initial manual tests. Therefore I wanted to share these issues, which might help you avoid them in your Ruby 1.9 applications.

      If you are new to Ruby 1.9 string encodings then at first read some tutorials, for example Ruby 1.9 String, Ruby 1.9 Three Default Encodings, and the also useful Ruby 1.9 Encodings: A Primer and the Solution for Rails.

      1. Encoding header in source files

      I will start with the easy one - if you use any Unicode characters in your Ruby source files then you need to add

      # encoding: utf-8

      magic comment line at the beginning of your source file. This one was easy, as it was caught by unit tests :)

      2. Nokogiri XML generation

      The next issues were with XML generation using the Nokogiri gem when the XML contains Unicode characters. For example,

      require "nokogiri"
      doc = Nokogiri::XML::Builder.new do |xml|
        xml.dummy :name => "āčē"
      end
      puts doc.to_xml

      will give the following result when using MRI 1.9:

      <?xml version="1.0"?>
      <dummy name="&#x101;&#x10D;&#x113;"/>

      which might not be what you expect if you would like the Unicode characters in the generated XML file to use UTF-8 encoding as well. If you execute the same Ruby code on JRuby 1.7.3 in the default Ruby 1.9 mode then you get:

      <?xml version="1.0"?>
      <dummy name="āčē"/>

      which seems OK. But it is actually not OK if you look at the generated string's encoding:

      doc.to_xml.encoding # => #<Encoding:US-ASCII>
      doc.to_xml.inspect  # => "<?xml version=\"1.0\"?>\n<dummy name=\"\xC4\x81\xC4\x8D\xC4\x93\"/>\n"

      In the case of JRuby you can see that the doc.to_xml encoding is US-ASCII (a 7-bit encoding) but the actual content uses UTF-8 8-bit encoded characters. As a result you might get ArgumentError: invalid byte sequence in US-ASCII exceptions later in your code.

      Therefore it is better to tell Nokogiri explicitly that you would like to use UTF-8 encoding in generated XML:

      doc = Nokogiri::XML::Builder.new(:encoding => "UTF-8") do |xml|
        xml.dummy :name => "āčē"
      end
      doc.to_xml.encoding # => #<Encoding:UTF-8>
      puts doc.to_xml
      # <?xml version="1.0" encoding="UTF-8"?>
      # <dummy name="āčē"/>
      3. CSV parsing

      If you do CSV file parsing in your application then the first thing you have to do is replace the FasterCSV gem (which you probably used in your Ruby 1.8 application) with the standard Ruby 1.9 CSV library.

      If you process user-uploaded CSV files then a typical problem is that even if you ask for files in UTF-8 encoding, you will quite often receive files in different encodings (as Excel is quite bad at producing UTF-8 encoded CSV files).

      If you used the FasterCSV library with non-UTF-8 encoded strings then you got an ugly result, but nothing blew up:

      FasterCSV.parse "\xE2"
      # => [["\342"]]

      If you do the same in Ruby 1.9 with the CSV library then you will get an ArgumentError exception:

      CSV.parse "\xE2"
      # => ArgumentError: invalid byte sequence in UTF-8

      It means that you now need to rescue and handle ArgumentError exceptions in all places where you parse user-uploaded CSV files, so that you can show user-friendly error messages.

      The problem with the standard CSV library is that it does not handle ArgumentError exceptions and does not wrap them in a MalformedCSVError exception with the line number where the error happened (as is done for other CSV format errors), which makes debugging very hard. Therefore I also "monkey patched" the CSV#shift method to add ArgumentError exception handling.
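      A minimal sketch of the rescue approach described above (the parse_uploaded_csv helper name and error message are my own, not from the original code); note that newer versions of the csv gem raise a CSV::MalformedCSVError subclass instead of ArgumentError for invalid encodings, so both are rescued here:

      ```ruby
      require "csv"

      # Hypothetical helper: wrap CSV parsing of user-uploaded content and
      # turn low-level encoding errors into a user-friendly message.
      # Ruby 1.9 raised ArgumentError for invalid byte sequences; newer csv
      # gem versions raise a CSV::MalformedCSVError subclass instead.
      def parse_uploaded_csv(content)
        CSV.parse(content)
      rescue ArgumentError, CSV::MalformedCSVError
        raise "Uploaded file is not valid UTF-8 - please re-save it with UTF-8 encoding"
      end

      parse_uploaded_csv("a,b\n1,2")  # => [["a", "b"], ["1", "2"]]
      ```

      In a Rails controller the rescued message can then be rendered back to the user instead of a 500 error page.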

      4. YAML serialized columns

      ActiveRecord has a standard way to serialize more complex data types (like Array or Hash) in a database text column. You use the serialize method to declare serializable attributes in your ActiveRecord model class definition. By default the YAML format (using the YAML.dump method) is used to serialize a Ruby object to text that is stored in the database.

      But you can get big problems if your data contains strings with Unicode characters, as the YAML implementation changed significantly between Ruby 1.8 and 1.9:

      • Ruby 1.8 used so-called Syck library
      • JRuby in 1.8 mode used a Java based implementation that tried to act like Syck
      • Ruby 1.9 and JRuby in 1.9 mode use new Psych library

      Let's see what happens with YAML serialization of a simple Hash with a string value that contains Unicode characters.

      On MRI 1.8:

      YAML.dump({:name => "ace āčē"})
      # => "--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n"

      On JRuby 1.6.8 in Ruby 1.8 mode:

      YAML.dump({:name => "ace āčē"})
      # => "--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n"

      On MRI 1.9 or JRuby 1.7.3 in Ruby 1.9 mode:

      YAML.dump({:name => "ace āčē"})
      # => "---\n:name: ace āčē\n"

      So, as we see, all results are different. Now let's see what happens after we migrate our Rails application from Ruby 1.8 to Ruby 1.9. All the data in the database was serialized using the old YAML implementations, but now when loaded in our application it is deserialized using the new Ruby 1.9 YAML implementation.

      When using MRI 1.9:

      YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")
      # => {:name=>"ace \xC4\x81\xC4\x8D\xC4\x93"}
      YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")[:name].encoding
      # => #<Encoding:ASCII-8BIT>

      So the string that we get back from the database is no longer in UTF-8 encoding but in ASCII-8BIT encoding, and when we try to concatenate it with UTF-8 encoded strings we get Encoding::CompatibilityError: incompatible character encodings: ASCII-8BIT and UTF-8 exceptions.
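      This failure mode is easy to reproduce without a database round-trip; a self-contained illustration that I constructed from the post's example bytes:

      ```ruby
      # The bytes Psych returns for the old !binary data, tagged ASCII-8BIT,
      # next to an ordinary UTF-8 string literal.
      from_yaml = "ace \xC4\x81\xC4\x8D\xC4\x93".b  # encoding: ASCII-8BIT
      suffix    = " āčē"                            # encoding: UTF-8

      begin
        from_yaml + suffix
      rescue Encoding::CompatibilityError => e
        puts e.message  # incompatible character encodings: ASCII-8BIT and UTF-8
      end
      ```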

      When using JRuby 1.7.3 in Ruby 1.9 mode the result is again different:

      YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")
      # => {:name=>"ace Ä\u0081Ä\u008DÄ\u0093"}
      YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")[:name].encoding
      # => #<Encoding:UTF-8>

      So now the result string has UTF-8 encoding but the actual content is damaged. It means that we will not even get exceptions when concatenating the result with other UTF-8 strings; we will just see some strange garbage instead of the Unicode characters.

      The problem is that there is no good way to convert your database data from the old YAML serialization to the new one. On MRI 1.9 it is at least possible to switch YAML back to the old Syck implementation, but on JRuby 1.7 in Ruby 1.9 mode it is not possible to switch to the old Syck implementation.

      My current workaround is a modified serialization class which I use in all model class definitions (this works in Rails 3.2, and maybe in earlier Rails 3.x versions as well):

      serialize :some_column, YAMLColumn.new

      The YAMLColumn implementation is a copy of the original ActiveRecord::Coders::YAMLColumn implementation. I modified the load method as follows:

      def load(yaml)
        return object_class.new if object_class != Object && yaml.nil?
        return yaml unless yaml.is_a?(String) && yaml =~ /^---/
        begin
          # if the YAML string contains old Syck-style encoded UTF-8 characters
          # then replace them with the corresponding UTF-8 characters
          # FIXME: is there a better alternative to eval?
          if yaml =~ /\\x[0-9A-F]{2}/
            yaml = yaml.gsub(/(\\x[0-9A-F]{2})+/){|m| eval "\"#{m}\""}.force_encoding("UTF-8")
          end
          obj = YAML.load(yaml)
          unless obj.is_a?(object_class) || obj.nil?
            raise SerializationTypeMismatch,
              "Attribute was supposed to be a #{object_class}, but was a #{obj.class}"
          end
          obj ||= object_class.new if object_class != Object
          obj
        rescue *RESCUE_ERRORS
          yaml
        end
      end

      Currently this patched version works on JRuby, where non-ASCII characters are just replaced by \xNN style fragments (a byte with hex code NN). When loading existing data from the database we check if it has any such \xNN fragments, and if so these fragments are replaced with the corresponding UTF-8 encoded characters. If anyone has a better suggestion for an implementation that avoids eval then please let me know in the comments :)

      If you need to do something similar on MRI then you would probably need to check whether the database text contains a !binary | fragment and, if so, somehow transform it into the corresponding UTF-8 string. Does anyone have a working example for this?
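      One possible sketch for MRI (entirely my own; fix_binary_utf8 is an invented name): after YAML.load has decoded the !binary data into ASCII-8BIT strings, walk the result and re-tag strings as UTF-8 whenever their bytes actually form valid UTF-8:

      ```ruby
      # fix_binary_utf8 is my own sketch: recursively re-tag ASCII-8BIT strings
      # (as produced by Psych for old Syck "!binary" data) as UTF-8 when the
      # bytes actually form valid UTF-8; other values pass through unchanged.
      def fix_binary_utf8(obj)
        case obj
        when String
          if obj.encoding == Encoding::ASCII_8BIT
            utf8 = obj.dup.force_encoding(Encoding::UTF_8)
            return utf8 if utf8.valid_encoding?
          end
          obj
        when Hash
          obj.each_with_object({}) { |(k, v), h| h[fix_binary_utf8(k)] = fix_binary_utf8(v) }
        when Array
          obj.map { |e| fix_binary_utf8(e) }
        else
          obj
        end
      end

      # The hash below mimics what YAML.load returns for the old "!binary" data.
      data  = { :name => "ace \xC4\x81\xC4\x8D\xC4\x93".b }
      fixed = fix_binary_utf8(data)
      fixed[:name]           # => "ace āčē"
      fixed[:name].encoding  # => #<Encoding:UTF-8>
      ```

      The valid_encoding? guard means genuinely binary values (which are not valid UTF-8) are left untouched, so the conversion should be safe to apply blindly inside a load method.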

      5. Sending binary data with default UTF-8 encoding

      I am using the spreadsheet gem to generate dynamic Excel export files. The following code was used to get the generated spreadsheet as a String:

      book = Spreadsheet::Workbook.new
      # ... generate spreadsheet ...
      buffer = StringIO.new
      book.write buffer
      buffer.string # the spreadsheet content as a String

      And then this string was sent back to browser using controller send_data method.

      The problem was that in Ruby 1.9 mode StringIO will by default generate strings with UTF-8 encoding. But the Excel format is a binary format, and as a result send_data failed with exceptions saying that the UTF-8 encoded string contains non-UTF-8 byte sequences.

      The fix was to set the StringIO buffer encoding to ASCII-8BIT (or you can use the alias BINARY):

      buffer = StringIO.new
      buffer.set_encoding("ASCII-8BIT")

      So you need to remember that in all places where you handle binary data you cannot use strings with the default UTF-8 encoding; you need to specify the ASCII-8BIT encoding.
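      A quick self-contained check of this behaviour (my own snippet, not from the post):

      ```ruby
      require "stringio"

      # By default a StringIO buffer uses UTF-8 in Ruby 1.9 mode;
      # set_encoding switches it (and its underlying string) to a binary buffer.
      buffer = StringIO.new
      buffer.set_encoding("ASCII-8BIT")
      buffer.write("\xD0\xCF\x11\xE0".b)  # raw bytes, e.g. the start of an .xls file
      buffer.string.encoding.name  # => "ASCII-8BIT"
      ```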

      6. JRuby Java file.encoding property

      The last two issues are JRuby and Java specific. Java has the system property file.encoding which is not related just to file encoding, but determines the default character set and string encoding in many places.

      If you do not specify file.encoding explicitly then the Java VM will try to determine its default value on startup based on the host operating system's locale. On Linux it might be set to UTF-8; on Mac OS X it will default to MacRoman; on Windows it will depend on the default locale setting (which will not be UTF-8). Therefore it is always better to set the file.encoding property explicitly for Java applications (e.g. using the -Dfile.encoding=UTF-8 command line flag).

      file.encoding determines which default character set the java.nio.charset.Charset.defaultCharset() method call returns. And even if you change the file.encoding property at runtime, it will not change the java.nio.charset.Charset.defaultCharset() result, which is cached during startup.

      JRuby uses java.nio.charset.Charset.defaultCharset() in many places to get the default system encoding, and uses it in many places when constructing Ruby strings. If java.nio.charset.Charset.defaultCharset() does not return the UTF-8 character set then it might cause problems when using Ruby strings with UTF-8 encoding. Therefore the JRuby startup scripts (jruby, jirb and others) always set the file.encoding property to UTF-8.

      So if you start your JRuby application in standard way using jruby script then you should have file.encoding set to UTF-8. You can check it in your application using ENV_JAVA['file.encoding'].

      But if you start your JRuby application in a non-standard way (e.g. you have a JRuby based plugin for some other Java application) then you might not have file.encoding set to UTF-8, and then you need to worry about it :)

      7. JRuby Java string to Ruby string conversion

      I ran into a file.encoding related issue in the eazyBI reports and charts plugin for JIRA. In this case the eazyBI plugin is an OSGi based plugin for the JIRA issue tracking system, and JRuby runs as a scripting container inside the OSGi bundle.

      The JIRA startup scripts do not specify a file.encoding default value, so it is typically set to the operating system's default value. For example, on my Windows test environment it is set to the Windows-1252 character set.

      If you call methods of Java objects from JRuby then it will automatically convert java.lang.String objects to Ruby String objects, but the Ruby strings in this case will use an encoding based on java.nio.charset.Charset.defaultCharset(). So even though a Java string (which internally uses the UTF-16 character set) can contain any Unicode character, it will be returned to Ruby not as a string with UTF-8 encoding but, in my case, with Windows-1252 encoding. As a result all Unicode characters which are not in the Windows-1252 character set will be lost.

      And this is very bad, because everywhere else JIRA does not use java.nio.charset.Charset.defaultCharset() and can handle and store all Unicode characters even when file.encoding is not set to UTF-8.

      Therefore I finally managed to create a workaround which forces all Java strings to be converted to Ruby strings using UTF-8 encoding.

      I created custom Java string converter based on standard one in org.jruby.javasupport.JavaUtil class:

      package com.eazybi.jira.plugins;

      import org.jruby.javasupport.JavaUtil;
      import org.jruby.Ruby;
      import org.jruby.RubyString;
      import org.jruby.runtime.builtin.IRubyObject;

      public class RailsPluginJavaUtil {
          public static final JavaUtil.JavaConverter JAVA_STRING_CONVERTER = new JavaUtil.JavaConverter(String.class) {
              public IRubyObject convert(Ruby runtime, Object object) {
                  if (object == null) return runtime.getNil();
                  // PATCH: always convert a Java string to a Ruby string with UTF-8 encoding
                  // return RubyString.newString(runtime, (String)object);
                  return RubyString.newUnicodeString(runtime, (String)object);
              }
              public IRubyObject get(Ruby runtime, Object array, int i) {
                  return convert(runtime, ((String[]) array)[i]);
              }
              public void set(Ruby runtime, Object array, int i, IRubyObject value) {
                  ((String[]) array)[i] = (String) value.toJava(String.class);
              }
          };
      }

      Then, in my plugin initialization Ruby code, I dynamically replaced the standard Java string converter with my customized converter:

      java_converters_field = org.jruby.javasupport.JavaUtil.java_class.declared_field("JAVA_CONVERTERS")
      java_converters_field.accessible = true
      java_converters = java_converters_field.static_value.to_java
      java_converters.put(java.lang.String.java_class, com.eazybi.jira.plugins.RailsPluginJavaUtil::JAVA_STRING_CONVERTER)

      And as a result all Java strings returned by Java methods are now converted to Ruby strings using UTF-8 encoding, and not the encoding from the file.encoding Java property.

      Final thoughts

      My main conclusions from solving all these string encoding issues are the following:

      • Use UTF-8 encoding as much as possible. Handling conversions between different encodings will be much harder than you expect.
      • Use example strings with Unicode characters in your tests. I did not identify all these issues initially when running tests after the migration because not all tests used example strings with Unicode characters. So next time, instead of using a "dummy" string in your test, use "dummy āčē" everywhere :)

      And please let me know (in comments) if you have better or alternative solutions for the issues that I described here.

      Categories: Development
