Feed aggregator

Sum to Parent Nodes in Hierarchy Queries: CONNECT_BY_ROOT

Duncan Mein - Sun, 2009-03-29 06:49
In one of our applications, the table DEPT contains a self-referencing join (Pig's Ear) as it models our organisational department hierarchy.

For example:

CREATE TABLE DEPT
(
DEPT_ID NUMBER NOT NULL
,PARENT_ID NUMBER
,DEPT_NAME VARCHAR2 (100) NOT NULL
,EMPLOYEES NUMBER NOT NULL
,CONSTRAINT DEPT_PK PRIMARY KEY (DEPT_ID)
,CONSTRAINT DEPT_FK01 FOREIGN KEY (PARENT_ID)
REFERENCES DEPT (DEPT_ID)
);

INSERT INTO DEPT VALUES (1,NULL,'IT', 100);
INSERT INTO DEPT VALUES (2,1,'DEVELOPMENT', 12);
INSERT INTO DEPT VALUES (3,1,'SUPPORT', 15);
INSERT INTO DEPT VALUES (4,1,'TEST', 25);

INSERT INTO DEPT VALUES (5,2,'PL/SQL', 2);
INSERT INTO DEPT VALUES (6,2,'Java', 1);
INSERT INTO DEPT VALUES (7,2,'SQL', 11);
INSERT INTO DEPT VALUES (8,2,'C++', 3);

INSERT INTO DEPT VALUES (9,4,'Functional', 3);
INSERT INTO DEPT VALUES (10,4,'Non Functional', 5);

COMMIT;
A quick tree walk using CONNECT BY PRIOR shows you the Parent / Child relationships between all departments and the number of employees in each department:
SELECT rpad( ' ', 1*level, ' ' ) || dept_name dept_name
,employees
FROM dept
START WITH parent_id is null
CONNECT BY PRIOR dept_id = parent_id;

DEPT_NAME             EMPLOYEES
-------------------- ----------
 IT                         100
  DEVELOPMENT                12
   PL/SQL                     2
   Java                       1
   SQL                       11
   C++                        3
  SUPPORT                    15
  TEST                       25
   Functional                 3
   Non Functional             5

We had a requirement to work out the total number of employees at each parent level in the organisational hierarchy, i.e. for each parent node, the report should sum up the employees in that node and in all its children.

Taking the department TEST as an example, the report should sum the figures 25 (employees in the TEST department), 3 and 5 (employees in the Functional / Non Functional child departments) to give a figure of 33.
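
As a sanity check, the figure for a single subtree can be confirmed with a plain tree walk rooted at TEST (a minimal sketch against the sample data above, not part of the original article):

SELECT SUM(employees) total_employees
FROM dept
START WITH dept_name = 'TEST'
CONNECT BY PRIOR dept_id = parent_id;

-- returns 33 (25 + 3 + 5)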

This can easily be achieved by using CONNECT_BY_ROOT. Straight from the Oracle Documentation:

"CONNECT_BY_ROOT is a unary operator that is valid only in hierarchical queries. When you qualify a column with this operator, Oracle returns the column value using data from the root row. This operator extends the functionality of the CONNECT BY [PRIOR] condition of hierarchical queries.

Restriction on CONNECT_BY_ROOT You cannot specify this operator in the START WITH condition or the CONNECT BY condition.
"

To meet our requirement, CONNECT_BY_ROOT was utilised as follows:

select dept_name, employees, tot_employees
from (select employees,
             dept_name,
             level lev,
             sum(employees) over (partition by connect_by_root(dept_id)) tot_employees
      from dept
      -- note: no START WITH clause, so every row acts as the root of its own subtree
      connect by prior dept_id = parent_id)
where lev = 1;

DEPT_NAME             EMPLOYEES TOT_EMPLOYEES
-------------------- ---------- -------------
IT                          100           177
DEVELOPMENT                  12            29
SUPPORT                      15            15
TEST                         25            33
PL/SQL                        2             2
Java                          1             1
SQL                          11            11
C++                           3             3
Functional                    3             3
Non Functional                5             5

Book Review: Processing XML documents with Oracle JDeveloper 11g

Pawel Barut - Sun, 2009-03-29 05:55
Written by Paweł Barut
This time I would like to recommend the book Processing XML documents with Oracle JDeveloper 11g, published by PacktPub.com. As I have some experience with XML processing inside the Oracle DB (XML DB), I was lacking knowledge of XML manipulation in Java, especially using Oracle JDeveloper. This book is not really for beginners - you should have at least a theoretical knowledge of what XML is. From the book you can learn how to manage XML schemas in JDeveloper, then write XML documents based on those schemas, and finally validate them in JDeveloper or write Java code to validate them automatically. The chapters I value most are those describing how to generate PDF documents. Two methods are described - one using the Apache FOP API, and one using Oracle XML Publisher. As I had not been generating PDFs this way so far, I found those two chapters very interesting. There is also a chapter on generating Excel files by transforming XML using the Apache HSSF API.

The book is very practical. If you want to start with the subjects described above, then this book is for you. Of course, that does not mean you will not have to study the documentation; however, it will be easier for you to make the first steps. Besides describing Oracle JDeveloper, the author also shows how to install and use Oracle Berkeley DB XML to store, query and update XML documents - using the command line and the XML DB API.

I especially recommend the eBook version. All URLs in the eBook are active, so following links to web pages is very simple, and copying sample code is easy. This book also contains many practical and useful tips on XML processing and tools. So if these XML aspects are in your area of interest, it's definitely worth investing in this book.

Keep reading,
Paweł

    Categories: Development

    ODP.NET Tip: Bind Variables, the BindByName Property, and PL/SQL Function Return Values

    Mark A. Williams - Tue, 2009-03-24 20:25

    A question was posed as a follow-up to a previous post here:

    http://oradim.blogspot.com/2007/08/odpnet-tip-bind-variables-and.html

    The poster wanted to know if you have set BindByName to "true" for an OracleCommand object and the command text for that object calls a PL/SQL function (which of course has a return value but no name), what name should you give to the parameter object that will receive the return value? It's a good question since BindByName set to "true" implies that you've given names to your bind variables!

    The short answer is: It doesn't matter as it will be ignored anyway.

    As in The Highlander movies, with return values, there can be only one. You can set ParameterDirection to "ReturnValue" for multiple parameters, but it is an error to do so. Try it and see what errors are raised.

    Here's a quick little demo showing that it does not matter what name you give to the parameter that you declare as the return value.

    First create a simple PL/SQL function (so that there is a return value!) like this:

    create or replace function subtract_numbers(p1 in number, p2 in number) return number as
    begin
      return p1 - p2;
    end;
    /

    I've elected to do a simple bit of subtraction and this is for a reason. Since in subtraction the order of the parameters matters, it is easy to show that the parameters are being handled in the correct order. As you'll see in the sample code below, I use the values "48" and "2" so it would be obvious if the ordering were reversed.
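
    As a quick sanity check of the function itself before involving .NET (my addition, not part of the original post):

    select subtract_numbers(48, 2) as result from dual;

    -- expected: 46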

    Now, here's a simple bit of C# (the normal caveats about no error handling, etc. apply) to illustrate the point:

    using System;
    using System.Data;
    using Oracle.DataAccess.Client;
    using Oracle.DataAccess.Types;

    class Program
    {
      static void Main(string[] args)
      {
        // adjust connect string to suit your environment
        string constr = "user id=hr;" +
                        "password=hr;" +
                        "data source=orademo;" +
                        "pooling=false;" +
                        "enlist=false";

        // create and open connection
        OracleConnection con = new OracleConnection(constr);
        con.Open();

        // create command object
        OracleCommand cmd = con.CreateCommand();
        cmd.CommandText = "subtract_numbers";
        cmd.CommandType = CommandType.StoredProcedure;

        // set bindbyname to true so parameters can be added
        // in any order and to illustrate what happens with a
        // return value
        cmd.BindByName = true;

        // the first input parameter to the pl/sql function
        OracleParameter p1 = new OracleParameter();
        p1.Direction = ParameterDirection.Input;
        p1.OracleDbType = OracleDbType.Int64;
        p1.ParameterName = "p1";
        p1.Value = 48;

        // the second input parameter to the pl/sql function
        OracleParameter p2 = new OracleParameter();
        p2.Direction = ParameterDirection.Input;
        p2.OracleDbType = OracleDbType.Int64;
        p2.ParameterName = "p2";
        p2.Value = 2;

        // the return parameter from the pl/sql function
        OracleParameter p3 = new OracleParameter();
        p3.Direction = ParameterDirection.ReturnValue;
        p3.OracleDbType = OracleDbType.Int64;
        p3.ParameterName = "ignored";

        // add the parameters in a "wrong" order since
        // bindbyname is true -- this is key
        cmd.Parameters.Add(p1);  // input #1
        cmd.Parameters.Add(p3);  // return value
        cmd.Parameters.Add(p2);  // input #2

        // execute the stored pl/sql code
        cmd.ExecuteNonQuery();

        // write the result to the console window
        Console.WriteLine("The difference of {0} and {1} is {2}",
          p1.Value, p2.Value, p3.Value);

        // prevent console window from closing when run from VS
        Console.WriteLine();
        Console.Write("ENTER to continue...");
        Console.ReadLine();
        // clean up
        p3.Dispose();
        p2.Dispose();
        p1.Dispose();
        cmd.Dispose();
        con.Dispose();
      }
    }

    Notice how the parameters are added to the parameter collection "out of order". This is OK since BindByName is set to "true". You can comment out the line that sets BindByName and see an exception raised if you want. Anyway, when I execute this code, I get the expected results as follows:

    The difference of 48 and 2 is 46

    ENTER to continue...

    So, even though the return value parameter was added to the collection in the second position everything works as desired. Yes, but I gave it a name of "ignored" so how do we know that name is not really being used? Well there are two easy ways to find out. One, just don't give the parameter a name at all and everything should continue to execute just fine. It's easy to ignore a name that's not there! Or, two, if you are an inquisitive type, enable SQL tracing (search the web for 10046 trace -- you'll find a million hits) and see what ODP.NET actually sent to the database. Here's a snippet of a trace file from the above code (with the parameter name of "ignored" in place):

    PARSING IN CURSOR #2 len=54 dep=0 uid=82 oct=47 lid=82 tim=10601980885 hv=1771638930 ad='7ff39775518' sqlid='gx0kvypntk44k'
    Begin :ret := subtract_numbers(p1=>:v0, p2=>:v2); End;
    END OF STMT
    PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=10601980885
    BINDS #2:
    Bind#0
      oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
      oacflg=03 fl2=1000000 frm=00 csi=00 siz=72 off=0
      kxsbbbfp=1bb2dd60  bln=22  avl=00  flg=05
    Bind#1
      oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
      oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
      kxsbbbfp=1bb2dd78  bln=22  avl=02  flg=01
      value=48
    Bind#2
      oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
      oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=48
      kxsbbbfp=1bb2dd90  bln=22  avl=02  flg=01
      value=2

    See how ODP.NET generated a little anonymous block and substituted "ret" for the return value parameter? You should see the same thing even if there is no name given to the return parameter. You can also clearly see how the input parameter names are associated with the input values.
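
    If you want to reproduce such a trace yourself, one approach (a sketch on my part, not from the original post) is to execute the following through the same OracleConnection before calling the function, so that the trace applies to the ODP.NET session:

    alter session set events '10046 trace name context forever, level 4';

    -- level 4 includes bind values; turn tracing off again with:
    alter session set events '10046 trace name context off';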

    Check All / Uncheck All Checkbox

    Duncan Mein - Tue, 2009-03-24 09:14
    There is a really cool JavaScript function in Apex called $f_CheckFirstColumn that allows you to check / uncheck all checkboxes that exist in the first column position of a Tabular Form / Report.

    To implement this, all you need to do is add the following HTML to the column heading of the first column in your Tabular Form (i.e. the checkbox column):

    <input type="Checkbox" onclick="$f_CheckFirstColumn(this)">

    Check out an example here
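
    For context, the checkbox column itself is typically generated in the report query with the APEX_ITEM API - a rough sketch under my own assumptions (table and column names are made up, not from the post):

    select apex_item.checkbox(1, empno) as select_emp,
           ename,
           sal
    from   emp;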

    Data Mining Survey - Last Call

    Marcos Campos - Tue, 2009-03-24 08:41
    Rexer Analytics has just issued a last call for its annual data mining survey. This is a pretty nice survey that provides a great deal of valuable information about how data mining is used and who is doing it. To participate, please click on the link below and enter the access code in the space provided. The survey should take approximately 20 minutes to complete. At the end of the survey you...
    Categories: BI & Warehousing

    Ada Lovelace Day

    Stephen Booth - Tue, 2009-03-24 07:23
    Apparently today is Ada Lovelace Day. Ada Lovelace (aka Augusta Ada King, Countess of Lovelace) is widely recognised as the first computer programmer. The daughter of Lord Byron, she was encouraged away from her father's dissolute lifestyle by her mother, who encouraged her interest in mathematics. Ada excelled at mathematics and became a friend of Charles Babbage. When Babbage created his...

    ActiveRecord Oracle enhanced adapter also on JRuby and Ruby 1.9

    Raimonds Simanovskis - Sun, 2009-03-22 17:00

    So far if you wanted to use Ruby on Rails on Oracle database you needed to use different adapters depending on the Ruby platform that you wanted to use. If you used original MRI (Matz Ruby interpreter) 1.8.6 then hopefully you were using Oracle enhanced adapter. But if you wanted to use JRuby then you needed to use JDBC adapter that is maintained by JRuby team (and which sometimes might work differently than Oracle enhanced adapter). And if you wanted to use new Ruby 1.9.1 then you were out of luck as no adapter supported it.

    Therefore I wanted to announce great news that ActiveRecord Oracle enhanced adapter 1.2.0 is released and it supports all three major Ruby platforms!

    • Use Oracle enhanced adapter on MRI 1.8.6 with ruby-oci8 1.0.x library or gem
    • Use Oracle enhanced adapter on JRuby (so far tested with 1.1.6) with JDBC Oracle driver
    • Use Oracle enhanced adapter on Ruby/YARV 1.9.1 with ruby-oci8 2.0 library or gem

    This provides you with much more flexibility to develop on one Ruby platform but deploy on another and on all three platforms you can use the same additional functionality that Oracle enhanced adapter provides on top of standard ActiveRecord functionality.

    And during testing of the Oracle enhanced adapter on all platforms an additional milestone was achieved - the Oracle enhanced adapter passes 100% of the ActiveRecord unit tests! But to be honest I have to admit that I needed to patch quite a few unit tests for Oracle specifics, as not all SQL that runs on MySQL is also valid on Oracle. I published my patched branch of the ActiveRecord unit tests at my GitHub fork of Rails - you can clone the repository and verify it yourself.

    So please try out new version of Oracle enhanced adapter on any Ruby platform:

    gem install activerecord-oracle_enhanced-adapter

    If you have any questions please use the discussion group or post comments here. In the near future I will also add more instructions on how to install the Oracle enhanced adapter on JRuby and Ruby 1.9.1 to the GitHub wiki page.

    Categories: Development

    Rollback segments explained

    Oracle WTF - Fri, 2009-03-20 02:57

    I recently read this in a book about data warehousing:

    Source System Rollback Segments

    When extracting from a relational source, extracts that take a long time can be problematic. If an extract asks for all records updated in the last 24 hours, the system must locate the appropriate set of records. This means that no user can change the updated_date field while your query is being processed. As transactions flow in during your query, they are queued up in a separate place called a rollback segment, to be applied once your request is finished. If your query takes too long, this queue gets too large and runs out of space. The system then kills your job and processes the transactions that have been queued up. In general, the folks responsible for the transaction system don't like this kind of behavior.

    Now to be fair, the book was published in 1998 and is not specifically about Oracle. Does anyone know whether there has ever been a commercial DBMS that worked anything like this, or did they just make it up?

    Core ADF 11: Designing the Page and Taskflow Structure

    JHeadstart - Fri, 2009-03-20 00:53

    JDeveloper's tagline is "productivity with choice". In previous releases this choice merely applied to the various technologies that could be used in each of the ADF layers. With release 11 this has changed. When choosing the "Fusion" technology stack within ADF (ADF Faces, ADF Task Flows and ADF Business Components), there are many new and powerful concepts and techniques available that allow you to build the required functionality in different ways. Concepts like (un)bounded taskflows, page fragments, (dynamic) regions, page templates, declarative components, declarative transaction support, XML Menu Model, ADF Faces Layout Components and ADF Libraries open up a wide range of new possibilities to build the view and controller layers.

    These new possibilities imply that new design decisions must be made on how to build the required functionality. This is not a trivial task. Many of the concepts relate to each other, and choosing how to use one concept can have impact on the usage of other concepts. I admit that's pretty vague, but in this new "Core ADF 11" series, we (the JHeadstart team) will try to make this more concrete. In this series we will share with you the experiences, issues, challenges, design considerations, design decisions and best practices that we faced/encountered/developed when building JHeadstart release 11.

    Note that we will assume that you have a basic understanding of the new ADF 11 concepts. If this is not the case, we recommend you take the ADF 11 course from Oracle University, or if you prefer self-study, take a look at the ADF Learning Center.

    In this first post, we will start at the beginning: you want to build your first "serious" ADF 11 page, and you immediately need to choose: do I create a page or a page fragment, do I base the page (fragment) on a template, and do I put the page in an unbounded or bounded task flow.

    Page Templates
    This one is easy: we recommend to always use a page template. Design the template whatever way you want but make sure you include all the UI elements that need to be displayed on each and every page. Typical content of a page template include a branding image/application logo, global actions like Home, Help, Log Out, Preferences, the menu of the application (either a tree menu, or various levels of navigation panes like tabs, buttons or lists) and a footer area with for example a copyright message. Use page template attributes to pass in properties that might differ between pages like the page title, and to conditionally show/hide parts of the page template for specific pages. In a later post, we will go into more details on designing page templates.
    Note that you can also dynamically change the page template associated with a page at runtime. Instead of hardcoding the page template path, you can use an EL expression that binds to a look and feel bean that contains the current page template path.

    Bounded or Unbounded Taskflows
    Each ADF 11 web application has one unbounded taskflow, and can have one or more bounded task flows. So, the choice is not so much about using bounded or unbounded task flows, but about putting a JSF page inside the unbounded taskflow, or inside a bounded task flow.
    When building transactional applications, bounded task flows have major advantages over unbounded task flows. For a start, bounded task flows support additional features that can be very useful when building transactional applications:

    • Save for Later
    • Declarative Transaction Support
    • Exception Handling
    • Own Memory Scope
    • Back button handling

    In addition, bounded task flows allow you to benefit from concepts well-known in the world of object oriented programming and component-based development. Using bounded task flows:

    • you can create truly reusable user interface components through page fragments and regions;
    • with a well-defined service contract; through taskflow parameters you can configure the behavior of the taskflow;
    • that are loosely coupled; to use a task flow, you only need to know about the available task flows parameters. The internal flow, structure of user interface components and data structures can be changed without any impact on the pages or other taskflows that are using/calling the taskflow.

    In short, well-designed bounded task flows have many advantages over the unbounded task flow. We recommend using the unbounded task flow only for introductory pages like the home page or a UI shell page when using regions (see below). All application use cases should be implemented using bounded task flows. In a later post in this series, we will discuss in more detail how to design your task flows for maximum reusability.

    Stand-alone Pages or Page Fragments with (Dynamic) Regions
    For your page design, you basically have three choices:

    • Create stand-alone .jspx pages
    • Create .jsff page fragments within a region, and create a shell page for each region that contains the af:region tag.
    • Create .jsff page fragments within a region, and create one common UI shell .jspx page, that uses a dynamic region to display the requested region. This is the "One Page Application" model, where the UI shell page is the only .jspx page in the application, and a menu displayed in the shell page is used to set the current region displayed on the page.

    Some considerations and implications to take into account for each choice:
    Stand-alone pages:

    • Stand-alone pages are easiest to build and understand, it is like building your app the "10.1.3 way".
    • Stand-alone pages offer little reuse, they cannot be used inside regions
    • When you are building a menu-driven application, and each menu entry starts a bounded taskflow with stand-alone pages, there is no easy way to abandon the current task flow. When the user wants to leave the current task flow and clicks on another menu entry, nothing will happen! This is because the control flow case to start the taskflow associated with the other menu item is defined in the unbounded task flow, which is not reachable once a bounded taskflow is started. The easy solution is to not use bounded task flows at all. You will lose all the advantages of bounded task flows, but when simplicity is more important to you than reusability, this is a valid choice. We discuss a more sophisticated solution to the "abandon taskflow with stand-alone pages" issue in this post of the ADF 11 Core series.
    • When you are using the XMLMenuModel to create your menu, you can associate one focusViewId with each menu item. If you navigate to the page with this view id, the associated menu item will be highlighted automatically. However, if you then navigate to a detail page from the focusViewId page, you lose the menu item highlighting. In this post, we discuss how you can extend the XMLMenuModel to associate multiple pages with one menu item.

    Static regions with shell page:

    • With this design, you leverage the reuse options of bounded task flows with page fragments
    • You do not run into the XMLMenuModel multi-page problem, since each menu item is associated with the shell page which remains the same while navigating between the page fragments within the ADF region embedded in the shell page
    • You do not run into the "abandon task flow" issue that occurs with stand-alone pages, since the shell pages are all located in the unbounded task flow.
    • If you navigate away to another shell page, you need to be aware that ADF will auto-execute some logic to clean up the abandoned task flow of the region in the previous shell page. The actual clean-up logic executed depends on the transaction settings of the task flow. For example, if the task flow started a new transaction, a rollback will be performed. Note that when using shared data controls, the rollback might have a performance impact since already executed queries in other task flows will need to be re-executed as well. In a later post, we will discuss taskflow transaction management in more detail.

    One-page application:
    With the previous design option, a shell page is created for each ADF region. Typically, all these shell pages will be very thin and very similar: they are based on the same page template that renders the menu and other common page components, the only difference is the name of the ADF region that is embedded in the shell page. In a one page application, we create one UI Shell page and use a dynamic region to display the correct region. Here are the implications:

    • Performance increases because partial page rendering can be used to replace the content of the dynamic region, the UI shell page does not need to be reloaded. In our experience, the performance difference is quite significant. However, according to the documentation there is a feature called Partial Page Navigation which would allow for a similar performance when using separate shell pages or stand-alone pages. I did not get this feature to work though.
    • The one-page application raises a problem with XMLMenuModel, opposite to the stand-alone pages design: there is only one page used by all menu items. This can be solved by creating a subclass of XMLMenuModel that highlights the correct menu item based on the currently selected region as set in the dynamic region bean. We will discuss this technique in more detail in a future post.
    • When using a dynamic region, the parameters for all region task flows can be defined against the dynamic task flow binding in the page definition of the UI shell page. This quickly becomes ugly and unmanageable: one long list of parameters for all task flows in the application. A better approach is to use the parameter map property of the dynamic task flow binding, and reference the parameter map of the current region through the dynamic region bean. This technique is discussed in more detail in this post.

    A variation of this implementation is the use of dynamic tabs rather than a dynamic region. This enables multi-tasking for the end user, see this post for more details on the implementation of dynamic tabs.  

    Summary
    While JDeveloper makes it easy to later on convert stand-alone pages to fragments, and unbounded task flows to bounded task flows, it is better to make these design decisions upfront. The choices you make have impact on the usage of XMLMenuModel, menu-driven navigation in general, reusability, transaction management and performance.
    We recommend to use bounded task flows with page fragments to maximize reusability. While traditional transactional applications are mostly menu-driven, composite, task-oriented user interfaces are likely to become mainstream in the future. When you start building your ADF 11 app today using (dynamic) regions accessible from a menu, you can reuse the same task flows when building more task-oriented user interfaces using the WebCenter and Human Workflow capabilities planned for JDeveloper 11 release 2.

    Categories: Development

    New Oracle Public Yum Server

    Sergio's Blog - Thu, 2009-03-19 09:01

    We've just launched a new public yum server for Oracle Enterprise Linux and Oracle VM. This yum server offers a free and convenient way to install packages from the Enterprise Linux and Oracle VM installation media via a yum client.
    To get started: public-yum.oracle.com

    Categories: DBA Blogs

    Corruption in redo log file when implementing Physical Standby

    Aviad Elbaz - Tue, 2009-03-17 10:55

    Lately I started implementing Data Guard - Physical Standby - as a DRP environment for our production E-Business Suite database, and I must share with you one issue I encountered during the implementation.

    I chose one of our test environments as the primary instance, and I used a new server, which had been prepared as the standby database server for production, as the server for the standby database in test. Both are Red Hat Enterprise Linux 4.

    The implementation process went fast with no special issues (at least I thought so...); everything seemed to work fine: archived logs were transmitted from the primary server to the standby server and successfully applied on the standby database. I even executed a switchover to the standby server (both database and application tier), and a switchover back to the primary server, with no problems.

    The standby database was configured for maximum performance mode, I also created standby redo log files and LGWR was set to asynchronous (ASYNC) network transmission.

    The exact setting from init.ora file:
    log_archive_dest_2='SERVICE=[SERVICE_NAME] LGWR ASYNC=20480 OPTIONAL REOPEN=15 MAX_FAILURE=10 NET_TIMEOUT=30'

    At this stage, when the major part of the implementation had been done, I found some time to deal with some other issues, like interfaces to other systems, scripts, configuring rsync for concurrent log files, etc., and some modifications to the setup document I wrote during the implementation.

    While dealing with those other issues, I left the physical standby instance active so archive log files were transmitted and applied on the standby instance. After a couple of hours I noticed the following error in the primary database alert log file:

    ARC3: Log corruption near block 146465 change 8181238407160 time ?
    Mon Mar  2 13:04:43 2009
    Errors in file [ORACLE_HOME]/admin/[CONTEXT_NAME]/bdump/[sid]_arc3_16575.trc:
    ORA-00354: corrupt redo log block header
    ORA-00353: log corruption near block 146465 change 8181238407160 time 02/03/2009 11:57:54
    ORA-00312: online log 3 thread 1: '[logfile_dir]/redolog3.ora'
    ARC3: All Archive destinations made inactive due to error 354
    Mon Mar  2 13:04:44 2009
    ARC3: Closing local archive destination LOG_ARCHIVE_DEST_1: '[archivelog_dir]/arch_[xxxxx].arc' (error 354)([SID])
    Committing creation of archivelog '[archivelog_dir]/arch_[xxxxx].arc' (error 354)
    ARCH: Archival stopped, error occurred. Will continue retrying
    Mon Mar  2 13:04:45 2009
    ORACLE Instance [SID] - Archival Error

    I don't remember if I've ever had a corruption in a redo log file before...
    What is wrong?! Is it something with the physical standby instance?? Actually, if it were something with the standby instance, I would have expected a corruption in the standby redo log files, not the primary's...

    The primary instance resides on a NetApp volume, so I checked the mount options in /etc/fstab, but they were fine. I asked our infrastructure team to check if something had gone wrong with the network during the time I got the corruption, but they reported that there were no errors or anything unusual.

    Ok, I had no choice but to reconstruct the physical standby database, since when an archive log file is missing, the standby database is out of sync. I set 'log_archive_dest_state_2' to defer so no further archive logs would be transferred to the standby server, cleared the corrupted redo log files (alter database clear unarchived logfile 'logfile.log') and reconstructed the physical standby database.

    Meanwhile (copying database files takes long...), I checked the documentation again - maybe I missed something, maybe I configured something wrong. I read a lot and didn't find anything that could shed some light on this issue.

    At this stage, the standby was up and ready. First, I held up the redo transport service (log_archive_dest_state_2='defer') to see if I would get a corruption while the standby was off. After one or two days with no corruption, I activated the standby.

    Then I saw the following sentence in Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2):
    "All members of a Data Guard configuration must run an Oracle image that is built for the same platform. For example, this means a Data Guard configuration with a primary database on a 32-bit Linux on Intel system can have a standby database that is configured on a 32-bit Linux on Intel system"

    One moment, I thought to myself, the standby server is based on AMD processors and the primary server on Intel's... Is that the problem?!
    When talking about the same platform, does that also mean the same processors? Isn't it sufficient to have the same 32-bit OS on x86 machines?
    Weird, but I had to check it...

    Meanwhile, I got a corruption in a redo log file again, which assured me there was a real problem and it wasn't accidental.

    So I used another AMD-based server (identical to the standby server) and started all over again - primary and standby instances. After two or three days with no corruption I started to believe the difference in the processors was the problem. But one day later I got a corruption again (oh no...)

    I must say that on the one hand I was very frustrated, but on the other hand it was a relief to know it wasn't the difference in the processors.
    It was so clear that when I finally found the problem, it would be something stupid...

    So it is not the processors, not the OS and not the network.  What else can it be?!

    And here my familiarity with the "filesystemio_option" initialization parameter begins (thanks to Oracle Support!). I don't know how I missed this note before, but all is written here - Note 437005.1: Redo Log Corruption While Using Netapps Filesystem With Default Setting of Filesystemio_options Parameter.

    When the redo log files are on a NetApp volume, "filesystemio_options" must be set to "directio" (or "setall"). When "filesystemio_options" is set to "none" (like my instance before), reads and writes to the redo log files go through the OS buffer cache. Since NetApp storage is based on NFS (a stateless protocol), the consistency of asynchronous writes over the network is not guaranteed; some writes can be lost. Setting "filesystemio_options" to "directio" makes writes bypass the OS cache layer, so no write is lost.
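
    For reference, checking and changing the parameter looks roughly like this (my sketch, not from the original note; the parameter is static, so it has to be set in the spfile and the instance restarted):

    show parameter filesystemio_options

    alter system set filesystemio_options = 'DIRECTIO' scope = spfile;
    -- restart the instance for the new value to take effect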

    Needless to say, once I set it to "directio" everything was fine and I haven't gotten any corruption since.

    Aviad

    Categories: APPS Blogs

    SAPPU

    Moans Nogood - Tue, 2009-03-17 00:39
    SAP is a huge, mysterious, expensive animal.

    In my very private opinion it is probably the worst ERP system you can buy today. Hence, most whiteshirts will choose it.

    To compensate for the fact that it's old and silly technology, it's also exceedingly expensive. Introducing SAP to your company is the only reliable way to tell whether your company is so financially strong that it almost resembles a monopoly.

    But what I really hate about SAP is that it removes people from the Oracle database field. I think most of us have experienced the following scenario:

    A colleague or a bunch of colleagues are selected to help implement SAP. Until then they've been ordinary DBA's, fixing stuff, running databases and leading normal family lives.

    Then they go away for EXTENSIVE training over a LONG period of time. In between the 42 week-long classes they have to take (per year, of course), they usually rest with their families and might show up for short, social functions among their (still) colleagues. But they have these mysterious, far-away eyes... you can't quite reach them.

    Then you get the famous message by mouth or email stating:

    "We're now almost ready to [...], so for the next transition period of [...], I'll be working half of my time with my old stuff and half of my time with SAP before moving to full-time SAP obligations."

    AND THAT'S THE LAST YOU EVER HEAR FROM THEM!

    I think, but I could be wrong, that they're sucked into a place and time in space that the rest of us can't see or in other ways sense.

    A Harry Potter-like parallel universe.

    From which, mind you, they never return.

    Numerous are the good Oracle DBA's who disappear from the real Oracle world this way.

    Now it appears that in their parallel universe (the SAPPU, it could be called) they're slowly corrupted into thinking about Oracle databases the way the real (the few, the remaining) Oracle DBA's thought about databases in the 80's.

    They're forced to unlearn all the right things they had learned. A kind of communist re-schooling or indoctrination. So sad.

    Here follows a real mail thread from some friends of mine who know more about Oracle than most. They shall, of course, remain nameless - we still have no idea about the powers and general abilities of our disappeared friends in the SAPPU, and so a certain degree of fear of the unknown makes us cautious....

    ======================================================================
    Oracle Person One said:

    I find myself once again embroiled in an SAP/Oracle Issue.

    It seems I broke something in the production database by applying security patches, even though the 5 development and production servers have no such issues. (No, we can't afford a full system to test patches on - sigh...)

    There are a number of ORA-7445 and ORA-600 errors being issued when DML is attempted, along with the requisite trace files. (Yes, there is an active SR already.)

    The SAP Basis team sees ORA-3113 and ORA-3114 errors.

    I think I finally have them convinced to stop trying to 'fix' the ORA-3113...

    Which brings me to the point: The usual method of troubleshooting SAP problems, as practiced by the SAP team, and nearly every SAP person I have worked with (not a terribly large sample - maybe I am just really lucky), goes something like this:

    1. Search the SAP support site for every note containing ORA-600, ORA-7445. There are a few as you can well imagine.

    2. Call a meeting to discuss which of the actions in these notes should be taken. Whether or not the contents of the note actually match the problem at hand seems to be irrelevant.

    3. Ask the DBA to run a script (recommended by SAP support) to check for some problem or another in the data. This script will launch full table scans for every table in the database... using a for/next loop. Fortunately for me, the script is broken as-is... and I can't seem to find the problem with it.

    4. Rinse and repeat - effectiveness is unimportant, only looking busy is important.

    To memorialize this method, I have created a link to the following short, but accurately portrayed method of this troubleshooting methodology:

    http://tinyurl.com/sap-troubleshooting-method
    ===============================================================
    For that Person Two commented:

    Make that, 'development and test'
    ===============================================================
    Person Three needed this off his chest:

    So, regarding recent security patches that might cause havoc:

    The one that prevents old client retries in clear text when the encrypted handshake is rejected affects some connection attempts.

    “Repairing” the permissions of the archive log destination can ultimately get the archiver stuck far enough behind to toss a 7445 (I think); I didn't look it up, and since my databases never have problems I get rusty on the error messages (tongue firmly in cheek). If memory serves (see previous sentence), one of the security patches suggests repairing the permissions on the archive log directory without telling you to make sure the ownership is correct.

    What are your vintages? I’ve only done SAP stuff ONCE (and walked away quickly and quietly having proved that a certain physical reordering solved all their stated performance issues on the load testing system only to be informed that any manipulation of the data outside of SAP was not allowed.)
    ===============================================================
    Person One felt he finally had someone to talk to who understood him:

    Yup, that is SAP SOP.

    So far I have refused to do that particular operation, except in a couple of cases where it recovered a lot of empty space due to archival of data.

    The 'make work' analogy from "The Longest Yard" (the old one with Burt Reynolds and Eddie Albert) was used in reference to the "SAP Reorg" mentality when they last asked me to do a db reorg.

    They didn't get it.

    For those of you who don't already know it, the 'make work' for the prison workers in the facility where Reynolds' character was incarcerated consisted of the following:

    Morning: shovel mud out of the swamp.
    Afternoon: shovel mud back into the swamp.

    By the classic definition of work, nothing was accomplished in the end.
    But the inmates were still sweaty, tired, thirsty and hungry.
    ===============================================================
    Person Five then finally could say this to a friendly crowd:

    I wholeheartedly agree with both of you.

    I spent almost 4 years as part of a team that supported SAP. There were 3 DBAs on the team, and both of the other two received their primary DBA training from SAP. Their method was to manage EVERY database as if it were an SAP database.

    I volunteered to do every non-SAP upgrade and rearranged everything back to Oracle standards when they weren't looking. :) It took about 2 years to get to them all, but it was well worth the trouble.

    As for the SAP databases, patch application and upgrades always had different results on dev, test or prod. It was utterly baffling.
    ===============================================================

    Ah well, based on all this, it is really no wonder that SAP is so wildly popular and has won the whole upper(ERP market) over less complicated, cheaper, more technologically advanced - and way more agile - competitors.

    If you can sit in the local CEO club and claim: "We actually managed to pay for the WHOLE SAP implementation with our own money and we're still functioning in several departments..." your fellow CEOs will know that you have more money than God or AIG's dealers.

    You will have the utmost respect from them, as they scramble to try and explain why their predecessors in their respective companies chose to implement something different from SAP. Cheaper, of course.

    But one day, when they have gained enough financial strength....

    Here's one final salute to all those lost colleagues from the Oracle database space. We'll always remember you. You'll never be forgotten. Long live your memory.

    Wherever you are. In whatever lifeform.

    Multiplying texts and null effect

    Pawel Barut - Sun, 2009-03-15 16:39
    Written by Paweł Barut
    This time a short note on a side effect of implicit type conversion and short-circuit expression evaluation. Let me show this by example - try this statement:
    SQL> select 'a'*'b' from dual;
    select 'a'*'b' from dual
           *
    ERROR at line 1:
    ORA-01722: invalid number
    As expected, we get an error. This is due to the fact that the multiplication operator (*) expects numbers on both sides, so an implicit conversion takes place. However, this statement:
    SQL> select ''*'b' from dual;

        ''*'B'
    ----------
    does not give an error. How is that possible? In Oracle, an empty string is in fact NULL. NULL multiplied by any number gives NULL, so as an optimization Oracle evaluated this expression to NULL without converting 'b' to a number.
    As you know, 0 (zero) multiplied by any number also gives zero. So let's test it:
    SQL> select '0'*'b' from dual;
    select '0'*'b' from dual
               *
    ERROR at line 1:
    ORA-01722: invalid number
    In this case Oracle did not optimize the evaluation, and decided to do the full calculation. I do not know the reason for the different behaviour, but it is repeatable (at least on the instances I have access to).
    My recommendation: always make sure that you provide data of the proper data type. Make data conversions explicit using the to_number, to_date or to_char functions.
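
    For example (my sketch, following this recommendation):

    SQL> select to_number('0') * 12 from dual;

    TO_NUMBER('0')*12
    -----------------
                    0

    SQL> select to_number('b') * 12 from dual;
    select to_number('b') * 12 from dual
           *
    ERROR at line 1:
    ORA-01722: invalid number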

    Keep reading,
    Paweł

    Categories: Development

    HotSos Symposium 2009 Materials

    Mark A. Williams - Sat, 2009-03-14 13:15

    The paper and all sample code from my HotSos Symposium 2009 presentation entitled "Instrumented Code Is Better Code" is available as a single .zip file. To download the .zip file, click here.

    The Consultant on Backups

    Oracle WTF - Sat, 2009-03-14 12:30

    Our correspondent overheard The Consultant sorting out the backup requirements for the new system:

    Consultant: You have a 6-hour window overnight. Now, as the queues get longer under heavy loading, the end-of-day queue clearing will run into that 6 hours. Your backup window will start to get squeezed, so we need to know the minimum time to back up this amount of data, including the time to shut down and start up the databases.

    Technical guy: Why?

    Consultant: Because it has to take place in that 6 hour window.

    Technical guy: Why is that?

    Consultant: Because then the users will come back on line and want to use the system.

    Technical guy: So?

    Consultant: They can't use the system if it's down to be backed up.

    Technical guy: We'll use an online backup and they can do whatever they like.

    Consultant: Well, if I was in auditing I'd fire you right now. You simply have to shut a database down to back it up. It's the only way you can get a consistent backup.

    Another One From the Archives: Easy Connection Identifier

    Sergio's Blog - Tue, 2009-03-10 08:30

    My good friend Joel showed me how you can connect to a (remote) database using an easy connect identifier. Here's how it works:

        sqlplus sergiodb/sergiodb@//127.0.0.1:1521/mvl
    

    It appears to be a way of constructing a connect identifier that is new as of Database 10g Release 1. The syntax is as follows:

       [//]host[:port][/service_name]
    

    The documentation sums it up nicely: The easy connection identifier can be used wherever you can use a full connection identifier, or a net service name. The easy syntax is less complex, and no tnsnames.ora entry is required.
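
    For comparison, the tnsnames.ora entry that the easy connect string above replaces would look roughly like this (my sketch, matching the example values):

        MVL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = mvl))
          )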

    Categories: DBA Blogs

    Sand Storm in Saudi Arabia...Today.

    Sabdar Syed - Tue, 2009-03-10 07:57
    Hello,

    Today, there was a huge sand storm in Saudi Arabia. This is the first time I have seen such a real, live sand storm covering Riyadh City in Saudi Arabia.

    Here are a few pics shared in this blog.

    [Photos of the sand storm appeared here in the original post.]

    Regards,
    Sabdar Syed.

    JRE Plug-in “Next-Generation” – Part II

    Aviad Elbaz - Tue, 2009-03-10 04:22

    In my last post, "JRE Plug-in 'Next-Generation' - to migrate or not?", I wrote about a Forms launching issue in EBS right after upgrading JRE (the Java Plug-in) to version 6 update 11, which works with the new next-generation Java Plug-in architecture. The problem happens inconsistently, and Forms only launches reliably when I disable the "next-generation Java Plug-in".

    Following an SR I opened with Oracle Support about this issue, I was asked to verify that the profile option "Self Service Personal Home Page Mode" is set to "Framework Only".

    We had this profile option set to "Personal Home Page", as our users prefer that mode to the "Framework Only" one.

    It's important to note that "Personal Home Page" is not a supported value for the "Self Service Personal Home Page Mode" profile option and may cause unexpected issues.

    After setting the profile option to "Framework Only" the problem was resolved and the screen doesn't freeze anymore.

    So the solution in my case was to set the profile option "Self Service Personal Home Page Mode" to "Framework Only" (we are still testing it, but it looks fine so far). However, there are two more options that seem to work, even when the profile option is set to "Personal Home Page" and the "next-generation Java Plug-in" is enabled.

    1) Uncheck "Keep temporary files on my computer"
    - Navigate to the Java control panel (Start -> Settings -> Control Panel -> Java, or Start -> Run -> javacpl.cpl)
    - On the General tab -> Temporary Internet Files -> Settings -> uncheck "Keep temporary files on my computer".
    - Check the issue from a fresh IE session.
     

    I'm not sure how or why, but it solves the problem - no more freezing this way.

    2) Set “splashScreen” to null
    - Edit the $FORMS60_WEB_CONFIG_FILE file on your Forms server node.
    - Change this line
    "splashScreen=oracle/apps/media/splash.gif"
    to
    "splashScreen="

    - No need to bounce any service.
    - Check the issue from a fresh IE session.

    Again, it's not so clear how or why, but it solves the problem as well.

    Now we just need to convince our users to accept the "Framework Only" look and feel, and then we will consider upgrading all our clients to the new next-generation Java Plug-in.

    You are welcome to leave a comment or share your experience with the new Java Plug-in.

    Aviad

    Categories: APPS Blogs

    ESB and java.security.PrivilegedActionException

    Peeyush Tugnawat - Sat, 2009-03-07 03:34

    I was running into ESB invocation failures and was not able to invoke ESB processes from BPEL/ESB processes, getting the following error:

     "java.security.PrivilegedActionException: javax.xml.soap.SOAPException"

    It was hard to find the root cause of this error, since we were not using any SOAP header security mechanism, as the error might suggest or imply.


    If you are getting this exception and are not using any security mechanism, check for any unknown or special characters in the input message payload. In my case that was what caused the failure with the above error message; once I got rid of the special characters, the error went away.
