Development

Out Now!! Application Express 3.2.1

Anthony Rayner - Wed, 2009-08-26 04:28
The Oracle Application Express 3.2.1 patch set is now available for download and provides not only bug fixes, but also some additional functionality and considerations, as summarised by Joel and detailed in the patch set notes.

You can get hold of it by either:
  • Downloading the full version from OTN.

  • Downloading patch set 8548651 from METALINK.

If you're upgrading from any APEX version pre-3.2, then you'll need to use the full OTN release. Otherwise, if you're upgrading from 3.2, you only need the patch set.

Also in this patch set, we have included an additional documentation chapter, entitled Accessibility in Oracle Application Express. This aims to provide information for users who are accessing Oracle Application Express utilizing only a keyboard or Freedom Scientific's screen reader JAWS. It details the current accessibility issues in APEX and shows workarounds where they are possible. (We hope to address a number of these issues in APEX 4.0.)

I would be very interested to hear from anyone who uses APEX with only a keyboard, a screen reader or other assistive technology, for feedback on how we can hopefully get better at being accessible to our users with disabilities. Likewise, if you use APEX to build applications that have strict accessibility requirements and have feedback on your experiences, I would love to hear from you.

Please drop me an email at the email address in my profile if you would like to talk about this.

Anthony.

Categories: Development

Oracle JHeadstart 11.1.1 Technical Preview 2 Now Available

JHeadstart - Tue, 2009-08-11 23:17

Oracle JHeadstart 11.1.1 Technical Preview 2 is now available for download (build 11.1.1.0.46).
This release is compatible with JDeveloper 11 release 1 (11.1.1.1.0), as well as with the two JDeveloper Boxer releases (11.1.1.0.1 and 11.1.1.0.2).
Customers who own a JHeadstart supplement option license can download it from the Consulting Supplement Option portal.

An evaluation version of JHeadstart 11.1.1 is not yet available. We plan to make an evaluation version available in the fall of calendar year 2009.

In addition to many small enhancements, the following features have been added to the second technical preview:


  • Support for Stretching Layouts: JHeadstart now fully supports stretching options for ADF Faces Rich Client components.
    Through a new group-level property Enable Stretching, you can stretch the generated group content to fill the available space. Whether or not stretching
    will actually occur depends on some additional conditions that must be met. See the help description of this new group property in the application definition editor, and the
    JHeadstart Developer's Guide for more info.

  • New Table Overflow Styles: The Table Overflow Style group property has two new allowable values: 'Right with Splitter' and 'Below with Splitter', which are
    useful settings when you want to generate multiple panes with stretching layouts that adjust based on the position of the splitter.

  • New Allowable Values for Same Page Display Position: The Same Page Display Position group property has two new allowable values: 'At the Right of Parent Group with Splitter'
    and 'Below Parent Group with Splitter', which are useful settings when you want to generate multiple panes with stretching group layouts that adjust
    based on the position of the splitter.
  • New Allowable Values for Region Container Layout: The Layout property of a region container has two new allowable values: 'Horizontal with Splitter'
    and 'Vertical with Splitter', which are useful settings when you want to generate multiple panes with stretching region layouts that adjust
    based on the position of the splitter.
  • Ability to Add Bindings to PageDefinition using Velocity Templates: It is now possible to add bindings and iterators to the page definition using
    custom velocity templates. Here are some examples:

    ${JHS.pageDefGenerator.addBinding($JHS.page,"myBinding","#CUSTOM_BINDING()")}
    ${JHS.pageDefGenerator.addExecutable($JHS.page,"myIterator","#CUSTOM_BINDING()")}
    ${JHS.pageDefGenerator.addParameter($JHS.page,"myParam","#CUSTOM_BINDING()")}


    The custom binding macro simply contains the XML you want to add to the page definition. This macro can be defined within the same
    velocity template.
  • Flex items: The 10.1.3 functionality of Flex Items (items added dynamically at runtime) is now re-implemented for release 11. The region that
    contains the flex items is now a reusable task flow, providing you with more flexibility and re-use of code.

  • File up/download: File upload, file download and display of images are now supported in release 11. It supports Oracle interMedia (ORDSYS) column types, as well
    as BLOB column types.

  • XML Reformatting: There is a new application-level property Reformat all XML Files which will reformat all generated XML files (except for the page definitions),
    giving you more readable and consistent files across your application.

  • Upgrading from 10.1.3: When opening a 10.1.3 application definition file, JHeadstart will migrate this to release 11 upon your approval.

Click here for a list of features added in the first technical preview.

For a complete list of all existing features in JHeadstart 10.1.3, use this link.

Categories: Development

Understanding DBMS_STATS.SET_*_PREFS procedures

Inside the Oracle Optimizer - Tue, 2009-08-11 21:25
In previous database releases you had to use the DBMS_STATS.SET_PARAM procedure to change the default value for the parameters used by the DBMS_STATS.GATHER_*_STATS procedures. Any change that was made applied to all subsequent operations. In Oracle Database 11g, the DBMS_STATS.SET_PARAM procedure has been deprecated and replaced with a set of procedures that allow you to set a preference for each parameter at the table, schema, database, and global level. These new procedures are called DBMS_STATS.SET_*_PREFS and offer a much finer granularity of control.

However, there has been some confusion around which procedure you should use when, and what the hierarchy is among these procedures. In this post we hope to clear up the confusion. Let's start by looking at the list of parameters you can change using the DBMS_STATS.SET_*_PREFS procedures.

  • AUTOSTATS_TARGET (SET_GLOBAL_PREFS only)
  • CASCADE
  • DEGREE
  • ESTIMATE_PERCENT
  • METHOD_OPT
  • NO_INVALIDATE
  • GRANULARITY
  • PUBLISH
  • INCREMENTAL
  • STALE_PERCENT

As mentioned above there are four DBMS_STATS.SET_*_PREFS procedures.

  1. SET_TABLE_PREFS

  2. SET_SCHEMA_PREFS

  3. SET_DATABASE_PREFS

  4. SET_GLOBAL_PREFS


The DBMS_STATS.SET_TABLE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for the specified table only.

The DBMS_STATS.SET_SCHEMA_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the existing objects in the specified schema. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in the specified schema. Since it uses DBMS_STATS.SET_TABLE_PREFS, calling this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL_PREF values for all parameters.

The DBMS_STATS.SET_DATABASE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the user defined schemas in the database. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in each of the user defined schemas. Since it uses DBMS_STATS.SET_TABLE_PREFS, this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL_PREF values for all parameters. It is also possible to include the Oracle owned schemas (SYS, SYSTEM, etc.) by setting the ADD_SYS parameter to TRUE.

The DBMS_STATS.SET_GLOBAL_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for any object in the database that does not have an existing table preference. All parameters default to the global setting unless there is a table preference set or the parameter is explicitly set in the DBMS_STATS.GATHER_*_STATS command. Changes made by this procedure will affect any new objects created after it has been run as new objects will pick up the GLOBAL_PREF values for all parameters.
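
For illustration, here is a minimal sketch of how the four procedures are called; the SH.SALES table and the HR schema are just placeholder names:

-- Table level: consider SH.SALES stale once 5% of its rows have changed
exec DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'STALE_PERCENT', '5');

-- Schema level: applies to all tables that currently exist in HR
exec DBMS_STATS.SET_SCHEMA_PREFS('HR', 'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');

-- Database level: all user defined schemas
exec DBMS_STATS.SET_DATABASE_PREFS('CASCADE', 'TRUE');

-- Global level: the default for any object without a table preference
exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH', 'FALSE');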

With GLOBAL_PREFS it is also possible to set a default value for one additional parameter, called AUTOSTATS_TARGET. This additional parameter controls which objects the automatic statistics gathering job (that runs in the nightly maintenance window) will look after. The possible values for this parameter are ALL, ORACLE, and AUTO. ALL means the automatic statistics gathering job will gather statistics on all objects in the database. ORACLE means that the automatic statistics gathering job will only gather statistics for Oracle owned schemas (SYS, SYSTEM, etc.). Finally, AUTO (the default) means Oracle will decide what objects to gather statistics on. Currently AUTO and ALL behave the same.
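
For example, to restrict the nightly job to the Oracle owned schemas only (a minimal sketch):

exec DBMS_STATS.SET_GLOBAL_PREFS('AUTOSTATS_TARGET', 'ORACLE');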

In summary, DBMS_STATS obeys the following hierarchy for parameter values: parameter values set explicitly in the DBMS_STATS.GATHER_*_STATS command overrule everything. If the parameter has not been set in the command, we check for a table level preference. If there is no table preference set, we use the global preference.
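
You can check which value will actually be used with DBMS_STATS.GET_PREFS, which resolves this hierarchy for you. A small sketch, again using SH.SALES as a placeholder:

-- Value that applies to SH.SALES: its table preference if set, otherwise the global preference
select DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SH', 'SALES') from dual;

-- Current global default
select DBMS_STATS.GET_PREFS('STALE_PERCENT') from dual;
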
Categories: DBA Blogs, Development

Will the Optimizer Team be at Oracle Open World 2009?

Inside the Oracle Optimizer - Thu, 2009-07-23 20:18
With only two and a half months to go until Oracle Open World in San Francisco, October 11-15th, we have gotten several requests asking if we plan to present any sessions at the conference.

We have two sessions and a demo station in the Database campground at this year's show: a technical presentation on What to Expect from the Oracle Optimizer When Upgrading to Oracle Database 11g, and the Oracle Optimizer Roundtable.

The technical session, which is on Tuesday Oct. 13 at 2:30 pm, gives step-by-step instructions and detailed examples of how to use the new 11g features to ensure your upgrade goes smoothly and without any SQL plan regressions.

The roundtable, which is on Thursday Oct. 15th at 10:30 am, will give you a first-hand opportunity to pose your burning Optimizer and statistics questions directly to a panel of our leading Optimizer developers. In fact, if you plan to attend the roundtable and already know what questions you would like to ask, then please send them to us via email and we will be sure to include them. Otherwise, you can hand in your questions at our demo station at any stage during the week, or as you enter the actual session. Just be sure to write your questions in clear block capitals!

We look forward to seeing you all at Oracle Open World.

Categories: DBA Blogs, Development

Initial version of DataMapper Oracle adapter

Raimonds Simanovskis - Mon, 2009-07-20 16:00

What is DataMapper?

DataMapper is a Ruby object/relational mapper that is similar to ActiveRecord (the ORM component of Ruby on Rails), but it handles several things differently than ActiveRecord.

I got interested in DataMapper because I liked some of its design decisions better when compared with ActiveRecord. In particular, the DataMapper architecture can suit better if you need to work with legacy Oracle database schemas – that is the area where I use Ruby on Rails a lot, and for these purposes I also created the Oracle enhanced adapter for ActiveRecord.

But as there was no Oracle adapter available for DataMapper, I needed to create one :) I started to work on the Oracle adapter for DataMapper after RailsConf, and now it is passing all DataMapper tests on all Ruby platforms – MRI 1.8, Ruby 1.9 and JRuby 1.3.

Why DataMapper for Oracle database?

If you would like to learn the main differences between DataMapper and ActiveRecord, then please start with this overview and this summary of benefits.

Here I will mention the specific benefits if you would like to use DataMapper with an Oracle database.

Model properties

In DataMapper you always specify in the model class definition which Ruby “type” you would like to use for each model attribute (or property, as it is called in DataMapper):

class Post
  include DataMapper::Resource
  property :id,         Serial
  property :title,      String
  property :post_date,  Date
  property :created_at, DateTime
  property :updated_at, Time
end

The main benefit is that you can explicitly define when to use the Ruby Time, Date or DateTime class, which is stored as DATE (or sometimes as TIMESTAMP) in the Oracle database. In addition you can define your own custom DataMapper types and define how to serialize them into the database.

Composite primary keys

The DataMapper core library supports composite primary keys for models. If you use ActiveRecord then there is the option to use the additional composite_primary_keys gem, but it regularly breaks with the latest ActiveRecord versions, and quite often it also might break in some edge cases. In DataMapper composite primary keys are defined quite simply:

class City
  include DataMapper::Resource
  property :country,   String, :key => true
  property :name,      String, :key => true
end
Legacy schemas

DataMapper is quite useful when you want to put Ruby models on top of existing Oracle schemas. It is possible to provide a different database field name for a property, or to provide a custom sequence name for primary keys:

class Post
  include DataMapper::Resource
  property :id, Serial, :field => "post_id", :sequence => "post_s"  
end

You can also define one model that can be persisted in two different repositories (e.g. databases or schemas) and use different naming conventions in each repository:

class Post
  include DataMapper::Resource
  repository(:old) do
    property :id, Serial, :field => "post_id", :sequence => "post_s"
  end
  repository(:default) do
    property :id, Serial
  end
end

As a result DataMapper can also be used for data migration between different databases.

Bind variables

ActiveRecord always generates SQL statements for execution as one single string. Therefore the Oracle enhanced adapter always initializes the Oracle session with the setting cursor_sharing='similar'. This instructs Oracle to always take all literals (constants) from the SQL statement and replace them with bind variables. It reduces the number of unique SQL statements generated, but it is also some overhead for the Oracle optimizer.
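
As a rough illustration of what that session setting does (the posts table is just an example name):

ALTER SESSION SET cursor_sharing = 'SIMILAR';

-- a statement arriving with a literal ...
SELECT * FROM posts WHERE id = 123;

-- ... is rewritten by Oracle before parsing, roughly as:
SELECT * FROM posts WHERE id = :"SYS_B_0";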

DataMapper always passes all statement parameters separately to the corresponding database adapter, and therefore it is possible for the Oracle adapter to pass all parameters as bind variables to Oracle.

CLOB and BLOB values inserting and selecting

As ActiveRecord passes all inserted values as literals in the INSERT statement, it was not possible to insert large CLOB and BLOB values directly in the INSERT statement. Therefore the ActiveRecord Oracle enhanced adapter used separate callbacks to insert any CLOB or BLOB data after the INSERT of the other data. In DataMapper it is possible to insert all data at once, as CLOB and BLOB data are passed as bind variables.

DataMapper also handles lazy loading of large columns better. If you define a property as Text then by default it will not be selected from the database – it will be selected separately only when you use it. Typically this can reduce the amount of data that needs to be sent from the database to the application, as Text properties are quite often not needed on, for example, all web pages.

Why not DataMapper?

If you are fine with the ActiveRecord default conventions and you don’t have any of the issues that I listed previously, then probably ActiveRecord is good enough for you and you shouldn’t change to DataMapper. There are of course many more Rails plugins that work with ActiveRecord but not yet with DataMapper. And DataMapper is still much less used, and therefore there might be some edge cases where it is not tested and you will need to find the cause of the issue by yourself.

But if you like to try new things then please try it out – the DataMapper community is quite friendly and helpful and will help to solve any issues :)

Installation of DataMapper Oracle adapter

So if you have decided to try DataMapper with an Oracle database, then follow these instructions on how to install it.

Oracle support is done for the current development version 0.10.0 of DataMapper – therefore you will need to install the latest versions from GitHub (they are still not published as gems on RubyForge).

DataMapper with the Oracle adapter can be used both on MRI 1.8.6 (I am not testing it on 1.8.7) and Ruby 1.9.1, as well as on JRuby 1.3. Currently installation is tested on Mac OS X and Linux – if anyone is interested in Windows support then please let me know.

MRI 1.8.6 or Ruby 1.9.1

At first you need to have the same preconditions as for ActiveRecord:

  • Oracle Instant Client
  • ruby-oci8 gem, version 2.0.2 or later

If you are using a Mac then you can use these instructions for installation.

Now it is necessary to install the DataObjects Oracle driver first – the DataObjects library is a unified interface to relational databases (like SQLite, MySQL, PostgreSQL, Oracle) that DataMapper uses to access these databases.

First validate that you have the latest version of RubyGems installed, and install the necessary additional gems:

gem update --system
gem install addressable -v 2.0

As I mentioned, currently you need to install the latest version from GitHub (first create and go to the directory where you would like to store the DataMapper sources):

git clone git://github.com/datamapper/extlib.git
cd extlib
git checkout -b next --track origin/next
rake install
cd ..
git clone git://github.com/datamapper/do.git
cd do
git checkout -b next --track origin/next
cd data_objects
rake install
cd ../do_oracle
rake compile
rake install
cd ../..

Now, if the DataObjects installation was successful, you can install DataMapper. UPDATE: The Oracle adapter is now in the “next” branch of DataMapper, so now you need to install it from there:

git clone git://github.com/datamapper/dm-core.git
cd dm-core
git checkout -b next --track origin/next
rake install

Now start irb and test whether you can connect to the Oracle database (change the database name, username and password according to your setup):

require "rubygems"
require "dm-core"
DataMapper.setup :default, "oracle://hr:hr@xe"

and try some basic DataMapper operations (I assume that you don’t have a posts table in this schema):

class Post
  include DataMapper::Resource
  property :id,     Serial, :sequence => "posts_seq"
  property :title,  String
end
DataMapper.auto_migrate!
p = Post.create(:title=>"Title")
Post.get(p.id)
Post.auto_migrate_down!
JRuby

First of all, I assume that you have already installed the latest JRuby version (1.3.1 at the moment).

Then you need to place the Oracle JDBC driver ojdbc14.jar file in the JRUBY_HOME/lib directory (another option is just to put it somewhere in the PATH).

All other installation steps should be done in the same way – just use “jruby -S gem” instead of “gem” and “jruby -S rake” instead of “rake”, and it will install the necessary gems for JRuby.

In addition, before installing the do_oracle gem you need to install the do_jdbc gem (which contains the general JDBC driver functionality):

# after installation of data_objects gem
cd ../do_jdbc
jruby -S rake compile
jruby -S rake install
# continue with do_oracle installation
Other DataMapper gems

DataMapper is much more componentized than ActiveRecord. Here I described how to install just the main dm-core gem. You can see the list of other gems in DataMapper web site.

To install additional DataMapper gems you need to:

git clone git://github.com/datamapper/dm-more.git
cd dm-more
git checkout -b next --track origin/next
cd dm-some-other-gem
rake install
Questions?

This was my first attempt to describe how to start using DataMapper with Oracle. If you have any questions, or something is not working for you, then please write comments and I will try to answer them and fix any issues in these instructions.

Categories: Development

2 JHeadstart Team Members in Oracle Innovation Showcase

JHeadstart - Thu, 2009-07-16 00:07

We are honored that 2 of our JHeadstart Team members are included in the Conversations with Oracle Innovators section of the Oracle Innovation Showcase.

Full credit to Steven, who is the driving force behind the JHeadstart innovations!

Categories: Development

OTN's APEX Developer Competition 2009

Anthony Rayner - Wed, 2009-07-01 05:22
Are you the...

  ...travelling type?  Fancy winning a free ticket for Oracle OpenWorld in San Francisco (October 11 - 15) to meet with like minded APEX enthusiasts and learn more about APEX and other Oracle technology?


Or maybe more the...

  ...bragging type?  How would the words 'Oracle Application Express Developer Competition Winner 2009' look on your CV? It does have a certain ring to it, don't you think?


Or even the...

  ...academic type?   What about the prospect of furthering your understanding of APEX by paging through your winning copy of 'Pro Oracle Application Express'?


Whatever your reasons, enter the OTN 'Oracle Application Express Developer Competition 2009' by submitting an APEX application that stands out from the crowd, and you could be in with a chance of winning one of these great prizes or accolades!

For more information, including submission guidelines, the all-important judging criteria and registration details, please visit the OTN page and David Peake's related post. Entries close 24 August, 2009.

Good luck!
Categories: Development

Approaches to "UPSERT"

Kenneth Downs - Mon, 2009-06-29 20:33

This week in the Database Programmer we look at something called an "UPSERT", the strange trick where an insert command may magically convert itself into an update if a row already exists with the provided key. This trick is very useful in a variety of cases. This week we will see its basic use, and next week we will see how the same idea can be used to materialize summary tables efficiently.

An UPSERT or ON DUPLICATE KEY...

The idea behind an UPSERT is simple. The client issues an INSERT command. If a row already exists with the given primary key, then instead of throwing a key violation error, it takes the non-key values and updates the row.

This is one of those strange (and very unusual) cases where MySQL actually supports something you will not find in any of the other more mature databases. So if you are using MySQL, you do not need to do anything special to make an UPSERT. You just add the clause "ON DUPLICATE KEY UPDATE" to the INSERT statement:

insert into mytable (a,b,c) values (1,2,3)
    on duplicate key update
     b = 2,
     c = 3

The MySQL command gives you the flexibility to specify a different operation on UPDATE versus INSERT, but with that flexibility comes the requirement that the UPDATE clause completely restates the operation.

With the MySQL command there are also various considerations for AUTO_INCREMENT columns and multiple unique keys. You can read more at the MySQL page for the INSERT ... ON DUPLICATE KEY UPDATE feature.

A Note About MS SQL Server 2008

MS SQL Server introduced something like UPSERT in SQL Server 2008. It uses the MERGE command, which is a bit hairy; check it out in this nice tutorial.
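
To give a flavor of it, here is a minimal sketch of a MERGE against the ORDERLINES table used in the examples below (not a full treatment of the command):

MERGE INTO orderlines AS target
USING (VALUES (1234,'ABC',5)) AS src (order_id,sku,qty)
   ON target.order_id = src.order_id
  AND target.sku      = src.sku
WHEN MATCHED THEN
    UPDATE SET qty = src.qty
WHEN NOT MATCHED THEN
    INSERT (order_id,sku,qty)
    VALUES (src.order_id,src.sku,src.qty);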

Coding a Simpler UPSERT

Let us say that we want a simpler UPSERT, where you do not have to mess with SQL Server's MERGE or rewrite the entire command as in MySQL. This can be done with triggers.

To illustrate, consider a shopping cart with a natural key of ORDER_ID and SKU. I want simple application code that does not have to figure out if it needs to do an INSERT or UPDATE, and can always happily do INSERTs, knowing they will be converted to updates if the line is already there. In other words, I want simple application code that just keeps issuing commands like this:

INSERT INTO ORDERLINES
       (order_id,sku,qty)
VALUES 
       (1234,'ABC',5)

We can accomplish this with a trigger. The trigger must occur before the action, and it must redirect the action to an UPDATE if necessary. Let us look at examples for MySQL, Postgres, and SQL Server.

A MySQL Trigger

Alas, MySQL giveth, and MySQL taketh away. You cannot code your own UPSERT in MySQL because of an extremely severe limitation in MySQL trigger rules: a MySQL trigger may not affect a row different from the row originally affected by the command that fired the trigger. So a trigger fired by an INSERT cannot convert that insert into an UPDATE of a different row.

Note: I may be wrong about this. This limitation has bitten me on several features that I would like to provide for MySQL. I am actually hoping this limitation will not apply for UPSERTs because the new row does not yet exist, but I have not had a chance yet to try.

A Postgres Trigger

The Postgres trigger example is pretty simple; hopefully the logic is self-explanatory. As with all code samples, I did this off the top of my head, so you may need to fix a syntax error or two.

CREATE OR REPLACE FUNCTION orderlines_insert_before_F()
RETURNS TRIGGER
 AS $BODY$
DECLARE
    result INTEGER; 
BEGIN
    SET SEARCH_PATH TO PUBLIC;
    
    -- Find out if there is a row
    result = (select count(*) from orderlines
                where order_id = new.order_id
                  and sku      = new.sku
               );

    -- On the update branch, perform the update
    -- and then return NULL to prevent the 
    -- original insert from occurring
    IF result = 1 THEN
        UPDATE orderlines 
           SET qty = new.qty
         WHERE order_id = new.order_id
           AND sku      = new.sku;
           
        RETURN null;
    END IF;
    
    -- The default branch is to return "NEW" which
    -- causes the original INSERT to go forward
    RETURN new;

END; $BODY$
LANGUAGE 'plpgsql' SECURITY DEFINER;

-- That extremely annoying second command you always
-- need for Postgres triggers.
CREATE TRIGGER orderlines_insert_before_T
   before insert
   ON ORDERLINES
   FOR EACH ROW
   EXECUTE PROCEDURE orderlines_insert_before_F();
A SQL Server Trigger

SQL Server INSTEAD OF INSERT triggers are significantly different from Postgres triggers. First of all, they operate at the statement level, so that you have a set of new rows instead of just one. Secondly, the trigger must itself contain an explicit INSERT command, or the INSERT never happens. All of this means our SQL Server example is quite a bit more verbose.

The basic logic of the SQL Server example is the same as the Postgres one, with two additional complications. First, we must use a CURSOR to loop through the incoming rows. Second, we must explicitly code the INSERT operation for the case where it occurs. But if you can see past the cruft we get for all of that, the SQL Server example is doing the same thing:

CREATE TRIGGER upsource_insert_before
ON orderlines
INSTEAD OF insert
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @new_order_id int;
    DECLARE @new_sku      varchar(15);
    DECLARE @new_qty      int;
    DECLARE @result       int;

    DECLARE trig_ins_orderlines CURSOR FOR 
            SELECT * FROM inserted;
    OPEN trig_ins_orderlines;

    FETCH NEXT FROM trig_ins_orderlines
     INTO @new_order_id
         ,@new_sku
         ,@new_qty;

    WHILE @@Fetch_status = 0 
    BEGIN
        -- Find out if there is a row now
        SET @result = (SELECT count(*) from orderlines
                        WHERE order_id = @new_order_id
                          AND sku      = @new_sku
                      )
    
        IF @result = 1 
        BEGIN
            -- Since there is already a row, do an
            -- update
            UPDATE orderlines
               SET qty = @new_qty
             WHERE order_id = @new_order_id
               AND sku      = @new_sku;
        END
        ELSE
        BEGIN
            -- When there is no row, we insert it
            INSERT INTO orderlines 
                  (order_id,sku,qty)
            VALUES
                  (@new_order_id,@new_sku,@new_qty);
        END

        -- Pull the next row
        FETCH NEXT FROM trig_ins_orderlines
         INTO @new_order_id
             ,@new_sku
             ,@new_qty;

    END  -- Cursor iteration

    CLOSE trig_ins_orderlines;
    DEALLOCATE trig_ins_orderlines;

END
A Vague Uneasy Feeling

While the examples above are definitely cool and nifty, they ought to leave a certain nagging doubt in many programmers' minds. This doubt comes from the fact that an insert is not necessarily an insert anymore, which can lead to confusion. Just imagine the new programmer who has joined the team and is banging his head on his desk because he cannot figure out why his INSERTs are not working!

We can add a refinement to the process by making the UPSERT behavior optional. Here is how we do it.

First, add a column to the ORDERLINES table called _UPSERT that is a char(1). Then modify the trigger so that the UPSERT behavior only occurs if this column holds 'Y'. It is also extremely important to always set this value back to 'N' or NULL in the trigger, otherwise it will appear as 'Y' on subsequent INSERTs and it won't work properly.
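
A minimal sketch of that first step (Postgres syntax; SQL Server uses the same statement without the COLUMN keyword):

ALTER TABLE orderlines ADD COLUMN _upsert char(1);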

So our new modified explicit upsert requires a SQL statement like this:

INSERT INTO ORDERLINES
       (_upsert,order_id,sku,qty)
VALUES
       ('Y',1234,'ABC',5)

Our trigger code needs only a very slight modification. Here is the Postgres example; the SQL Server example should be very easy to update as well:

   ...trigger declaration and definition above
   IF new._upsert = 'Y' THEN
      result = (SELECT.....);
      new._upsert = 'N';
   ELSE
      result = 0;
   END IF;
   
   ...rest of trigger is the same
Conclusion

The UPSERT feature gives us simplified code and fewer round trips to the server. Without the UPSERT there are times when the application may have to query the server to find out if a row exists, and then issue either an UPDATE or an INSERT. With the UPSERT, one round trip is eliminated, and the check occurs much more efficiently inside of the server itself.

The downside to UPSERTs is that they can be confusing if some type of explicit control is not put onto them such as the _UPSERT column.

Next week we will see a concept similar to UPSERT used to efficiently create summary tables.

Categories: Development

Why do I have hundreds of child cursors when cursor_sharing is set to similar in 10g

Inside the Oracle Optimizer - Tue, 2009-05-26 12:04

Recently we received several questions regarding an unusual situation where a SQL statement has hundreds of child cursors. This is in fact the expected behavior when

  1. CURSOR_SHARING is set to similar

  2. Bind peeking is in use

  3. And a histogram is present on the column used in the where clause predicate of the query

You must now be wondering why this is the expected behavior. In order to explain, let's step back and begin by explaining what CURSOR_SHARING actually does. CURSOR_SHARING was introduced to help relieve pressure put on the shared pool, specifically the cursor cache, by applications that use literal values rather than bind variables in their SQL statements. It achieves this by replacing the literal values with system generated bind variables, thus reducing the number of (parent) cursors in the cursor cache. However, there is also a caveat or additional requirement on CURSOR_SHARING, which is that the use of system generated binds should not negatively affect the performance of the application. CURSOR_SHARING has three possible values: EXACT, SIMILAR, and FORCE. The following summarizes the impact of each setting with regard to the space used in the cursor cache and the query performance.

  • EXACT (no literal replacement) – Space used in shared pool: worst possible case, as each stmt issued has its own parent cursor. Query performance: best possible case, as each stmt has its own plan generated for it based on the value of the literal present in the stmt.

  • FORCE – Space used in shared pool: best possible case, as there is only one parent and child cursor for each distinct stmt. Query performance: potentially the worst case, as only one plan will be used for each distinct stmt and all occurrences of that stmt will use that plan.

  • SIMILAR without histogram present – Space used in shared pool: best possible case, as there is only one parent and child cursor for each distinct stmt. Query performance: potentially the worst case, as only one plan will be used for each distinct stmt and all occurrences of that stmt will use that plan.

  • SIMILAR with histogram present – Space used in shared pool: not quite as much space used as with EXACT, but close; instead of each stmt having its own parent cursor, each has its own child cursor (which uses less space). Query performance: best possible case, as each stmt has its own plan generated for it based on the value of the literal present in the stmt.




In this case the statement with hundreds of children falls into the last category above, having CURSOR_SHARING set to SIMILAR and a histogram on the column used in the where clause predicate of the statement. The presence of the histogram tells the optimizer that there is a data skew in that column. The data skew means that there could potentially be multiple execution plans for this statement depending on the literal value used. In order to ensure we don't impact the performance of the application, we will peek at the bind variable values and create a new child cursor for each distinct value, thus ensuring each bind variable value will get the most optimal execution plan. It's probably easier to understand this issue by looking at an example. Let's assume there is an employees table with a histogram on the job column and CURSOR_SHARING has been set to similar. The following query is issued:

select * from employees where job = 'Clerk';

The literal value 'Clerk' will be replaced by a system generated bind variable B1 and a parent cursor will be created as

select * from employees where job = :B1;

The optimizer will peek at the bind variable B1 and use the literal value 'Clerk' to determine the execution plan. 'Clerk' is a popular value in the job column, so a full table scan plan is selected and child cursor C1 is created for this plan. The next time the query is executed, the where clause predicate is job='VP', so B1 will be set to 'VP'; this is not a very popular value in the job column, so an index range scan is selected and child cursor C2 is created. The third time the query is executed, the where clause predicate is job='Engineer', so the value for B1 is set to 'Engineer'. Again this is a popular value in the job column, so a full table scan plan is selected and a new child cursor C3 is created. And so on, until we have seen all of the distinct values for the job column. If B1 is set to a previously seen value, say 'Clerk', then we would reuse child cursor C1.
Value for B1    Plan Used           Cursor Number
Clerk           Full Table Scan     C1
VP              Index Range Scan    C2
Engineer        Full Table Scan     C3



As each of these cursors is actually a child cursor and not a new parent cursor, you will still be better off than with CURSOR_SHARING set to EXACT, as a child cursor takes up less space in the cursor cache. A child cursor doesn't contain all of the information stored in a parent cursor; for example, the SQL text is only stored in the parent cursor and not in each child.
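
If you want to observe this on your own system, here is a small sketch that counts the child cursors for the example statement (the LIKE pattern assumes the rewritten statement text shown above):

select sql_id, count(*) as child_cursors
  from v$sql
 where sql_text like 'select * from employees where job = :%'
 group by sql_id;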

Now that you know the explanation for all of the child cursors you are seeing, you need to decide if it is a problem for you and, if so, which aspect affects you most: space used in the shared pool or query performance. If your goal is to guarantee that the application performance is not affected by setting CURSOR_SHARING to SIMILAR, then keep the system settings unchanged. If your goal is to reduce the space used in the shared pool, then you can use one of the following solutions, which have different scopes:

  1. Individual SQL statements - drop the histograms on the columns for each of the affected SQL statements (see the sketch after this list)

  2. System-wide - set CURSOR_SHARING to FORCE; this will ensure only one child cursor per SQL statement (also sketched below)
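
A minimal sketch of both options, reusing the employees/job example and assuming an HR schema:

-- Option 1: remove the histogram on the skewed column by regathering
-- its column statistics with a single bucket
begin
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'HR',
    tabname    => 'EMPLOYEES',
    method_opt => 'FOR COLUMNS JOB SIZE 1');
end;
/

-- Option 2: one child cursor per statement, system-wide
alter system set cursor_sharing = FORCE;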


Both of these solutions require testing to ensure you get the desired effect on your system. Oracle Database 11g provides a much better solution using the Adaptive Cursor Sharing feature. In Oracle Database 11g, all you need to do is set CURSOR_SHARING to FORCE and keep the histograms. With Adaptive Cursor Sharing, the optimizer will create a new child cursor only when its plan is different from any of the plans used by the other child cursors. So in the above example, you would get two child cursors (C1 and C2) instead of three.

Categories: DBA Blogs, Development

Too Many Blogs

Oracle EPM Smart Space - Tue, 2009-04-21 15:54

I thought I would be able to keep up with all these blogs, but I cannot, so I am posting to let you know that I will only be blogging at Essbase Labs. You will find some of the same content, but geared more towards Essbase instead of general EPM...

http://essbaselabs.blogspot.com/

Categories: Development

ruby-plsql new version - Ruby 1.9.1 support and more

Raimonds Simanovskis - Mon, 2009-04-20 16:00

I have released version 0.3.0 of the ruby-plsql gem (a Ruby API for Oracle PL/SQL procedure calls), which includes several new features.

Ruby 1.9.1

Probably the most important is support for Ruby 1.9.1 – now you can use both the Oracle enhanced adapter and the ruby-plsql gem on all three major Ruby platforms:

  • MRI 1.8.6 with ruby-oci8 1.0.x library or gem
  • Ruby/YARV 1.9.1 with ruby-oci8 2.0 library or gem (currently just trunk version of ruby-oci8 2.0 contains the last bug fixes for Ruby 1.9.1)
  • JRuby (so far tested with 1.1.6) with JDBC Oracle driver
ActiveRecord connection

In addition, usage of the ruby-plsql gem in a Ruby on Rails project is simplified. Now you can just include in environment.rb or some initializer file:

plsql.activerecord_class = ActiveRecord::Base

and you don’t need to specify plsql.connection anymore – it will always use the current ActiveRecord connection. This is also useful when ActiveRecord re-establishes its connection to the database, as you don’t need to re-establish the plsql connection in this case.

In addition, if you use several different connections to Oracle databases, then you can also assign to plsql.activerecord_class a different class that inherits from ActiveRecord::Base and has a connection to a different database.

Database time zone

You can also specify in which time zone DATE values are stored in the database:

plsql.default_timezone = :local

or

plsql.default_timezone = :utc

This will affect how DATE values (without a time zone) will be converted to Time or DateTime values (with a time zone); the default selection is the :local timezone. If you have set plsql.activerecord_class then the value will be taken from ActiveRecord::Base.default_timezone.

BLOB support

You can now use the BLOB data type for input and output parameters and function return values.
As a reminder, the NUMBER, VARCHAR2, DATE, TIMESTAMP and CLOB data types are also supported.

Synonym support

Now you can also use private and public database synonyms for functions, procedures or packages.
E.g. if ORA_LOGIN_USER is a public database synonym for the SYS.LOGIN_USER function, then instead of

plsql.sys.login_user

you can use

plsql.ora_login_user
Installation

To install the gem, as always, do

sudo gem install ruby-plsql

or call the correct gem command version for JRuby or Ruby 1.9.1.

The source code of ruby-plsql is located on GitHub, where you can find usage examples in the RSpec tests.

Categories: Development

What news! Oracle Buys Sun!

Pawel Barut - Mon, 2009-04-20 15:36
Written by Paweł Barut
What news: Oracle buys Sun! I'm not the first one to blog about it, but let me express my comments anyway. What does it mean for IT?
1. Oracle software will be installed on almost every PC; Sun Java might be renamed to Oracle Java. It will give wider recognition to the Oracle trademark, also among people who are not so engaged with the IT industry.
2. Oracle becomes a hardware producer. Well, you might already know the HP-Oracle Database Machine (Exadata), but that was a joint venture. Now Oracle will produce both the hardware and the software that runs on it. If everything goes right, the next "Database Machine" will be fully "Oracle".
3. Sun Solaris is the #1 platform for running Oracle, and the second one is Linux. This could mean that the rate of Linux adoption in enterprises will be lower, as many companies will choose the OS and DB from one vendor. I know that Oracle is also offering Linux, but Oracle sales will be promoting Solaris, as those deals will be better for Oracle.
4. Oracle now owns 2, ehh, 3 databases: Oracle, Berkeley XML DB and now MySQL. It could mean that MySQL will get even better, or rather it will mean that Oracle will offer MySQL customers a migration to Oracle. So far Oracle does not have a good record of supporting free editions of databases. We will see...
5. With the Oracle-Sun merger, Oracle will now be shipping the full stack: hardware, OS, database, middleware, ERM, CRM and many others. Oracle is becoming a giant now...

Read also what others have to say:

Keep reading,
Paweł
Categories: Development

The Relational Model

Kenneth Downs - Sun, 2009-04-19 15:50

If you look at any system that was born on and for the internet, like Ruby on Rails, or the PHP language, you find an immense wealth of resources on the internet itself, in endless product web sites, blogs, and forums. But when you look for the same comprehensive information on products or ideas that matured before the web you find it is not there. Relational databases stand out as a product family that matured before the internet, and so their representation in cyberspace is very different from the newer stuff.

The Math Stuff

You may have heard relational theorists argue that the strength of relational databases comes from their solid mathematical foundations. Perhaps you have wondered, what does that mean? And why is that good?

To understand this, we have to begin with Edsger W. Dijkstra, a pioneer in the area of computer science with many accomplishments to his name. Dijkstra believed that the best way to develop a system or program was to begin with a mathematical description of the system, and then refine that system into a working program. When the program completely implemented the math, you were finished.

There is a really huge advantage to this approach. If you start out with a mathematical theory of some sort, which presumably has well known behaviors, then the working program will have all of those behaviors and, put simply, everybody will know what to expect of it.

This approach also reduces time wasted on creative efforts to work out how the program should behave. All those decisions collapse into the simple drive to make the program mimic the math.

A Particular Bit of Math

It so happens that there is a particular body of math known as Relational Theory, which seemed to E. F. Codd to be a very nice fit for storing business information. In his landmark 1970 paper A Relational Model of Data for Large Shared Data Banks (pdf) he sets out to show how these mathematical things called "relations" have behaviors that would be ideal for storing business models.

If we take the Dijkstra philosophy seriously, which is to build systems based on well-known mathematical theories, and we take Codd's claim that "Relations" match well to business record-keeping needs, the obvious conclusion is that we should build some kind of "Relational" datastore, and so we get the Relational Database systems of today.

So there in a nutshell is why relational theorists are so certain of the virtues of the relational model: its behaviors are well-known, and if you can build something that matches them, you will have a very predictable system.

They are Still Talking About It

If you want to know more about the actual mathematics, check out the comp.databases.theory Usenet group, or check out Wikipedia's articles on Relational Algebra and Relational Calculus.

A Practical Downside

The downside to all of this comes whenever the mathematical model describes behaviors that are contrary to human goals or simply irrelevant to them. Examples are not hard to find.

When the web exploded in popularity, many programmers found that their greatest data storage needs centered on documents like web pages rather than collections of atomic values like a customer's discount code or credit terms. They found that relational databases were just not that good at storing documents, which only stands to reason because they were never intended to. In theory the model could be stretched (if the programmer stretched as well), but the programmers could feel in their bones that the fit was not right, and they began searching for something new.

Another example is that of calculated values. If you have a shopping cart, you probably have some field "TOTAL" somewhere that stores the final amount due for the customer. It so happens that such a thing violates relational theory, and there are some very bright theorists who will refuse all requests for assistance in getting that value to work, because you have violated their theory. This is probably the most shameful behavior that relational theorists exhibit - a complete refusal to consider extending the model to better reflect real world needs.

The Irony: There are No Relational Databases

The irony of it all is that when programmers set out to build relational systems, they ran into quite a few practical downsides and a sort of consensus was reached to break the model and create the SQL-based databases we have today. In a truly relational system a table would have quite a few more rules on it than we have in our SQL/TABLE based systems of today. But these rules must have seemed impractical or too difficult to implement, and they were scratched.

There is at least one product out there that claims to be truly relational: Dataphor.

The Weird Optional Part

Probably the grandest irony in the so-called relational database management systems is that any programmer can completely break the relational model by making bad table designs. If your tables are not normalized, you lose many of the benefits of the relational model, and you are completely free to make lots of non-normalized and de-normalized tables.

Conclusion

I have to admit I have always found the strength of relational databases to be their simplicity and power, and not so much their foundations (even if shaky) in mathematical theory. A modern database is very good at storing data in tabular form, and if you know how to design the tables, you've got a great foundation for a solid application. Going further, I've always found relational theorists to be unhelpful in the extreme in the edge cases where overall application needs are not fully met by the underlying mathematical model. The good news is that the products themselves have all of the power we need, so I left the relational theorists to their debates years ago.

Categories: Development

Core ADF 11: Mapping Multiple FocusViewId's with XMLMenuModel

JHeadstart - Thu, 2009-04-16 20:48

In the first post of this Core ADF 11 series, I discussed various options for designing your page structure. This post applies when you use stand-alone .jspx pages. If you make these pages available through a menu (af:navigationPane) using XMLMenuModel, where each menu item navigates to a page (either in the unbounded task flow or by calling a bounded task flow), the menu tab will be automatically highlighted when you set the focusViewId property of the itemNode in the XML Menu Model file to the name of the .jspx page that appears when you click on the menu item. Unfortunately, you can specify only one focusViewId for an itemNode. If your .jspx page has a button or link to navigate to a secondary page, you will lose the menu item highlighting because the focusViewId of the menu itemNode no longer matches the view id of the current (secondary) page. Even worse, if you have nested itemNodes defined in the XML Menu Model, the submenu that was displayed with the primary page will disappear when navigating to the secondary page.

The "poor-man's solution" to this problem is adding nested item nodes to the XML Menu Model for the detail pages. This technique is easy but can only be applied when there is no page-specific submenu that should be displayed with the pages. You simply create child itemNodes in the XML Menu model for the detail pages in the task flow, which will never be displayed as menu items because you do not include an af:navigationPane that displays this menu level.

A more sophisticated solution, that allows for page-specific submenus, is to subclass XMLMenuModel. The subclass looks up a MenuViewIdMapping managed bean that maps all viewIds of the secondary pages in the task flow to the viewId of the primary page that is defined as focusViewId in the XML Menu Model file.

Here are the steps to implement this technique:

1. Create a subclass of org.apache.myfaces.trinidad.model.XMLMenuModel and override method getFocusRowKey as follows:

  public Object getFocusRowKey()
  {
    // first check whether default behavior results in focus of some menu item 
    Object focusRowKey = super.getFocusRowKey();          
    if (focusRowKey!=null)
    {
      return focusRowKey;
    }
    Map<String, List<Object>> focusPathMap = getViewIdFocusPathMap();
    // check whether there is a menu mapping for the current viewId
    Map viewIdMapping = getViewIdMapping();
    String viewId = FacesContext.getCurrentInstance().getViewRoot().getViewId();
    if (viewIdMapping!=null && viewIdMapping.get(viewId)!=null)
    {
      String mappedViewId = (String) viewIdMapping.get(viewId);
      List<Object> focusPath = focusPathMap.get(mappedViewId);
      return focusPath != null? focusPath.get(0): null;                  
    }
    return null;
  }

Add the following helper method:

  public Map getViewIdMapping()
  {
    ExpressionFactory ef = 
      FacesContext.getCurrentInstance().getApplication().getExpressionFactory();
    ELContext context = FacesContext.getCurrentInstance().getELContext();
    return (Map) ef.createValueExpression(context
                                          ,"#{pageFlowScope.MenuViewIdMapping}"
                                          , Map.class).getValue(context);
  }

2. Change the menu bean property managed-bean-class to reference your subclass:

  <managed-bean>
    <description>Menu Model Managed Bean</description>
    <managed-bean-name>root_menu</managed-bean-name>
    <managed-bean-class>view.MyXMLMenuModel</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
      <property-name>createHiddenNodes</property-name>
      <value>false</value>
    </managed-property>
    <managed-property>
      <property-name>source</property-name>
      <property-class>java.lang.String</property-class>
      <value>/WEB-INF/root_menu.xml</value>
    </managed-property>
  </managed-bean>

3. In each taskflow that can be launched from a menu item, add a managed bean named "MenuViewIdMapping" that defines the mapping between secondary pages and the page defined as focusViewId in the XML Menu Model. Here is an example:

  <managed-bean>
    <managed-bean-name>MenuViewIdMapping</managed-bean-name>
    <managed-bean-class>java.util.HashMap</managed-bean-class>
    <managed-bean-scope>pageFlow</managed-bean-scope>
       <map-entries>
         <map-entry>
           <key>/task-flow-definition1/BoundedTaskFlow1DetailPage</key>
           <value>/task-flow-definition1/BoundedTaskFlow1Page</value>
         </map-entry>  
         <map-entry>
           <key>/task-flow-definition1/BoundedTaskFlow1AnotherDetailPage</key>
           <value>/task-flow-definition1/BoundedTaskFlow1Page</value>
         </map-entry>  
       </map-entries>
  </managed-bean>

That's it. You can download a sample workspace that illustrates the solution here. In a later post, we will discuss the solution for the "one-page application" design with dynamic regions. In that scenario the correct menu item should be highlighted based on the currently selected dynamic region.

Categories: Development

How to install Oracle Database 10g on Mac OS X Intel

Raimonds Simanovskis - Sat, 2009-04-11 16:00

UPDATE: I have created instructions on how to install Oracle 10g on Mac OS X Snow Leopard

A couple of days ago Oracle developers on Mac OS X received an Easter present – Oracle Database 10g was finally released for the Mac OS X 10.5 Intel platform. This download includes an installation guide for Mac OS X, but like any Oracle installation guide it is quite long, contains a lot of information that is unnecessary for a first-time install, and leaves out some information that is necessary.

Therefore I am posting here my shorter tutorial on how to install it. This tutorial is targeted at developers who want to install a local Oracle database for development needs on their MacBook, iMac or Mac Pro.

Initial preparation

If you are a developer then I suppose you already have the Xcode tools installed, which are also required for the Oracle installation. I tried these steps on the latest Mac OS X version, 10.5.6.

Then you need to create the oracle user as well as increase the default kernel parameters. Open Terminal and switch to the root user:

sudo -i

Create the oinstall group and the oracle user (I used group and user number 600 to ensure that they do not collide with existing groups and users):

dscl . -create /groups/oinstall
dscl . -append /groups/oinstall gid 600
dscl . -append /groups/oinstall passwd "*"
dscl . -create /users/oracle
dscl . -append /users/oracle uid 600
dscl . -append /users/oracle gid 600
dscl . -append /users/oracle shell /bin/bash
dscl . -append /users/oracle home /Users/oracle
dscl . -append /users/oracle realname "Oracle software owner"
mkdir /Users/oracle
chown oracle:oinstall /Users/oracle

Change password for oracle user:

passwd oracle

Change default kernel parameters:

vi /etc/sysctl.conf

and enter values recommended by Oracle:

kern.sysv.semmsl=87381
kern.sysv.semmns=87381
kern.sysv.semmni=87381
kern.sysv.semmnu=87381
kern.sysv.semume=10
kern.sysv.shmall=2097152
kern.sysv.shmmax=2197815296
kern.sysv.shmmni=4096
kern.maxfiles=65536
kern.maxfilesperproc=65536
net.inet.ip.portrange.first=1024
net.inet.ip.portrange.last=65000
kern.corefile=core
kern.maxproc=2068
kern.maxprocperuid=2068

After this, reboot your computer so that the new kernel parameters take effect. After the reboot, open Terminal again and log in as the oracle user:

su - oracle

Set shell settings in .bash_profile

vi .bash_profile

and enter

export DISPLAY=:0.0
export ORACLE_BASE=$HOME
umask 022
ulimit -Hn 65536
ulimit -Sn 65536

As you can see, I prefer to install all Oracle related files under the home directory of the oracle user, therefore I am setting ORACLE_BASE to the home directory. Also include the ulimit settings – I forgot to do this initially and got strange TNS service errors because of that.

Now execute this script so that these settings are applied to current shell:

. ./.bash_profile

Now download the db.zip installation archive, place it somewhere, and unzip it:

mkdir Install
cd Install
# download db.zip to this directory
unzip db.zip
cd db/Disk1

Now you are ready to start installation:

./runInstaller
Installation

In installation wizard I selected the following options:

  • Advanced Installation – so that I can change some default options
  • Standard Edition – as I don’t need additional features of Enterprise Edition
  • Create Database / General Purpose
  • Global database name: orcl, SID: orcl
  • Character set: UTF-8 AL32UTF8
  • Create database with sample schemas
  • Selected “Use the same password for all the accounts” – do not specify default “manager” password as it will not be allowed :)
  • Password Management – selected this to unlock necessary sample accounts (e.g. HR schema account that I use as default test schema)

At the end of the installation you will be instructed to run one shell script as root.
Hopefully the installation will complete successfully.

Additional oracle user settings

If you will use the oracle user later, then add the following lines to the .bash_profile of the oracle user:

export ORACLE_HOME=/Users/oracle/oracle/product/10.2.0/db_1
export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=ORCL
PATH=$PATH:$ORACLE_HOME/bin

After this, log in again as the oracle user and verify the listener status:

lsnrctl status

and if it is down then start it with

lsnrctl start

and verify that you can connect to the Oracle database with the sample user (which I unlocked during installation)

sqlplus hr/hr@orcl

If it fails then do some investigation :)

Change listener to listen on localhost

As I need this Oracle database just as a local development database on my computer, I want to change the listener so that it listens only on localhost port 1521:

vi $ORACLE_HOME/network/admin/listener.ora

and change the contents of the file to:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = orcl)
      (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )

Then also change ORCL alias definition in $ORACLE_HOME/network/admin/tnsnames.ora to:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

After this change, restart the listener and try to connect with sqlplus to verify that the changes were successful.
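
For example (assuming the environment of the oracle user from above is loaded):

lsnrctl stop
lsnrctl start
sqlplus hr/hr@orcl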

Automatic startup of Oracle database

If you want the Oracle database to start automatically when your computer boots, you need to create the following startup script. Start Terminal and switch to root.

First, edit /etc/oratab and change N to Y at the end of the line for the orcl database – the dbstart utility uses this flag to find out which databases should be started automatically.
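
With the ORACLE_HOME used in this tutorial, the line in /etc/oratab should end up looking similar to this:

orcl:/Users/oracle/oracle/product/10.2.0/db_1:Y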

Then create startup script for Oracle database:

mkdir /Library/StartupItems/Oracle
cd /Library/StartupItems/Oracle
vi Oracle

and enter the following:

#!/bin/sh

# Suppress the annoying "$1: unbound variable" error when no option
# was given
if [ -z "$1" ] ; then
  echo "Usage: $0 [start|stop|restart]"
  exit 1
fi

# source the common startup script
. /etc/rc.common

# Change the value of ORACLE_HOME to specify the correct Oracle home
# directory for the installation
ORACLE_HOME=/Users/oracle/oracle/product/10.2.0/db_1
DYLD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_HOME DYLD_LIBRARY_PATH

# change the value of ORACLE to the login name of the
# oracle owner at your site
ORACLE=oracle

PATH=$PATH:$ORACLE_HOME/bin

# Set shell limits for the Oracle Database
ulimit -Hu 2068
ulimit -Su 2068
ulimit -Hn 65536
ulimit -Sn 65536

StartService()
{
  ConsoleMessage "Starting Oracle Databases"
  su $ORACLE -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
}

StopService()
{
  ConsoleMessage "Stopping Oracle Databases"
  su $ORACLE -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
}

RestartService()
{
  StopService
  StartService
}

RunService "$1"

and then make this script executable:

chmod a+x Oracle

and in addition create a properties file:

vi StartupParameters.plist

with the following contents:

{
  Description     = "Oracle Database Startup";
  Provides        = ("Oracle Database");
  Requires        = ("Disks");
  OrderPreference = "None";
}

Now you can verify that these scripts are working. Open a new terminal and try

sudo /Library/StartupItems/Oracle/Oracle stop

to stop the database and

sudo /Library/StartupItems/Oracle/Oracle start

to start the database again. Later you can also reboot your computer to verify that the Oracle database starts automatically.
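
One quick way to confirm after a reboot that the instance really came up is to look for its background processes (this assumes the usual ora_pmon_<SID> process naming):

ps aux | grep [o]ra_pmon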

Hide oracle user from login window

After the reboot you probably noticed that the oracle user now appears in the initial login window. To get rid of it, execute this from Terminal:

sudo defaults write /Library/Preferences/com.apple.loginwindow HiddenUsersList -array-add oracle
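
You can check the current value of this preference with:

sudo defaults read /Library/Preferences/com.apple.loginwindow HiddenUsersList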

What next?

Now that you have the Oracle database installed, you will need some development tools to access the database. Here are some links:

Please comment if you find any issues with the Oracle Database 10g installation when using this tutorial.

Categories: Development

Our first official book review is on its way...

Java 2 Go! - Fri, 2009-04-10 18:42
by Eduardo Rodrigues Great news! We were contacted (surprisingly) by Packt Publishing a few days ago and asked if we would be interested in reading one of their new books and then publishing an...

This is a summary only. Please visit the blog for the full content and more.
Categories: Development

Caution: FOR loop and CONTINUE in Oracle 11g

Pawel Barut - Mon, 2009-04-06 04:59
Written by Paweł Barut
A friend of mine showed me an interesting issue with the CONTINUE statement. CONTINUE is a new statement in Oracle 11g; it allows you to skip processing of the current iteration of a loop and go to the beginning of the next iteration. So here is a working sample:
SQL> set serveroutput on
SQL> begin
  2    dbms_output.put_line('=== START');
  3    for i in 1..15 loop
  4      dbms_output.put_line('before:'||i);
  5      if mod(i, 5) = 0 then
  6        dbms_output.put_line('CONTINUE');
  7        continue;
  8      end if;
  9      dbms_output.put_line('after:'||i);
 10    end loop;
 11    dbms_output.put_line('=== STOP');
 12  end;
 13  /
=== START
before:1
after:1
before:2
after:2
before:3
after:3
before:4
after:4
before:5
CONTINUE
before:6
after:6
before:7
after:7
before:8
after:8
before:9
after:9
before:10
CONTINUE
before:11
after:11
before:12
after:12
before:13
after:13
before:14
after:14
before:15
CONTINUE
=== STOP

But when we use CONTINUE in a loop that is based on an implicit cursor, it gives wrong results:
SQL> begin
  2    dbms_output.put_line('=== START');
  3    for r in (select level num from dual connect by level <= 115) loop
  4      dbms_output.put_line('before:'||r.num);
  5      if mod(r.num, 5) = 0 then
  6        dbms_output.put_line('CONTINUE');
  7        continue;
  8      end if;
  9      dbms_output.put_line('after:'||r.num);
 10    end loop;
 11    dbms_output.put_line('=== STOP');
 12  end;
 13  /
=== START
before:1
after:1
before:2
after:2
before:3
after:3
before:4
after:4
before:5
CONTINUE
before:101
after:101
before:102
after:102
before:103
after:103
before:104
after:104
before:105
CONTINUE
=== STOP

In that case the CONTINUE statement jumps to iteration 101 instead of going to iteration 6 as expected!
To make a long story short, I experimented with a few settings and found that with PLSQL_OPTIMIZE_LEVEL set to 1 this block runs as expected:
SQL> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 1;
SQL> begin
  2    dbms_output.put_line('=== START');
  3     for r in (select level num from dual connect by level <= 115) loop
  4       dbms_output.put_line('before:'||r.num);
  5       if mod(r.num, 5) = 0 then
  6         dbms_output.put_line('CONTINUE');
  7         continue;
  8       end if;
  9       dbms_output.put_line('after:'||r.num);
 10     end loop;
 11     dbms_output.put_line('=== STOP');
 12  end;
 13  /
=== START
before:1
after:1
before:2
after:2
before:3
after:3
before:4
after:4
before:5
CONTINUE
before:6
after:6
[... many lines cutted out from output...]
after:99
before:100
CONTINUE
before:101
after:101
before:102
after:102
before:103
after:103
before:104
after:104
before:105
CONTINUE
before:106
[... many lines cutted out from output...]
after:114
before:115
CONTINUE
=== STOP

To me it looks like a bug in the PL/SQL optimizer at level 2. At that level PL/SQL does a bulk collect of 100 rows at a time when a cursor FOR loop is used. It seems that CONTINUE in that case causes the next portion of records to be fetched instead of taking the next record from the buffer. I have reported the bug to Oracle Support, but until it gets fixed, be aware of possible wrong results.
This was tested on Oracle 11.1.0.6 and 11.1.0.7 Enterprise Edition.
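
Until the fix is available, the safest workaround is to avoid CONTINUE in cursor FOR loops altogether; the same logic can usually be expressed by inverting the condition. A minimal rewrite of the block above (logically equivalent, and not affected by the optimizer bug since no CONTINUE is involved):

begin
  dbms_output.put_line('=== START');
  for r in (select level num from dual connect by level <= 115) loop
    dbms_output.put_line('before:'||r.num);
    if mod(r.num, 5) = 0 then
      dbms_output.put_line('CONTINUE');
    else
      dbms_output.put_line('after:'||r.num);
    end if;
  end loop;
  dbms_output.put_line('=== STOP');
end;
/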

Keep reading,
Paweł

Categories: Development

Core ADF 11: Abandoning a Bounded Taskflow with Stand-Alone Pages

JHeadstart - Thu, 2009-04-02 22:14

In the first post of this Core ADF 11 series, I discussed various options for designing your page structure. This post applies when you choose to use bounded task flows with stand-alone .jspx pages. If you make these bounded task flows available through a menu where each menu item calls a bounded task flow, you will hit the "cannot abandon bounded taskflow" issue: when you start the home page of your application (located in the unbounded taskflow) and then click on a menu item for the first time, you start the bounded taskflow associated with the menu item. Now, the menu is still visible, and the end user might decide to click on another menu item before he has correctly finished the bounded task flow. In other words, he wants to abandon the task flow he started through the first menu item. However, when the user clicks on another menu item, nothing will happen! This is because the control flow rules that start each bounded taskflow are defined in the unbounded taskflow (in adfc-config.xml). They are not available in the context of the first bounded taskflow started by the user.

    The "poor man's" solution to this problem is hiding or disabling all menu items and other global actions that will launch a bounded task flow (like "Help" and "Preferences") until the user executes a bounded task flow return action and returns to a page in the unbounded task flow. Not a very user friendly solution....

Another technique is to use a bounded taskflow template for all your bounded task flows. In the template you duplicate all control flow rules and all task-flow call activities that are defined in the unbounded task flow. While this might seem to work at first sight, this is not a viable option. What happens with this approach is that each time you click on another menu item, a new bounded sub task flow is started in the context of the current task flow: in other words, you keep on nesting bounded task flows. This is highly undesirable and will cause problems sooner or later with memory resources and/or transaction management. Since the task flows are nested, taskflow resources are never released, and transactions are never rolled back. Don't do this!

The only proper solution is to execute a return activity to quit the current task flow before starting the new bounded task flow. In this post, I will describe how you can implement this in a generic way which hides the complexity for your developers. The outline of the solution is as follows:

• The action property of all menu items as defined in the XML Menu Model is prefixed with "abandon:" (this is all your developers need to know).
• A custom navigation handler is used that checks whether the navigation outcome starts with "abandon:". If so, and the current task flow context is a bounded task flow, then the navigation outcome as specified after the "abandon:" prefix is stored in a request attribute named "currentMenuItem", and the navigation handler is called again with the CallMenuItem outcome. If we are not in the context of a bounded taskflow, we call the super navigation handler directly with the navigation outcome as specified after the "abandon:" prefix.
• The CallMenuItem outcome navigates to a generic task flow return activity named "CallMenuItem" which in turn navigates to the CallMenuItem activity in the unbounded taskflow.
• The CallMenuItem activity in the unbounded taskflow is a "dynamic router": a Java class with a getOutcome method that returns the value of the EL expression passed in as argument. In our case, the router returns the value of the currentMenuItem request attribute, which causes navigation to the end target: either some page in the unbounded task flow, or another bounded task flow call activity in the unbounded task flow.

Here are the detailed steps to implement this:

1. Prefix the value of the action property in all your menu items in the XML Menu Model with "abandon:"
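
For example, an itemNode in the menu model might look like this (the id, label, and outcome name here are purely illustrative):

<itemNode id="in_employees" label="Employees"
          action="abandon:StartEmployeesTaskFlow"
          focusViewId="/EmployeesPage"/>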

2. Create a custom navigation handler with the following code:

public class MyNavigationHandler
  extends NavigationHandlerImpl
{
  public static final String ABANDON_PREFIX = "abandon:";
  private static final String CURRENT_MENU_ITEM = "currentMenuItem";
  private static final String CALL_MENU_ITEM = "CallMenuItem";

  public MyNavigationHandler(NavigationHandler navHandler)
  {
    super(navHandler);
  }

  public void handleNavigation(FacesContext facesContext, String action,
                               String outcome)
  {
    if (outcome != null && outcome.startsWith(ABANDON_PREFIX))
    {
      abandonTaskFlowIfNeeded(facesContext, action, outcome);
    }
    else
    {
      super.handleNavigation(facesContext, action, outcome);
    }
  }

  public void abandonTaskFlowIfNeeded(FacesContext facesContext,
                                      String action, String outcome)
  {
    String strippedOutcome =
      outcome.substring(MyNavigationHandler.ABANDON_PREFIX.length());
    TaskFlowContext tfc =
      ControllerContext.getInstance().getCurrentViewPort().getTaskFlowContext();
    if (tfc.getTaskFlowId() == null)
    {
      // we are not in a bounded task flow, do normal navigation
      handleNavigation(facesContext, action, strippedOutcome);
    }
    else
    {
      String newOutcome = strippedOutcome;
      Map requestMap =
        FacesContext.getCurrentInstance().getExternalContext().getRequestMap();
      requestMap.put(CURRENT_MENU_ITEM, newOutcome);
      handleNavigation(facesContext, null, CALL_MENU_ITEM);
    }
  }
}

3. Register your custom navigation handler in faces-config.xml. Go to the Overview tab, click on Application, and set your class name in the navigation handler property.

4. Create a bounded taskflow template used by all your bounded task flows, and add the following code to it:

<task-flow-return id="CallMenuItem">
  <outcome>
    <name>CallMenuItem</name>
  </outcome>
</task-flow-return>
<control-flow-rule>
  <from-activity-id>*</from-activity-id>
  <control-flow-case>
    <from-outcome>CallMenuItem</from-outcome>
    <to-activity-id>CallMenuItem</to-activity-id>
  </control-flow-case>
</control-flow-rule>

When using declarative transaction management, you can specify that a rollback should be performed as part of the return activity, as sketched below.
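
A sketch of what the return activity could then look like – this assumes the declarative <rollback/> option of the task flow return activity, so verify the exact element and its placement against the adfc schema of your JDeveloper version:

<task-flow-return id="CallMenuItem">
  <outcome>
    <name>CallMenuItem</name>
    <rollback/>
  </outcome>
</task-flow-return>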

5. Create a Java class named DynamicRouter with the following code:

import java.io.Serializable;

public class DynamicRouter implements Serializable
{
  public String getOutcome(String outcome)
  {
    return outcome;
  }
}

6. Add the following code to adfc-config.xml (the unbounded task flow):

<method-call id="CallMenuItem">
  <method>#{dynamicRouter.getOutcome}</method>
  <parameter>
    <class>java.lang.String</class>
    <value>#{requestScope.currentMenuItem}</value>
  </parameter>
  <outcome>
    <to-string/>
  </outcome>
</method-call>

<control-flow-rule>
  <from-activity-id>*</from-activity-id>
  <control-flow-case>
    <from-outcome>CallMenuItem</from-outcome>
    <to-activity-id>CallMenuItem</to-activity-id>
  </control-flow-case>
  ...
</control-flow-rule>

<managed-bean>
  <managed-bean-name>dynamicRouter</managed-bean-name>
  <managed-bean-class>view.DynamicRouter</managed-bean-class>
  <managed-bean-scope>request</managed-bean-scope>
</managed-bean>

That's it. You can download a sample workspace that illustrates the solution here. In a later post, we will discuss how to add an alert to warn the end user about unsaved changes when he abandons a task flow.

Categories: Development
