Feed aggregator

New release of Lucene Domain Index based on Lucene 2.9.0

Marcelo Ochoa - Wed, 2009-09-30 06:47
A new binary distribution of Lucene Domain Index for Oracle 10g/11g has been released.
Lucene Domain Index is an integration of the Lucene project running inside the Oracle JVM, exposed at the SQL layer as a new index type.
This new version uses the latest Lucene 2.9.0 core libraries and picks up some of its API changes.
Here is the complete list of changes:
  • Tested with Oracle 11gR2, 11gR1 and 10.2 databases.
  • DefaultUserDataStore now does SAX parsing to extract text nodes and attributes from an XMLType value.
  • A SimpleLRUCache is used to map rowids to their associated Lucene doc ids, reducing memory consumption when querying very big tables. A new parameter, CachedRowIdSize (default 10000), controls the size of the LRU cache.
  • The Lucene Domain Index core was updated to use TopFieldCollector and to avoid computation time when lscore() is not used.
  • Two new parameters have been added; NormalizeScore controls whether the maximum score is tracked when querying. Both parameters are a consequence of the new Lucene Collector API and boost query performance.
  • A table alias L$MT is defined for the master table associated with the index, for use in complex queries joining columns from the master table with columns from dependent tables.
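The CachedRowIdSize bullet describes a bounded rowid-to-doc-id cache. The real cache lives in the index's Java code; purely as an illustration of the idea, a minimal LRU map with the same behaviour can be sketched in a few lines of Ruby (class and method names here are mine, not the project's):

```ruby
# Minimal bounded LRU map: at most `capacity` rowid -> doc id entries,
# evicting the least recently used entry when full. Ruby hashes keep
# insertion order, so the first key is always the oldest.
class TinyLRUCache
  def initialize(capacity)
    @capacity = capacity
    @store = {}
  end

  def put(rowid, doc_id)
    @store.delete(rowid)                        # re-insert moves key to the back
    @store[rowid] = doc_id
    @store.delete(@store.keys.first) if @store.size > @capacity
  end

  def get(rowid)
    return nil unless @store.key?(rowid)
    @store[rowid] = @store.delete(rowid)        # touch: mark as most recent
  end

  def size
    @store.size
  end
end
```

Bounded at CachedRowIdSize (10000 by default), a cache like this keeps memory flat no matter how many rows the master table holds.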
Full documentation is at Google Doc.
Download binary version for Oracle 10g and 11g.
One of the biggest changes introduced in the Lucene core libraries is the new Collector API, which adds optimizations for queries that do not need to compute the score, track the maximum score, or preserve Lucene document ID order.
These changes are reflected in Lucene Domain Index through the lscore() ancillary operator. For example, given a table:
create table emails (
emailFrom VARCHAR2(256),
emailTo VARCHAR2(256),
subject VARCHAR2(4000),
emailDate DATE,
bodyText CLOB)
and an index created as:
create index emailbodyText on emails(bodyText) indextype is lucene.LuceneIndex
parameters('Analyzer:org.apache.lucene.analysis.StopAnalyzer;ExtraCols:emailDate "emailDate",subject "subject",emailFrom "emailFrom",emailTo "emailTo"');
-- required to Sort by subject
alter index emailbodyText parameters('FormatCols:subject(NOT_ANALYZED),emailFrom(NOT_ANALYZED),emailTo(NOT_ANALYZED)');
-- do not track max score
alter index emailbodyText parameters('NormalizeScore:false');
These queries illustrate when the score is computed and when it is not.
SELECT /*+ DOMAIN_INDEX_SORT */ lscore(1) sc,subject
FROM emails where emailfrom like '%@gmail.com' and
lcontains(bodytext,'subject:lucene injection',1)>0
order by lscore(1) DESC;
  • The Lucene score is computed but the maximum score is not tracked; the result is returned in descending relevance order, Lucene's default ordering, with no extra effort.
FROM emails
where lcontains(bodytext,'subject:lucene injection','subject:ASC',1)>0;
  • The Lucene score is not computed, and consequently the maximum score is not tracked.
Note that when querying with the optimizer hint DOMAIN_INDEX_SORT and lcontains(..,'subject:ASC',..), the score value is not relevant; the result is returned ordered by the subject column ascending.
A similar result can be obtained with a SQL ORDER BY, but then the RDBMS first collects all the rows matching the lcontains() operator and sorts them afterwards; in this simple test the difference is an optimizer cost of 3 versus 2.
The NormalizeScore parameter defaults to true for backward compatibility, so applications that assume a score in the range 0..1 will run unchanged; if you want faster response times by avoiding the max-score computation, you can change it with an ALTER INDEX parameters DDL command.
To see what the NormalizeScore parameter really means, look at this example using the Wikipedia English dump:
select /*+ FIRST_ROWS DOMAIN_INDEX_SORT */ lscore(1),
extractValue(object_value,'/page/revision/timestamp') "revisionDate"
from pages where lcontains(object_value,'rownum:[1 TO 2] AND sport','revisionDate:ASC',1)>0;
with NormalizeScore:false returns:
1.56775963306427001953125 SCCA 25-FEB-02 PM +00:00
1.79172527790069580078125 Sports utility vehicle 04-APR-02 PM +00:00
with NormalizeScore:true returns:
0.875 SCCA 25-FEB-02 PM +00:00
1 Sports utility vehicle 04-APR-02 PM +00:00
Both results are logically correct, because we asked for results ordered by revisionDate:ASC, but the computed scores differ.
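The two result sets differ only by division by the tracked maximum score. A quick Ruby check with the scores printed above (truncated to a few decimals) confirms it:

```ruby
raw = {
  "SCCA"                   => 1.56775963306427,
  "Sports utility vehicle" => 1.79172527790069,
}
max = raw.values.max
normalized = raw.transform_values { |score| score / max }
# The top hit normalizes to 1.0 and SCCA to roughly 0.875, matching
# the NormalizeScore:true output above.
```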
Well, I will be at Oracle Open World 09. If anybody wants to know more about this kind of internal implementation, I will be talking at the Oracle Unconference, and of course we can meet at the OTN Lounge. See you there, Marcelo.

"conn / as sysdba" on Windows

Wijaya Kusumo - Tue, 2009-09-29 04:33
We can do the following when we want to connect to an Oracle db using "conn / as sysdba" in a *nix environment:

% export ORACLE_SID=ORCL
% sqlplus /nolog
% conn / as sysdba

Not a big deal. However, I got the following doing the same on Windows XP:

C:\>set ORACLE_SID=ORCL
C:\>sqlplus /nolog

SQL*Plus: Release - Production on Tue Sep 29 16:08:51 2009
Copyright (c) 1982, 2007, Oracle. All Rights

Module name for logon trigger in 11g R2 AWR report

Virag Sharma - Mon, 2009-09-28 00:53
Today while working on production performance tuning, we came across a difference in the AWR reports of 11g R1 and 11g R2.

In an Oracle 11g R1 AWR report, the logon trigger module name usually comes through like
sqlplus / perl / Mid Tier@server_name_from_it_login

But in an 11g R2 AWR report, the logon trigger module name usually comes through like oraagent.bin@DB_SERVER_WHERE_IT_RUN

The new behavior looks more logical, because it is the DB server that runs the logon trigger code,
not the user's machine.

# Excerpt from 11g R1 AWR report
# In below given example app678 is server name from where
# user logged in to database using sqlplus

Module: sqlplus@app678utl (TNS V1-V3)

#Excerpt from 11g R2 AWR report
#In below given example apps001 is database server

Module: oraagent.bin@apps001 (TNS V1-V3)

Categories: DBA Blogs

New features in ActiveRecord Oracle enhanced adapter version 1.2.2

Raimonds Simanovskis - Sun, 2009-09-27 16:00

During the last months many new features have been implemented for the ActiveRecord Oracle enhanced adapter, and they are now included in Oracle enhanced adapter version 1.2.2. You can find the full list in the change history file; here I will describe the main ones.


The Oracle enhanced adapter now has improved RDoc documentation for all public methods, so you can read the RDoc of the installed gem or view the published documentation on-line.

Schema definition

There are many new features in schema definition methods that you can use in migration files:

  • When you use add_index, ActiveRecord automatically generates an index name using the format index_table_name_on_column1_and_column2_…, which previously could cause Oracle errors, as Oracle identifiers are limited to 30 characters. Now default index names are automatically shortened to 30 characters or less (and of course you can always use the :name option to specify a shorter name yourself).
  • The adapter now ignores the :limit option for :text and :binary columns (as in Oracle you cannot specify a limit for the CLOB and BLOB data types). Previously this could cause errors if you tried to migrate a Rails application from e.g. MySQL, where :text and :binary columns can have a :limit in the schema definition.
  • If you define a :string column with the :limit option, it will define a VARCHAR2 column with its size in characters and not in bytes (this makes a difference if you use UTF-8 with a language where one character may be stored as several bytes). This is the behavior ActiveRecord expects: you define the maximum string size in UTF-8 characters.
  • You can now use add_foreign_key and remove_foreign_key to define foreign key constraints in migrations (see the RDoc documentation for details). The syntax and some of the implementation for foreign key definition were taken from the foreigner Rails plugin, along with some ideas from the active_record_oracle_extensions plugin.
  • add_foreign_key definitions will also be extracted into schema.rb by the rake db:schema:dump task. Therefore they will also be present in the test database when you recreate it from the schema.rb file.
  • Foreign keys are also safe for loading of fixtures (in case you are still using them instead of factories :)). The disable_referential_integrity method is implemented for the Oracle enhanced adapter; it is called by ActiveRecord before loading fixtures and disables all currently active foreign key constraints while the fixtures load.
  • You can use add_synonym and remove_synonym to define database synonyms to other tables, views or sequences. add_synonym definitions will also be extracted into the schema.rb file.
  • It is possible to create tables with a primary key trigger. There is no difference in how you create new records in such a table using ActiveRecord, but if you also need to do direct INSERTs into the table it becomes easier, as you can omit the primary key from the INSERT statement and the trigger will populate it automatically from the corresponding sequence.
  • The ActiveRecord schema dumper is patched to work correctly when default table prefixes or suffixes are used: they are now removed from the schema.rb dump to avoid duplicate prefixes and suffixes when recreating the schema from schema.rb.
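The 30-character identifier limit mentioned in the add_index bullet above is easy to hit with Rails' generated index names. The adapter uses its own shortening scheme; purely as a sketch of the idea, a truncate-plus-digest approach keeps long names distinct and under the limit:

```ruby
require 'digest'

ORACLE_MAX_IDENTIFIER = 30

# Illustrative shortening strategy (not the adapter's actual algorithm):
# keep a readable prefix and append a short digest so that distinct
# long names stay distinct after truncation.
def shorten_index_name(name)
  return name if name.length <= ORACLE_MAX_IDENTIFIER

  digest = Digest::SHA1.hexdigest(name)[0, 6]
  "#{name[0, ORACLE_MAX_IDENTIFIER - 7]}_#{digest}"   # 23 + 1 + 6 = 30 chars
end
```

Short names pass through untouched, so existing migrations keep their index names.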
Legacy schema support

Some features which can support “weird” legacy database schemas:

  • If you are using ActiveRecord with a legacy schema that has tables with triggers that populate primary key columns (and that does not follow the default Rails and Oracle enhanced adapter conventions), you can use set_sequence_name :autogenerated in the class definition to tell the adapter to omit the primary key value from INSERTs.
  • You can also use ActiveRecord with tables that you access over a database link. To do that you need to define a local synonym to the remote table (and also to the remote sequence, if you want to insert records as well) and then use the local synonym in set_table_name in the class definition. Previously the adapter could not get remote table columns; now it can get table columns over a database link as well.
    But you still cannot specify a remote table (like “table_name@db_link”) directly in set_table_name, as table_name is used as a column prefix in generated SQL statements, where “@db_link” would not be valid syntax.
    And when you define local synonyms, please use the new add_synonym feature :)
Connection options
  • The cursor_sharing option default value has changed from “similar” to “force” – please read the explanation in the discussion group post about what it is and why the new default is the recommended choice.
  • When using JRuby and JDBC you can set the TNS_ADMIN environment variable to the tnsnames.ora directory and then use a TNS database alias in the database.yml file (specify just the database: option and remove the host: option). This might be useful for more complex TNS connection definitions, e.g. a connection to a load-balanced Oracle RAC.
  • The adapter will no longer raise an error if it cannot locate the ojdbc14.jar file. So either put it in $JRUBY_HOME/lib or ensure that it will be loaded by the application server. I would love to hear feedback from people who are using this adapter with JRuby, to find out if this behaves well now :)
  • Now you can get PL/SQL debugging information into your ActiveRecord log file. Use dbms_output.put_line in your PL/SQL procedures and functions (that are called from ActiveRecord models), and in your ActiveRecord model use connection.enable_dbms_output and connection.disable_dbms_output around your database calls to get the dbms_output logging information into the ActiveRecord log file. Please use this only in the development environment with a debug log level, as in production it would add too much overhead to each database call. This feature also requires that you install the ruby-plsql gem.

As you can see, this is probably the largest “point” release that I have had :) Thanks also to the other contributors whose patches were included in this release.

As always you can install Oracle enhanced adapter on any Ruby platform (Ruby 1.8.6 / 1.8.7 or Ruby 1.9.1 or JRuby) with

gem install activerecord-oracle_enhanced-adapter

If you have any questions please use discussion group or post comments here.

Categories: Development

Getting Started with Oracle Pro*C on Linux (Instant Client Zip Files)

Mark A. Williams - Sat, 2009-09-26 13:57

The Oracle Pro*C Precompiler is a popular choice for developing Oracle applications in the C/C++ languages. The primary advantage of using the precompiler is that it allows you to embed SQL (and PL/SQL) directly into your application. Used in combination with Oracle Instant Client packages you can easily begin to develop applications using the Pro*C Precompiler capabilities. For information on getting started using Pro*C on Windows, please see my previous post entitled "Oracle Pro*C on Windows with Express Edition Products".

Many of the components used in the previous post are also used here and the steps are similar (if not the same). For this post, all components are installed on a single host (oel02) running Oracle Enterprise Linux. The components used are (valid at time of posting):

  • Oracle Database 10g Express Edition (available here)
  • Oracle Enterprise Linux (available here)
  • Oracle Instant Client Packages for Linux x86 (available here)
  •     Instant Client Package - Basic Lite
  •     Instant Client Package - SDK
  •     Instant Client Package - Precompiler
  •     Instant Client Package - SQL*Plus

NOTE: The precompiler uses the "standard" Oracle Technology Network license rather than the Instant Client license. Be sure to review the license!

You can, of course, allow for some variation in the above; however, you may then need to adjust the steps that follow. For example, the database can be on another host and/or platform. To keep things as simple as possible I have elected to use a single host for everything. For the sample code to work unaltered you should have access to the HR sample schema user (included in Express Edition). If you require character set or language support not provided by the "Basic Lite" Instant Client package, use the "Basic" package instead. In addition, other versions of the components should work as well (though I have not tested them).

Installing the Instant Client Packages

I downloaded the following Instant Client packages using the above provided link to my home directory on oel02:

  • instantclient-basiclite-linux32-
  • instantclient-precomp-linux32-
  • instantclient-sdk-linux32-
  • instantclient-sqlplus-linux32-

Installing each one is simply a matter of unzipping each file:

[markwill@oel02 ~]$ unzip instantclient-basiclite-linux32-
[markwill@oel02 ~]$ unzip instantclient-precomp-linux32-
[markwill@oel02 ~]$ unzip instantclient-sdk-linux32-
[markwill@oel02 ~]$ unzip instantclient-sqlplus-linux32-

The act of unzipping the files will create an "instantclient_11_2" directory in the directory where the files are unzipped (/home/markwill in my case). The complete list of files installed after unzipping each file is as follows:

[markwill@oel02 ~]$ cd instantclient_11_2/
[markwill@oel02 instantclient_11_2]$ pwd
[markwill@oel02 instantclient_11_2]$ find .
[markwill@oel02 instantclient_11_2]$

Configure the Environment

One of the nice things about using Pro*C on Linux (and UNIX) is that we do not normally have to edit the Pro*C configuration file (pcscfg.cfg), as is generally necessary on Windows. As a convenience for setting environment variables (such as the PATH), I have created a file in my home directory for this purpose (I'm using the bash shell here):

[markwill@oel02 ~]$ cat oic11.env
export ORACLE_BASE=/home/markwill
export ORACLE_HOME=$ORACLE_BASE/instantclient_11_2
export PATH=$ORACLE_HOME:$ORACLE_HOME/sdk:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:~/bin
[markwill@oel02 ~]$

I can then set my environment to use the newly installed Oracle components as follows:

[markwill@oel02 ~]$ . ./oic11.env

Test the Oracle Installation

Since SQL*Plus was installed as part of the above packages, it can be used to quickly and easily test the installation. Here I connect to the Express Edition database as the HR sample schema user using the EZConnect syntax (host:port/Service_Name):

[markwill@oel02 ~]$ sqlplus hr/hr@oel02:1521/XE

SQL*Plus: Release Production on Sat Sep 26 13:27:59 2009

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Express Edition Release - Production

SQL> select user from dual;


SQL> exit
Disconnected from Oracle Database 10g Express Edition Release - Production
[markwill@oel02 ~]$

Create a Test Application

Now that I have tested that the Oracle software is working correctly, I create a simple test application. I call the test application "proctest" and create a new directory to hold the files:

[markwill@oel02 ~]$ mkdir -p Projects/proc/proctest
[markwill@oel02 ~]$ cd Projects/proc/proctest

Here's the content of the proctest.pc source file:

[markwill@oel02 proctest]$ cat proctest.pc
/*
** standard include files for Pro*C application
*/
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>

#define NAME_LEN 30
#define ERR_LEN 512

/*
** host variables for the connection and to hold results of query
*/
VARCHAR username[NAME_LEN];
VARCHAR password[NAME_LEN];
VARCHAR database[NAME_LEN];
VARCHAR fname[NAME_LEN];
VARCHAR lname[NAME_LEN];
int     dept;
int     sal;
int     rn;

/*
** indicator variables used to determine null-ness
*/
short dept_ind;
short fname_ind;
short lname_ind;
short sal_ind;
short rn_ind;

/*
** standard Pro*C error handler function
*/
void sql_error(char *msg)
{
  char err_msg[ERR_LEN];
  size_t buf_len, msg_len;

  EXEC SQL WHENEVER SQLERROR CONTINUE;

  if (msg)
  {
    printf("%s\n", msg);
  }

  buf_len = sizeof (err_msg);
  sqlglm(err_msg, &buf_len, &msg_len);
  printf("%.*s", (int) msg_len, err_msg);

  EXEC SQL ROLLBACK RELEASE;

  exit(EXIT_FAILURE);
}

int main(int argc, char *argv[])
{
  /* setup username, password and database (ezconnect format) */
  strncpy((char *) username.arr, "hr", NAME_LEN);
  username.len = (unsigned short) strlen((char *) username.arr);

  strncpy((char *) password.arr, "hr", NAME_LEN);
  password.len = (unsigned short) strlen((char *) password.arr);

  strncpy((char *) database.arr, "oel02:1521/XE", NAME_LEN);
  database.len = (unsigned short) strlen((char *) database.arr);

  /* register the error handler function */
  EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error: \n");

  /* attempt to connect to the database */
  EXEC SQL CONNECT :username IDENTIFIED BY :password USING :database;

  /*
  ** indicate we want to break out of the loop when we get a no data
  ** found message (i.e. at end of fetch)
  */
  EXEC SQL WHENEVER NOT FOUND DO break;

  /*
  ** declare and open cursor using analytic function
  ** to get "top 3" in department by highest salary
  */
  EXEC SQL DECLARE emp_cursor CURSOR FOR
    select   *
    from
    (
      select   department_id,
               first_name,
               last_name,
               salary,
               row_number() over (partition by department_id
                                  order by salary desc) rn
      from     employees
      where    department_id is not null
    )
    where   rn <= 3;

  EXEC SQL OPEN emp_cursor;

  /*
  ** loop through cursor fetching results
  ** and printing as we go
  */
  for (;;)
  {
    EXEC SQL FETCH emp_cursor
      INTO :dept:dept_ind,
           :fname:fname_ind,
           :lname:lname_ind,
           :sal:sal_ind,
           :rn:rn_ind;

    fname.arr[fname.len] = '\0';
    lname.arr[lname.len] = '\0';

    printf("%d, %s %s, %d, %d\n", dept, fname.arr, lname.arr, sal, rn);
  }

  /* clean-up and exit */
  EXEC SQL CLOSE emp_cursor;

  EXEC SQL COMMIT WORK RELEASE;

  return EXIT_SUCCESS;
}
[markwill@oel02 proctest]$

The sample application is pretty bland - the only real point of interest is the use of an analytic function to get the "top 3" ranking of salaries by department. The use of the analytic function will allow us to see how options can be passed to the Pro*C precompiler.

Building the Sample

Rather than craft a custom Makefile or enter the various build/compile commands by hand, the demonstration Makefile provided as part of the precompiler package installed earlier can be used with only a small modification. To use this Makefile, copy it into the directory with the source file. On my system this was done as follows:

[markwill@oel02 proctest]$ cp $ORACLE_HOME/sdk/demo/demo_proc_ic.mk .

I use VIM to edit the file. Of course you can use whatever editor you wish.

[markwill@oel02 proctest]$ vi demo_proc_ic.mk

The important part is listed in the "NOTES" section in the Makefile:

#    1. Please change "cc/CC" and the "InstantClient directories" to point to
#       appropiate locations on your machine before using this makefile.

Because the CC and cc entries are already correct, I did not alter them. I did, however, change the Instant Client directory entries as follows:

# InstantClient Directories.

By using the ORACLE_HOME environment variable (which is set in my oic11.env file) in the Makefile I do not need to hard-code the actual path.

Before building the sample, take a minute or two to review the Makefile comments. I build the sample using the following command-line:

[markwill@oel02 proctest]$ make -f demo_proc_ic.mk build PROCFLAGS="common_parser=yes" \
> EXE=proctest OBJS="proctest.o"

Notice how PROCFLAGS is used to pass "common_parser=yes" to the proc binary (i.e., the Pro*C precompiler itself). The EXE option determines the name of the binary executable produced, and the OBJS option lists the object files needed. Here the options are simple, but larger and more complex projects will likely use more than a single object file and possibly other Pro*C options.

Once the build has completed test the application:

[markwill@oel02 proctest]$ ./proctest
10, Jennifer Whalen, 4400, 1
20, Michael Hartstein, 13000, 1
20, Pat Fay, 6000, 2

[ snip ]

100, Nancy Greenberg, 12000, 1
100, Daniel Faviet, 9000, 2
100, John Chen, 8200, 3
110, Shelley Higgins, 12000, 1
110, William Gietz, 8300, 2
[markwill@oel02 proctest]$

One final addition I make to the Makefile is a new target called "dust". A Makefile traditionally has a "clean" target that removes various (non-source!) files which are used during the build but are not needed by the final executable. The "clean" target also removes the executable, however. Since I like being able to clean up the intermediate files while keeping the executable, I add a target called "dust". Here's the "end" of the Makefile with the "dust" target added:

# Clean up all executables, *.o and generated *.c files
clean: $(CLNCACHE)

dust: $(CLNCACHE)
        $(REMOVE) $(PROCDEMO).o $(PROCDEMO).c $(PROCDEMO).lis

        $(REMOVE) $(CACHEDIR)
        $(REMOVE) $(ICLIBHOME)libclntsh$(SO_EXT)

Of course, adding this target is purely optional; however, after building the sample, I then "dust" the directory:

[markwill@oel02 proctest]$ make -f demo_proc_ic.mk dust PROCDEMO=proctest
rm -rf SunWS_cachea
rm -rf /home/markwill/instantclient_11_2/libclntsh.so
rm -rf proctest.o proctest.c proctest.lis
[markwill@oel02 proctest]$

This then leaves the directory a bit cleaner with only the Makefile, source file, and executable:

[markwill@oel02 proctest]$ ls -al
total 28
drwxrwxr-x  2 markwill markwill 4096 Sep 26 14:31 .
drwxrwxr-x  3 markwill markwill 4096 Sep 26 13:40 ..
-rw-rw-r--  1 markwill markwill 3617 Sep 26 14:31 demo_proc_ic.mk
-rwxrwxr-x  1 markwill markwill 9798 Sep 26 14:29 proctest
-rw-rw-r--  1 markwill markwill 2900 Sep 26 14:14 proctest.pc
[markwill@oel02 proctest]$

Sanity check your ORM

Peter O'Brien - Fri, 2009-09-25 15:54
Have you ever had your application completely crash under load and wondered why something that just worked fine yesterday can drive you mental today?

Most of the time an Object-Relational Mapping package is the right solution for your applications to interact with a database. It does not matter whether it is free, commercial or home grown; you will generally find that it reduces the amount of code. Moreover, the application code is better designed and easier to test. Depending on your chosen solution, you may even see performance improvements from caching and database-specific optimisations.

However, you may also find substantial performance degradation when the data volume increases. It is important to remember that no matter how sophisticated your ORM package is, it may still produce inefficient SQL statements. If you had SQL statements embedded in your code, you would review them as part of the code review. So if you hand over a critical operation such as the generation of SQL statements, the generated statements need to be reviewed too.

Your application can become a performance crime scene. What can you do to prevent it? Early analysis.

The execution plan, sometimes called the query plan or explain plan, for an SQL statement provides insight into how the database will act on that statement. It is available in most databases and is a useful tool for determining the efficiency of the statement; for example, an execution plan can indicate which indexes would be used. When code is reviewed, the execution plans for its SQL statements should be reviewed as well, in much the same way that the accompanying test classes are reviewed for correctness.

I learned this the hard way when a delete operation took too long and eventually timed out in a test environment that had a high volume of records. A review of the code showed a nice, clean object-oriented design and a very simple, easy to understand ORM configuration. The application in question was used to manage financial documents. It had the concept of a ChequeBook, one or more Cheques in a ChequeBook, and the notion that both a ChequeBook and a Cheque were a Document. The class diagram in this article illustrates that relationship, which was also reflected in the database with corresponding tables. The 'type' attribute was used as a discriminator for the two concrete classes: ChequeBook and Cheque.

The problem arose when attempting to delete a ChequeBook. The ORM could delete the Cheque records using the parentContentId easily enough, but since there was no corresponding column on the Documents table, the ORM then produced this innocuous looking statement.


It appears quite innocent at first; in fact it might even be thought a clever attempt to clean up all records in the DOCUMENTS table that should have a corresponding CHEQUES record but don't. In a development database with only a few hundred CHEQUES, this performs OK. Push that up to a few thousand and you soon realise the problem with this blanket select on CHEQUES, even if it does use an index.

In this case I had to code around the ORM with a hand-crafted SQL statement. I could have just invoked remove() on each Cheque entity, but for large ChequeBooks this would have produced a lot of SQL statements. Another approach would have been a stored procedure, but that would not have been easily portable.

The key lesson learned, though, was to sanity check the ORM before it drives you crazy with its unexpected eccentricities. Check your ORM documentation to see how you can get profile details of the SQL statements it generates. Go to your database server and get reports on SQL execution plans. The ORM is effectively another developer on your team generating SQL statements. Know and understand what it is producing.
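One cheap way to put this advice into practice is to interpose on the database connection and capture every statement the ORM emits, so generated SQL can be reviewed just like hand-written SQL. A minimal Ruby sketch (SqlSpy and FakeConnection are hypothetical names, not part of any real ORM):

```ruby
# Wrap anything that responds to #execute so every statement the ORM
# sends to the database is captured for later review.
class SqlSpy
  attr_reader :statements

  def initialize(connection)
    @connection = connection
    @statements = []
  end

  def execute(sql)
    @statements << sql          # record the generated SQL
    @connection.execute(sql)    # then pass it through unchanged
  end
end

# Stand-in for a real connection, just for demonstration.
class FakeConnection
  def execute(_sql)
    :ok
  end
end

spy = SqlSpy.new(FakeConnection.new)
spy.execute("DELETE FROM cheques WHERE parent_content_id = 42")
spy.execute("SELECT count(*) FROM documents")
spy.statements.each { |sql| puts sql }   # review what was generated
```

Each captured statement can then be fed to the database's explain-plan facility as part of the review.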

Oracle R12 Payables Discount

V.J. Jain - Thu, 2009-09-24 14:54

One user posted a question on OTN forums for Payables – http://forums.oracle.com/forums/thread.jspa?messageID=3785430#3785430

Hi all,

I have a scenario involving discount on the purchase of an Item. My invoice, as received as the supplier is as follows:

Item xxx 100
Discount (10)

Total 90
VAT(15%) 15.50

Net 103.50

When I create an invoice in AP, I add a line with amount $100 and assign a tax of 15%
The above create a total of $115 and i add this total as header amount in the invoice.
AND in the Scheduled Payment tab at line level, I add a discount amount of $10 for the “First Discount” Info.

When I make payment for the above invoice, a pop-up appears indicating discount will be applied. But the Payment amount is $105 (Invoice header amount minus discount).

Is this the only way to treat discount in AP?
I need the payment amount to be $103.50 after applying discount.

My response was:

If I understand correctly, it seems like you are manually calculating the tax and entering it to the invoice amount. If you configure your taxes in Oracle, and then enter the invoice for the actual amount, then Oracle will know the true invoice amount which is the discount basis and create the tax line according to the configurations (the way you want it). What you are doing now is adding the tax outside Apps configuration and then expecting Apps to calculate the appropriate discount. If you want to use the standard functionality, you need to set up all dependent parts of that functionality. If tax calculation is a dependency of the discount basis, which is certainly a dependency of discount payment terms, then you need to set it up properly.

This really demonstrates one major theme that I consistently encounter with Oracle Apps consultants: a lack of true understanding of the fundamentals behind the applications.  I’m not saying that is the case for this particular user, because I don’t know him/her, but it is the case for the majority of consultants staffed into most projects.

Everything in the ERP system is part of a business flow.  For a company that sells products, it is something similar to
- Leads to Quotes to Sales to [PO|Manufacturing to] Shipping to Invoice to Receipt to Cash to Financials

For Payables it is something like
- Purchasing to Receipt to Accrual to Invoice to Payment

For Discount Payment Term it is going to be
- Invoice /w Discount Terms to Payment with Discount Taken

Most vendors are going to exclude tax and freight from the discount allowed for prompt payment of the invoice.  So obviously, if you want Oracle to calculate the amount to take on an invoice with discount terms, it is going to need to know what portion of the invoice is for freight and tax.  Expecting the discount to be applied only to the $100 portion of a $115 invoice ($15 of which is tax) when the invoice is entered as a single $115 amount is not going to work.  Have you ever heard of a tax line?
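The arithmetic behind this is worth spelling out. Assuming, as above, that the vendor excludes tax from the discount basis, the forum poster's numbers work out as follows (a sketch of the calculation only, not Oracle's code; note that the quoted "VAT 15.50" line looks like a typo, since 15% of 90 is 13.50, which matches the 103.50 net):

```ruby
item     = 100.00
discount = 10.00
vat_rate = 0.15

# What the user entered: VAT computed on the full item, one $115 header,
# so AP took the discount off the whole invoice.
entered_total  = item + item * vat_rate   # 115.00
oracle_payment = entered_total - discount # 105.00, the payment the user saw

# What the supplier's invoice actually says: discount first,
# then VAT on the discounted basis.
basis = item - discount                   # 90.00, the discount basis
vat   = basis * vat_rate                  # 13.50
net   = basis + vat                       # 103.50, the desired payment
```

Configuring the tax in Oracle and entering the invoice at its true amount lets the application see the split and take the discount on the right basis.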

R12 does complicate the management of taxes with the E-Business Tax module.  I know of several consultants who are “experts” in R12 E-Tax and are getting great bill rates as they help setup the new module.

I learned R12 EBTax, but not without several weeks of continuous headaches.  I won’t take projects to set up R12 EBTax because it is tedious.  Regardless, it is important for Oracle Apps consultants to be aware of how taxes work in R12, or you run into situations where things are not working the way you want and you don’t know why.  To someone who understands the applications, it just seems like you lack common sense.

A trivial index demo

Claudia Zeiler - Wed, 2009-09-23 16:20
Today I was sitting in a public library, minding my own business, when a man I had never seen before leaned over and asked me, "Is it true that you can drop an index created explicitly, but not one created implicitly as part of a constraint?" My initial, unspoken reaction was: implicit/explicit? Please speak English. When I sorted that out there were two further reactions, also unspoken: "Duh, that's pretty obvious", and "Why me?"

"It is an Ancient DBA and he stoppeth one of three."

I demonstrated the matter a bit to him.

Logged on as scott, I create a play table:
SQL> create table tmp(f1 number(4), f2 number(4), f3 number(4));

Table created.

SQL> insert into tmp values (1, 2, 3);

1 row created.

SQL> create index tmp_f2 on tmp(f2);

Index created.

SQL> drop index tmp_f2;

Index dropped.

All straightforward. The index was created 'explicitly' and no constraint is affected if it is dropped, so I can drop it without a problem.

Now to add a constraint, and an 'implicit' index.

SQL> alter table tmp modify f2 unique;

Table altered.



SQL> select index_name from user_indexes where table_name = 'TMP';

INDEX_NAME
------------------------------
SYS_C0011482

The constraint has been created, and so has an index:

SQL> drop index SYS_C0011482;
drop index SYS_C0011482
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

and as expected, the index cannot be dropped since it is used by the constraint.

It is all very trivial.

What did get a little interesting was that I was able to enter multiple rows of nulls in a column with a 'unique' constraint. I need to review the properties of null.
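
The behavior can be sketched like this (hypothetical session against the same play table; a UNIQUE constraint rejects duplicate values but allows any number of NULLs, because NULL never compares equal to NULL):

```sql
SQL> insert into tmp (f2) values (NULL);

1 row created.

SQL> insert into tmp (f2) values (NULL);

1 row created.
```

Both inserts succeed: the unique index stores no entry at all for an entirely NULL key, so there is nothing for the second NULL to collide with.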

And the real question that was never answered

"Now wherefore stopp'st thou me?"

Concepts Guide: 7/27 - Data Dictionary

Charles Schultz - Wed, 2009-09-23 09:36
I still cannot believe "someone" had the gall to reduce the data dictionary to 5 pages. =) And one blank page to rub salt into the wound.

I had an interesting challenge when trying to explain the dictionary to a colleague. We all have a general idea of what the data dictionary is and what it does (just read the first page of this chapter in the docs). When does it get built? Interesting question. Usually we say that catalog.sql builds the dictionary, or even bsq.sql. However, did you realize that 601 fixed tables exist in memory just for starting the instance (NOTE: no database!)? Try this one on for size:
/u01/app/oracle/product/ echo "db_name = SAC" > initSAC.ora
/u01/app/oracle/product/ . oraenv
ORACLE_HOME = [/u01/app/oracle] ? /u01/app/oracle/product/
/u01/app/oracle/product/ sqlplus / as sysdba

SQL*Plus: Release - Production on Wed Sep 23 09:32:35 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SAC_SQL > startup nomount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 2286584 bytes
Variable Size 114105352 bytes
Database Buffers 268435456 bytes
Redo Buffers 34603008 bytes

SAC_SQL > select count(*) from X$KQFTA;

  COUNT(*)
----------
       601

If you want the names of those fixed tables, select the KQFTANAM column.

So, even without running any scripts, we have "data dictionary" objects residing in memory. This helps me to understand what the dictionary is and what it does. For an analogy, it is like the overhead address space a program uses to store all its global variables. In this case, it just happens to be organized into relational tables with gobbly-gook table/column names and normalized up the wazoo! =)

I have to confess, I was quite disappointed with this chapter. Of the 5 pages, the first 4 tell you not to change anything and give a brief overview of the differences between USER, ALL and DBA views. The last page starts to get just a tiny bit juicy and at least mentions dynamic performance views, but then, like a cheap battery, dies out too soon.

There is some really cool stuff locked away in the x$ tables, let alone the rest of the db metadata. For a quick glimpse under the covers, I found eygle's list quite informative and helpful; it is allegedly a copy of Metalink note 22241.1, which is supposed to be internal only (I cannot see it at this point in time); it merely lists the fixed tables to demonstrate the naming convention, but it also gives us an idea how things are organized and what types of things the kernel tracks. I hope one day Oracle wakes up and realizes the advantage of sharing more information like this.

Yong Huang also has a listing.

Starting my blog, finally

V.J. Jain - Mon, 2009-09-21 16:06

I’ve been wanting to create a blog for a long time now and I’m finally getting around to it.  Originally, I was hoping that the Oracle Forums would be an interactive resource but with almost 20,000 registered members, it is too difficult to moderate.

So here it is, my new blog.  Feel free to join and comment!

- V.J. Jain

A new decade for Enterprise Java Applications

Peter O'Brien - Mon, 2009-09-21 14:54
We are coming up on the 10th anniversary of the EJB 1.1 final release. When you look at the J2EE specification and the Enterprise Java application programming models and frameworks that are out there now, you begin to appreciate what an important step the EJB specification was: not so much for what you can do with it today, but for how it introduced a paradigm shift.

A more recent paradigm shift is cloud computing, which effectively puts the application container in a container. One to watch is CloudFoundry, which provides a pay-as-you-go platform using the SpringSource application server, a Spring-based Java application server built on Apache Tomcat. CloudFoundry makes this platform available on Amazon Web Services infrastructure. Although still in beta, it is available now.

Announcing new Oracle Development SIG in the UK

Susan Duncan - Mon, 2009-09-21 11:09
Announcing new Oracle Development SIG in the UK

On the 27th of October, the UKOUG are launching a new Development SIG aimed at any developer working with Oracle tools and technology. The first special interest group (SIG) meeting has a "modernization" theme and covers topics including Apex, SOA, Forms, Designer and JDeveloper.

A full SIG agenda is published here.

As usual with these events, space is limited and given the wealth of experienced presenters and topics, it should serve as a great learning opportunity as well as the chance to exchange ideas and meet with fellow developers.

Learning about parameters

Charles Schultz - Mon, 2009-09-21 09:06
We are kicking off an effort to analyze and hopefully standardize parameters across some 200 databases. Fortunately, we have a history of parameters which is made possible by an in-house job that scrapes them from each database every night.

So this post is more about my own little path of discovery; I am sure most of you know this already.

Lesson 1: Pay attention to what the documentation says
For instance, db_cache_size has this little blurb in the 10gR2 docs:
The value must be at least 4M * number of cpus * granule size (smaller values are automatically rounded up to this value).
Let us assume the documentation means 4*1024*1024 (a unitless count), as opposed to 4 megabytes; 4 megabytes * granule size (which is also in bytes) would give you bytes squared, which makes no sense. We will also assume the granule size = 8KB, but to make the numbers come out realistic we will treat it as 8 bytes. That first 4M is really throwing things off: 4M * 8 bytes = 33,554,432 bytes, or 32 megabytes per CPU.

So let's say we have a generic setting of 200M (bear with me). If you upgrade your host from 5 CPUs (32MB * 5 = 160MB) to 10 CPUs (32MB * 10 = 320MB), the setting for db_cache_size will automatically jump up to 320MB, despite your explicit setting of 200M. If you had budgeted your databases to max out memory, you might want to be a tad careful. =)
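
The floor under this reading can be sketched numerically (hypothetical helper; the 8 "bytes" stands in for the 8KB granule exactly as the text does, so this mirrors the post's interpretation rather than Oracle's documented behavior):

```python
def min_db_cache_size(cpus, granule_bytes=8):
    """Minimum db_cache_size in bytes under the post's reading:
    4M (taken as the unitless 4*1024*1024) * cpus * granule."""
    return 4 * 1024 * 1024 * cpus * granule_bytes

print(min_db_cache_size(5) // (1024 * 1024))   # floor in MB with 5 CPUs
print(min_db_cache_size(10) // (1024 * 1024))  # floor in MB with 10 CPUs
```

Doubling the CPU count doubles the floor, which is exactly why an explicit 200M setting silently becomes 320MB on the bigger host.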

Lesson 2: Comments in the init file are stored in x$ksppcv.ksppstcmnt (and thus v$parameter.UPDATE_COMMENT)
Of course, one has to put the comments in the right place. We pretty much discovered this by accident; if you put a # followed by text after the parameter=value, you are all set. Of course, if you ever look at spfiles, they have the same thing. Just something we have missed for a very long time. =) We have found this to be very handy in identifying parameters that deviate from a given "standard" or are set as a result of a bug. Or an upgrade (which is really just a massive bug fix, plus some new bug introductions, right? *grin*).
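
For example (the parameter value and comment text below are invented purely for illustration), a line like this in the init file:

```
db_cache_size=209715200 # deviates from standard: raised for reporting workload
```

surfaces the text after the # in x$ksppcv.ksppstcmnt, and therefore in v$parameter.UPDATE_COMMENT, for that parameter.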

Lesson 3: Undocumented/unpublished x$ tables really suck
I really wish Oracle would document the x$ tables for us. So I am looking at X$KSPPI.ksppity; all the online blogs/white papers/articles that I can find decode this value into 4 groups (boolean, string, number, file). But wait, in 10gR2 I have 5 distinct types (1-4, 6). 6 seems to correspond to "Big Integer", that little convention of using [K|M|G] after a number to distinguish a shorthand for its size. Not sure why 5 was skipped - we do not have any values for 5.
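
Since x$ksppi is undocumented, the decode below merely restates the guesses above (1-4 from the usual blog lore, 6 as big integer); treat it as a sketch, not gospel, and run it as SYS since x$ tables are not visible to ordinary users:

```sql
select ksppity,
       decode(ksppity, 1, 'boolean',
                       2, 'string',
                       3, 'number',
                       4, 'file',
                       6, 'big integer',
                          'unknown') as param_type,
       count(*)
  from x$ksppi
 group by ksppity
 order by ksppity;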

Overdramatic? Saying that such a small thing sucks? This is merely the tip of the iceberg: there is so much out there that is not available to the general public, and the experts end up making guesses (really good and quite often very accurate guesses, but guesses nonetheless).

Well that is it for now. Still exploring, learning... seeing what's out there.

The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Chapter 2

Geert De Paep - Sat, 2009-09-19 14:29
Scenario 2: loss of ocrmirror, both nodes down

(This is the follow-up of chapter 1)

Let’s investigate the vote count a little further by doing the following test:

  • First stop crs on both nodes
  • Then make the lun with ocrmirror unavailable to both nodes

What happens?

Let’s check the ocr status before starting crs on any node:

bash-3.00# ocrcheck
PROT-602: Failed to retrieve data from the cluster registry

The crs alert file shows:

2008-07-18 15:57:36.438
[client(24204)]CRS-1011:OCR cannot determine that the OCR content contains the latest updates. Details in /app/oracle/crs/log/nodea01/client/ocrcheck_24204.log.

and the mentioned ocrcheck_24204.log file:

Oracle Database 10g CRS Release Production Copyright 1996, 2008 Oracle.
All rights reserved.
2008-07-18 15:57:36.405: [OCRCHECK][1]ocrcheck starts…
2008-07-18 15:57:36.437: [ OCRRAW][1]proprioini: disk 0 (/dev/oracle/ocr) doesn’t
have enough votes (1,2)

2008-07-18 15:57:36.438: [ OCRRAW][1]proprinit: Could not open raw device
2008-07-18 15:57:36.438: [ default][1]a_init:7!: Backend init unsuccessful : [26]
2008-07-18 15:57:36.439: [OCRCHECK][1]Failed to access OCR repository: [PROC-26: Error while accessing the physical storage]
2008-07-18 15:57:36.439: [OCRCHECK][1]Failed to initialize ocrchek2
2008-07-18 15:57:36.439: [OCRCHECK][1]Exiting [status=failed]…

I didn’t try to start the CRS at this time; however, I am sure it would result in the same error messages. Note the two key messages. The second one explains what the real problem is: one of the ocr devices is unavailable (“error while accessing the physical storage”). This is exactly the information you need to troubleshoot a failing crs start. The other message tells us more about the internals: the remaining ocr has only 1 vote, which isn’t enough. So that’s rule 3 in the world of CRS. Read and remember once and for all:

  1. Rule 1: CRS can start if it finds 2 ocr devices each having one vote (the normal case)
  2. Rule 2: CRS can start if it finds 1 ocr having 2 votes (the case after losing the ocrmirror).
  3. Rule 3: CRS CANNOT start if it finds only one ocr device having only 1 vote
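
The three rules collapse to a single condition: CRS starts only when the OCR devices it can see account for at least 2 votes. A tiny sketch (hypothetical helper, just to restate the rules, not anything Oracle ships):

```python
def crs_can_start(visible_votes):
    """visible_votes: one vote count per OCR device CRS can reach.
    CRS starts only if the reachable devices total at least 2 votes."""
    return sum(visible_votes) >= 2

print(crs_can_start([1, 1]))  # rule 1: ocr + ocrmirror, one vote each
print(crs_can_start([2]))     # rule 2: single ocr holding both votes
print(crs_can_start([1]))     # rule 3: single ocr with a single vote
```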

Now if this is a production environment and we really want to get the cluster and databases up, how do we proceed? Well, we can do so by manually telling the cluster that the remaining ocr is valid and up-to-date. Note however that this is an important decision: it is up to you to know that the remaining ocr is valid. If you have been playing too much with missing luns, adding services, missing the other lun etc., the contents of the ‘invisible’ ocrmirror may be more recent than those of the visible ocr. If in that case you tell crs that the ocr is valid, you may lose important information from your ocrmirror. Anyway, in most cases you will know very well what to do, and issue as root:

ocrconfig -overwrite

Now find the most recent file in $ORA_CRS_HOME/log/nodename/client and see that it contains:

Oracle Database 10g CRS Release Production Copyright 1996, 2008 Oracle.
All rights reserved.
2008-07-18 15:59:56.828: [ OCRCONF][1]ocrconfig starts…
2008-07-18 15:59:58.644: [ OCRRAW][1]propriowv_bootbuf: Vote information on disk
0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]

2008-07-18 15:59:58.644: [ OCRCONF][1]Successfully overwrote OCR configuration on
2008-07-18 15:59:58.644: [ OCRCONF][1]Exiting [status=success]…

So now we are in the situation of scenario 1: one ocr device available having 2 votes. This gives:

Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     295452
         Used space (kbytes)      :       5112
         Available space (kbytes) :     290340
         ID                       : 1930338735
         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File unavailable

         Cluster registry integrity check succeeded

And the crs startup happens without problem:

-bash-3.00# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

Note however that you still have to recover from this as in scenario 1 using “ocrconfig -replace ocrmirror /dev/…” once the storage box containing the ocrmirror is available again.

Conclusion of scenario 2

When losing an ocr or ocrmirror while crs is down on both nodes, Oracle is not able to update the vote count of the remaining ocr (no crs processes are running to do this). As a consequence it is up to you to do that by using the “overwrite” option of ocrconfig. After this, CRS can start as normal, and later on you can recover as in scenario 1 when the ocrmirror becomes available again or when you can use another new device for the ocrmirror.

So this looks great, let’s buy that additional storage box now.

But still I am not satisfied yet. Until now we had ‘clean errors’. I.e. both nodes were up or down, and the storage disappeared from both nodes at the same time. Let’s play a little more in the next chapters…

Interrupting Flashback Database

Fairlie Rego - Fri, 2009-09-18 20:52
So a user called me up and said he was flashing back a database and he was not too happy with the amount of time it was taking to complete the operation and so he did a Ctrl-C...

When he tried to open the database this is what he saw..


SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2033832 bytes
Variable Size 520099672 bytes
Database Buffers 310378496 bytes
Redo Buffers 6348800 bytes
Database mounted.
ORA-38760: This database instance failed to turn on flashback database

SQL> alter database flashback off;

Database altered.

SQL> alter database open;
alter database open
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database

===> ... but he had just turned flashback off..

SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery


Looking through the alert.log

Fri Sep 18 08:35:05 2009
flashback database to restore point TEST_AFTER
Fri Sep 18 08:35:05 2009
ORA-38757 signalled during: flashback database to restore point TEST_AFTER...
Fri Sep 18 12:00:48 2009

No errors in the RVWR process trace file

So I just dropped the restore point and voila

> drop restore point TEST_AFTER;

Restore point dropped.

> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

> alter database open;

Database altered.

ODAC Production Release Available

Mark A. Williams - Wed, 2009-09-16 13:50
The new ODAC Production release is now available. See these links for more information and to download.

- Download
- New Features List
- ODT SQL Tuning Advisor Viewlet
- ODT Oracle Performance Analyzer Viewlet
- ODP.NET White Paper

Happy coding.

quick catch up on peoplesoft

Nuno Souto - Wed, 2009-09-16 06:32
It's been a while since I posted anything to do with my "beloved" Peoplesoft... umm... well, you know what I mean... Some might recall this post a while ago? It's where I discussed our approach to this common problem with scratchpad tables in Peoplesoft. Anyways: some developments I reckon could be of use to anyone going through the same problem. I've since had a good exchange with Dave Kurtz...

How to install Oracle Database 10g on Mac OS X Snow Leopard

Raimonds Simanovskis - Sun, 2009-09-13 16:00

Oracle Database 10g is not yet officially supported on the new Mac OS X 10.6 Snow Leopard, but thanks to comments on my previous tutorial I managed to do an Oracle 10g installation on a fresh Mac OS X Snow Leopard.

If you have upgraded from Leopard with an Oracle 10g installation to Snow Leopard then most probably Oracle 10g will work fine and you should not need to do anything. These instructions are just for a fresh installation of Snow Leopard.

And also please bear in mind that Oracle 10g on Snow Leopard is not yet supported by Oracle, so please do not run critical production applications on it :)

So here are my updated Oracle 10g installation instructions for Snow Leopard.

Initial preparation

At first you need Xcode tools installed on your Mac OS X.

Then you need to create an oracle user as well as increase the default kernel parameters. Open Terminal and switch to the root user:

sudo -i

Create oinstall group and oracle user (I used group and user number 600 to ensure that they do not collide with existing groups and users):

dscl . -create /groups/oinstall
dscl . -append /groups/oinstall gid 600
dscl . -append /groups/oinstall passwd "*"
dscl . -create /users/oracle
dscl . -append /users/oracle uid 600
dscl . -append /users/oracle gid 600
dscl . -append /users/oracle shell /bin/bash
dscl . -append /users/oracle home /Users/oracle
dscl . -append /users/oracle realname "Oracle software owner"
mkdir /Users/oracle
chown oracle:oinstall /Users/oracle

Change password for oracle user:

passwd oracle

Change default kernel parameters:

vi /etc/sysctl.conf

and enter values recommended by Oracle:


Oracle DB installation scripts reference Java version 1.4.2, which is not present on Snow Leopard. The easiest way to fix this is to create a symbolic link to a newer version of Java:

sudo ln -s /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0 /System/Library/Frameworks/JavaVM.framework/Versions/1.4.2

After this, reboot your computer so that the new kernel parameters take effect.

After reboot you need to log in as the new “Oracle software owner” user (Snow Leopard has stricter control over access to the X11 display, and I couldn’t manage to start the Oracle installation just from a terminal under my own account).

Open Terminal application and set shell settings in .bash_profile

vi .bash_profile

and enter

export DISPLAY=:0.0
export ORACLE_BASE=$HOME
umask 022
ulimit -Hn 65536
ulimit -Sn 65536

As you can see, I prefer to install all Oracle related files under the home directory of the oracle user, therefore I am setting ORACLE_BASE to the home directory. Also include the ulimit settings – I forgot to do this initially and got strange TNS service errors because of that.

Now execute this script so that these settings are applied to current shell:

. ./.bash_profile

Now download db.zip installation archive and place it somewhere and unzip it:

mkdir Install
cd Install
# download db.zip to this directory
unzip db.zip
cd db/Disk1

Now you are ready to start installation. In Snow Leopard you need to pass -J-d32 option to installation script to force to run Java in 32-bit mode as some native libraries are 32-bit:

./runInstaller -J-d32

In installation wizard I selected the following options:

  • Standard Edition – as I don’t need additional features of Enterprise Edition
  • Install Software Only – we will need to do some fixes before database creation

In the middle of installation you will get error message “Error in invoking target ‘all_no_orcl ipc_g ihsodbc32’ …” (message truncated). Please do not press anything and switch to Terminal application.

cd ~/oracle/product/10.2.0/db_1/rdbms/lib
vi ins_rdbms.mk

and in this file you need to search for the line containing HSODBC_LINKLINE (in vi enter /HSODBC_LINKLINE) and comment out this line by putting # in front of it:


and save changed file.

In this way we disable the failing compilation of a library which is not needed for our Oracle DB installation anyway.

After that you can switch back to Oracle installation application and press Retry.

At the end of installation you will be instructed to run one shell script from root. To do that open new tab in Terminal and execute (substitute “username” with your login name):

su - username
sudo /Users/oracle/oracle/product/10.2.0/db_1/root.sh

Hopefully installation will complete successfully.

Creation of database

Switch back to Terminal tab with oracle user and add the following lines to .bash_profile of oracle user:

export ORACLE_HOME=/Users/oracle/oracle/product/10.2.0/db_1
export ORACLE_SID=orcl

and execute it

. ~/.bash_profile

Now you need to modify $ORACLE_HOME/jdk/bin/java script and change ...java -Xbootclasspath... to ...java -d32 -Xbootclasspath.... This is necessary to force netca and dbca utilities to run in 32-bit mode.
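
The edit can also be scripted. The sketch below demonstrates the substitution on a throwaway file, since the real wrapper's contents vary (the stand-in line is invented; in practice you would point the sed at $ORACLE_HOME/jdk/bin/java and keep the backup it creates):

```shell
# Demonstrate the -d32 substitution on a stand-in wrapper file
printf 'exec java -Xbootclasspath/a:$CLASSPATH "$@"\n' > java_wrapper_demo
# sed keeps a .bak copy and inserts -d32 before -Xbootclasspath
sed -i.bak 's/java -Xbootclasspath/java -d32 -Xbootclasspath/' java_wrapper_demo
cat java_wrapper_demo
```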

Now you need to do the major installation hack :) Unfortunately the main oracle executable binary, when compiled under Snow Leopard, gives core dumps when starting the Oracle database, and currently the only way I managed to fix it is to replace this executable file with one which was compiled previously under Leopard. So you need to download it and trust me that it is correct :)

curl -O http://rayapps.com/downloads/oracle_se.zip
unzip oracle_se.zip
chmod ug+s oracle
rm oracle_se.zip

(If you installed Oracle Enterprise Edition then please substitute oracle_se.zip with oracle_ee.zip)

Now you can run the Network Configuration Assistant:

netca

and select all default options to create the listener, and wait until you get a confirmation message that the listener is configured and started.

After that you can run the Database Configuration Assistant:

dbca

and select

  • Create a Database
  • General Purpose
  • Specify orcl as Global Database Name and SID (or set it to something different if you need)
  • Specify password for SYS and SYSTEM users
  • I selected also Sample Schemas
  • and in Character Sets I selected Use Unicode (AL32UTF8)

At the end of installation I tried to use Password Management to unlock additional schemas but it didn’t work – so you need to unlock other sample schemas if needed using sqlplus.

At the end of installation verify if you can connect to newly created database

sqlplus system@orcl

I hope that my fixes will help you as well and you will be able to connect to database.

If you want to unlock other sample users then do it from sqlplus, e.g.:

alter user hr account unlock identified by hr;

Further instructions are the same as for Leopard and there are no more changes.

Change listener to listen on localhost

As I need this Oracle database just as local development database on my computer then I want to change the listener so that it would listen just on localhost port 1521:

vi $ORACLE_HOME/network/admin/listener.ora

and change it to:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = orcl)
      (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )

Then also change ORCL alias definition in $ORACLE_HOME/network/admin/tnsnames.ora to:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )

After this change restart listener and try to connect with sqlplus to verify that these changes are successful.

Automatic startup of Oracle database

If you want the Oracle database to start automatically when your computer boots, then you need to create the following startup script. Start Terminal and switch to root.

At first edit /etc/oratab and change N to Y at the end of the line for the ORCL database – this will be used by the dbstart utility to find which databases should be started automatically.
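
With the home used throughout this post, the edited oratab line would look something like this (standard SID:ORACLE_HOME:flag format; the trailing Y is what tells dbstart to start the database):

```
orcl:/Users/oracle/oracle/product/10.2.0/db_1:Y
```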

Then create startup script for Oracle database:

mkdir /Library/StartupItems/Oracle
cd /Library/StartupItems/Oracle
vi Oracle

and enter the following:


#!/bin/sh

# Suppress the annoying "$1: unbound variable" error when no option
# was given
if [ -z $1 ] ; then
  echo "Usage: $0 [start|stop|restart] "
  exit 1
fi

# source the common startup script
. /etc/rc.common

# Change the value of ORACLE_HOME to specify the correct Oracle home
# directory for the installation
ORACLE_HOME=/Users/oracle/oracle/product/10.2.0/db_1

# change the value of ORACLE to the login name of the
# oracle owner at your site
ORACLE=oracle

# Set shell limits for the Oracle Database
ulimit -Hu 2068
ulimit -Su 2068
ulimit -Hn 65536
ulimit -Sn 65536

StartService()
{
  ConsoleMessage "Starting Oracle Databases"
  su $ORACLE -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
}

StopService()
{
  ConsoleMessage "Stopping Oracle Databases"
  su $ORACLE -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
}

RestartService()
{
  StopService
  StartService
}

RunService "$1"

and then make this script executable

chmod a+x Oracle

and in addition create properties file:

vi StartupParameters.plist

with the following contents:

{
  Description     = "Oracle Database Startup";
  Provides        = ("Oracle Database");
  Requires        = ("Disks");
  OrderPreference = "None";
}

Now you can verify that these scripts are working. Open new terminal and try

sudo /Library/StartupItems/Oracle/Oracle stop

to stop the database and

sudo /Library/StartupItems/Oracle/Oracle start

to start the database again. Later you can also reboot your computer to verify that the Oracle database starts automatically.

Hide oracle user from login window

After a computer reboot you probably noticed that the oracle user now appears in the initial login window. To get rid of it, execute this from a terminal:

sudo defaults write /Library/Preferences/com.apple.loginwindow HiddenUsersList -array-add oracle
What next?

Now when you have the Oracle database installed, you will need some development tools to access the database.

Please comment if you find any issues with Oracle Database 10g installation on Snow Leopard using this tutorial.

Categories: Development

ASM Hands-On Training, Server Environment Setup And Aliases

Alejandro Vargas - Sat, 2009-09-12 05:37

The server where the labs of the ASM Hands-On Training run is configured with tcsh as the default oracle user shell.

A .cshrc file containing several aliases was set up to ease moving around and executing repetitive commands.

This document gives a general explanation of how the environment can be used.

Details on this file: Server Enviroment Setup And Aliases

Categories: DBA Blogs

ASM Hands-On Training, Lab 21, ASMCMD Use And Options

Alejandro Vargas - Sat, 2009-09-12 05:27

In this lab we will review some of the useful commands provided by the ASM command line utility.

Some of the asmcmd commands display information; this information is based on v$asm views. Other commands actually make changes to the structure, like mkdir or rm.

Details on file: ASMCMD Use And Options

Categories: DBA Blogs
