Feed aggregator

Is anyone actually using this database? - How to tell whether or not you can delete an old database

OracleContractors - Sat, 2011-08-27 04:07

When you log into a database server that has been running for several years, you will often find lots of files from databases that may not have been used for some time.

The people who created them probably left the company long ago and there is little documentation on the databases. Nobody is certain whether or not they are still being used, so they just sit around using up disk space.  If the databases are started up, then they’re also using memory, even though they may not be doing anything productive.

Rather than prolonging this situation, it’s a good idea to remove these databases - once you’ve confirmed that no one uses them anymore.  Unfortunately, in a large business where documentation has not been kept up to date, it may not be easy to find out who is using which databases.

As a starting point you can look in the following places:
(i) Database alert log
If the database is currently shut down, then what date was it shut down? If this was a long time ago (say, more than a year), then the chances are that the database is no longer used, or is so out of date with current business data that it would need to be refreshed before it could be of any use.

Note: Just because configuration changes (e.g. extending datafiles) are shown in the alert log, it doesn’t mean that the database is actually in use.  A DBA can carry out maintenance tasks on a database whether or not any business users are carrying out work.

Similarly, you may find that a backup tool such as RMAN regularly connects to the database.  That doesn’t mean that the database is being used by the business - just that it’s being backed up.
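For example, on a Unix server you can pull the last shutdown timestamp out of the alert log with grep. A quick sketch - the path and log lines below are invented for illustration; your alert log lives under your background/diagnostic dump destination:

```shell
# Sample alert log fragment (invented for illustration).
cat > /tmp/alert_OLDDB.log <<'EOF'
Thu Feb 04 18:01:12 2010
Shutting down instance (immediate)
Thu Feb 04 18:01:20 2010
Instance shutdown complete
EOF

# The line before the shutdown message is the timestamp:
grep -B1 'Shutting down instance' /tmp/alert_OLDDB.log | head -1
```

Here the last shutdown was in February 2010, which would suggest the database is a candidate for removal.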

(ii) Listener log
Search the listener log for the last entries for that database. Even if the database is shut down, this will tell you the last time that anyone was using it.  Be careful not to conclude that someone is using the database just because an automated application process or reporting tool connects to it regularly.  If individual users are not connecting to the database, then it may not be in use anymore.

Note: Double-check with any reports or application support teams as well, because the database may just be a repository, in which case individual user connections could be rare.
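As a concrete sketch (the path and log entries below are invented for illustration), you can grep the listener log for the service name and look at the newest match:

```shell
# Sample listener log entries (invented for illustration).
cat > /tmp/listener.log <<'EOF'
03-FEB-2010 11:02:44 * (CONNECT_DATA=(SERVICE_NAME=OLDDB)) * establish * OLDDB * 0
27-AUG-2011 09:15:02 * (CONNECT_DATA=(SERVICE_NAME=ORCL11)) * establish * ORCL11 * 0
EOF

# Newest connection recorded for the OLDDB service:
grep 'SERVICE_NAME=OLDDB' /tmp/listener.log | tail -1
```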

(iii) Datafile timestamps
If the database is shut down, what is the last-modified date shown in the filesystem for the datafiles?  I’ve come across situations where you can’t find any entries for the SID in the listener log and the alert log for the database has been removed, so this is a good way to find out when the database was last open.

Note: This assumes that someone hasn’t simply copied the files from another location without preserving the original file timestamps.
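On Unix, the check itself is a one-liner (the datafile location below is hypothetical):

```shell
# Create two pretend datafiles with old timestamps (for illustration only).
mkdir -p /tmp/oradata/OLDDB
touch -t 201002041700 /tmp/oradata/OLDDB/system01.dbf
touch -t 201002041700 /tmp/oradata/OLDDB/users01.dbf

# A newest-first listing shows when the files last changed:
ls -lt /tmp/oradata/OLDDB/*.dbf
```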
(iv) Run AWR/ASH or Statspack reports
These will let you know if anyone has been using the database within the last week or longer - depending on the retention period configured for the database.  (If performance snapshots are not configured, then you may also find useful information on long-running transactions in V$SESSION_LONGOPS).

Note: Just because a database hasn’t been used in the last week, doesn’t mean to say it’s not required anymore.  It could be used for monthly, weekly or annual reporting, so you may not see regular activity.
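If you’re licensed for the Diagnostics Pack, a quick way to see how far back the retained snapshots go is a sketch along these lines (run as a suitably privileged user):

```sql
-- Oldest and newest AWR snapshots currently retained
select min(begin_interval_time) oldest_snapshot,
       max(end_interval_time)   newest_snapshot
from   dba_hist_snapshot;
```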
(v) Database Auditing
If database auditing has been enabled, it will give you an idea of who has been accessing the database. (e.g. query views such as DBA_AUDIT_SESSION and DBA_AUDIT_TRAIL).
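For example, if CREATE SESSION auditing is enabled, a query along these lines shows the most recent audited logon per user (a sketch; it requires the AUDIT_TRAIL parameter to be set and SELECT access on the view):

```sql
-- Most recent audited logon per database user
select username, max(timestamp) last_logon
from   dba_audit_session
group  by username
order  by last_logon desc;
```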
(vi) Standby databases
It’s worth running the query SELECT DATABASE_ROLE FROM V$DATABASE; If this returns “PHYSICAL STANDBY” or “LOGICAL STANDBY”, then it could mean that someone has deleted a primary database in the past, but neglected to remove the associated standby database. 
 

Before deleting any databases, it would be advisable to email anyone that may have an interest in the database and then ensure that a copy of the database files and configuration information is archived to tape or other storage for a pre-defined period of time.  Ensure that you have approval from all interested parties before carrying out any work like this, which could have an impact on the business.

Categories: APPS Blogs

Three New free of charge Oracle White Papers released for August!!!

OracleContractors - Mon, 2011-08-15 07:14

Hi all,

Many thanks to the following authors for their White Papers; your contributions are much appreciated and help to share Oracle knowledge on a global scale:

Martin Dvorak - “How to Plan & Deliver Oracle eBusiness Suite Training with UPK”

Claire Aukett - “Absence Management using R12 Oracle HCM & CRM”

Elwyn Lloyd Jones - “Upgrade Strategies for OWB Environments”

If you would like a copy of any of these papers, please register with the White Paper area of our website, where you can gain access to our full White Paper library. If you are interested in becoming a White Paper author yourself, please contact me at: kirsten.campbell@oraclecontractors.com

More papers are being released in September, so watch this space!

Categories: APPS Blogs

To OCP or not to OCP, that is the question

OracleContractors - Sat, 2011-08-13 03:09

Over the years, the topic of whether or not to take the Oracle Certified Professional (OCP) exams has been discussed many times. A large number of clients and agencies now regularly ask for candidates who are OCP qualified.

The main arguments against the exams seem to be along the following lines:

 

The exams only deal with theoretical situations. You can’t beat real-life experience. 
This is true, but the exams do demonstrate that you are able to understand technical issues.  In order to resolve problems, you need to know how the software works.  You also need real-life experience of using your theoretical knowledge in a practical manner, before you can become an effective DBA.

Outside of a test laboratory or classroom, you have real users, applications and software from multiple vendors. Once exposed to these environments you become a much better DBA.

 
The exam is just a memory test.
That’s true to some extent, but you have to understand the question and which of the possible answers is correct - you still have to understand what you’ve remembered.  Even though you may forget the exam topics over time, at least you have positive proof that, at the time you took the exam, you knew that area of Oracle in detail.

 

I don’t need to take the exams to show that I keep up to date.
Whilst you can just read the documentation, the exams prove that you’ve made the effort to keep your skills current. Otherwise, everyone else just has your word for it.

The other issue is that you can read the documentation without understanding it properly. Passing the exam is proof that you understood the concepts in sufficient depth.

Taking the exams also provides a more focused way of keeping up to date.

 

 
Why bother learning about lots of features that you’re never going to use?
There are lots of features that you may never use, but if a new problem arises and you’ve kept up to date, then you’re aware of all the possible solutions.  You don’t always have hours or days to go away and research all the available options. Even a high-level overview of a solution can mean that you not only resolve issues more quickly, but are more likely to come up with the most effective solution.  If you aren’t aware of other solutions, you never will use them - you’ll just end up doing things the same way they’ve been done for years.

Another reason for learning about many features is that, unless you can predict the future, how do you know which features you will never use?

Knowledge of lots of functionality is useful when resolving issues because more options that were once separate from the main database installation are now integrated into it. Sometimes these options can cause errors even though your application isn’t actually using them.

The exams also demonstrate that you’re interested enough in the technology to want to keep up to date. Nobody forces you to take them.

Whether or not you decide to take the exams is a personal choice, but I would say that they can be useful as a starting point to differentiate between two DBAs who have a similar level of experience.  There is still no substitute for real-world experience, and just because someone passes the exam doesn’t necessarily mean that they’ll be a better DBA.

(In case you’re wondering, I have taken the exams!)

Categories: APPS Blogs

Tracing ODBC Connections to an Oracle Database

OracleContractors - Thu, 2011-08-04 23:06

Various applications can be configured to connect to an Oracle database using an ODBC connection. When there are problems with the connection, it can sometimes be useful to enable ODBC tracing. 

This is a pretty straightforward task and can often highlight useful information to diagnose issues such as incorrect ODBC drivers or driver versions, or attempting to use incorrect database connection information. 

The Scenario
————-
To demonstrate ODBC tracing, we’ll first log into an Oracle 11.2.0.1.0 Enterprise Edition database called “ORCL11” and create an account called “odbc1”:

create user odbc1 identified by odbc1;

(Note: You may want to make your password more secure than this! Remember also that 11g has case-sensitive passwords by default)

grant create session, create table to odbc1;
alter user odbc1 default tablespace users;
alter user odbc1 quota 10M on users;

Next we’ll connect as our new user and create a test table with a small amount of data.

connect  odbc1/odbc1

create table odbc_test_tab (col1 varchar2(40));
insert into odbc_test_tab values ('TEST');
insert into odbc_test_tab values ('TEST2');
commit;


Then we create an ODBC connection to our ORCL11 database.  Create the connection using a System DSN called “EXCEL_TEST11”. Use the Oracle ODBC Driver 11.2.00.01 and the odbc1 database account to connect.

The last step is to create a new Excel 2010 spreadsheet called “odbctesting.xlsx” 

Note: To keep this post brief, I haven’t included full details of the steps to create the ODBC connection, or of setting up the connection in Excel. If anyone wants detailed instructions on how to do this, please let me know.  For the Excel connection, the main steps are to go to the Data tab - “From Other Sources” - “From Data Connection Wizard” - “ODBC DSN” - Next, and then select the “EXCEL_TEST11” ODBC connection.  (Even though you’ll see a large listing of database objects, ODBC1.ODBC_TEST_TAB is in the list - near the end. All the other objects are database views and tables to which PUBLIC - i.e. all users - has been granted access.)  Don’t select the option to save the password to the file.
 

Turning on tracing
——————
Next we’ll turn on ODBC tracing, so that we can see what’s happening when the connection is being made. From a Windows XP client, all you need to do is:

Start - control panel - Administrative Tools - Data Sources (ODBC)   (or you can go  Start - run - odbcad32)

Then: Go to the “Tracing” tab - Click on the “Start Tracing Now” button

Once you’ve clicked on the “Start Tracing Now” button, you’ll notice it will change to be a “Stop Tracing Now” button - Apply - OK.   This closes the ODBC Data Source Administrator.
Notes:
(i) To change the location of the logfile, click on the “Browse” button on the Tracing tab. You can also change the name of the log file. Then click Save.
(ii) Be aware, that this will turn on tracing for ALL ODBC connections running on this client.
(iii) Tracing could have a serious performance impact on your application, so only enable it if necessary.
(iv) Microsoft support article ID: 942976 notes that 64-bit versions of windows have two versions of the ODBC Administrator tool:

%systemdrive%\Windows\SysWoW64 folder   - 32-bit version of Odbcad32.exe
%systemdrive%\Windows\System32 folder   - 64-bit version of Odbcad32.exe

If running odbcad32 to edit 32-bit DSNs, specify the full path to the executable. 32-bit System DSNs will only appear in the 32-bit version of Odbcad32.exe, and 64-bit System DSNs will only appear in the 64-bit version.  However, be aware that User DSNs will appear in both versions. Please refer to the Microsoft support note for more details.
 

Viewing a successful connection
——————————-
To see a successful connection, we can open our spreadsheet and select the “Refresh All” option on the Data tab.
The ODBC tracing logfile contains a large amount of information. An extract is shown below. (I’ve added comments prefixed by “#”, but you obviously won’t see these in the logfile.)


e               8fc-dd0 EXIT  SQLAllocConnect  with return code 0 (SQL_SUCCESS)   # successfully connected to the database.

  WCHAR *             0x03B9F460 [      37] "SELECT * FROM "ODBC1"."ODBC_TEST_TAB""  # select statement to run against our table.

e               8fc-dd0 EXIT  SQLDescribeColW  with return code 0 (SQL_SUCCESS) # Description of column data within the table follows
  ...
  WCHAR *             0x0013FA20 [       4] "COL1"                # Column name

  SQLULEN *           0x024FEA68 (40)                             # Column length

e               8fc-dd0 EXIT  SQLExecDirectW  with return code 0 (SQL_SUCCESS)  # Successfully executed SQL select statement

e               8fc-dd0 EXIT  SQLFetch  with return code 0 (SQL_SUCCESS) # Successful fetch of data from the first row in our table

e               8fc-dd0 EXIT  SQLFetch  with return code 0 (SQL_SUCCESS) # Successful fetch of data from the second row in our table

e               8fc-dd0 EXIT  SQLFetch  with return code 100 (SQL_NO_DATA_FOUND) # No more data to be fetched from the table - there are only 2 rows.

e               8fc-dd0 EXIT  SQLDisconnect  with return code 0 (SQL_SUCCESS)  # Disconnect from the database
 

Viewing an unsuccessful connection attempt due to an incorrect password
————————————————————————
Here we refresh our spreadsheet again, but deliberately use an incorrect password:

Excel error messages:

[Oracle][ODBC][Ora]ORA-01017: invalid username/password; logon denied
[Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed

ODBC Trace file error messages:

  DIAG [28000] [Oracle][ODBC][Ora]ORA-01017: invalid username/password; logon denied
 (1017)
  DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed (0)

 

Wrong Database Service name specified
————————————-
We refresh our connection, but put an incorrect entry in the “Service Name” box:

Excel error messages:

[Oracle][ODBC][Ora]ORA-12170: TNS:Connect timeout occurred
[Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed

ODBC Trace file error messages:
  DIAG [S1000] [Oracle][ODBC][Ora]ORA-12170: TNS:Connect timeout occurred (12170)
  DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed (0)

Listener is down
—————-
We shut down the listener and then refresh the spreadsheet. Excel error messages:

[Oracle][ODBC][Ora]ORA-12541: TNS:no listener

ODBC Trace file error messages:

DIAG [S1000] [Oracle][ODBC][Ora]ORA-12541: TNS:no listener (12541)
DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed (0)

 

 

Database is down
—————–
Startup the listener and shut down the 11g database. Refresh the Excel spreadsheet:
Excel error messages:

[Oracle][ODBC][Ora]ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

ODBC Trace file error messages:

DIAG [S1000] [Oracle][ODBC][Ora]ORA-12514: TNS:listener does not currently know of service requested in connect descriptor (12514)
DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver’s SQLSetConnectAttr failed (0)

Note: You might wonder why bother tracing ODBC if you get the error messages in Excel anyway.  The main purpose of this post is to illustrate that if you are using an application which doesn’t supply detailed messages, you may be able to find the cause of any issues by turning on ODBC tracing.
 

 

Turning off ODBC Tracing
————————
To stop tracing, you can then just open up the ODBC Data Source Administrator as before - go to the “Tracing” tab and click on the “Stop Tracing Now” button - Apply - OK. 
Note: Don’t leave tracing turned on permanently, otherwise it could fill up the local drive on the client. ODBC tracing can generate a lot of information. Remember to delete or archive old trace files.

Other references:
http://support.microsoft.com/kb/268591/EN-US

Categories: APPS Blogs

How big is my Oracle database?

OracleContractors - Thu, 2011-07-28 14:50

People ask this question a lot on the internet, and most of the answers just focus on a query of dba_data_files, similar to the one shown below:

Datafiles
select sum(bytes)/1048576 "DATAFILES_SIZE_MB" from dba_data_files;
That’s fine to start with, but you should also include tempfiles, which are used when an operation such as a large sort is too big to fit into the relevant memory allocated to the session.
Tempfiles
select sum(bytes)/1048576 "TEMPFILES_SIZE_MB" from dba_temp_files;
 

Your redo logs can also use a large amount of disk space - especially if your database has more than the minimum of two redo log groups. (You may also have several members within each group.)
 

Redologs
select sum(bytes)/1048576 "REDOLOGS_SIZE_MB" from v$log;

The database obviously needs controlfiles to record information such as which datafiles belong to the database.  If your CONTROL_FILE_RECORD_KEEP_TIME is set to a large value, then your controlfiles can become quite large.

Controlfiles
select round(sum(block_size*file_size_blks)/1048576,2) "CONTROLFILESIZE_MB" from v$controlfile;
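Putting the four components so far together, a single query can give a rough core total. This is just a sketch; note that it multiplies each redo log group by its number of members, which the per-group query above does not:

```sql
-- Rough total of datafiles, tempfiles, redo logs and controlfiles (MB)
select (select nvl(sum(bytes),0)/1048576 from dba_data_files)
     + (select nvl(sum(bytes),0)/1048576 from dba_temp_files)
     + (select nvl(sum(bytes*members),0)/1048576 from v$log)
     + (select nvl(sum(block_size*file_size_blks),0)/1048576 from v$controlfile)
       "CORE_DATABASE_MB"
from dual;
```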

Flashback database (available from 10g onwards) is not enabled by default, but if it is enabled, this area can grow rapidly over time.

Flash Recovery Area
select * from v$recovery_file_dest; 
select * from v$flash_recovery_area_usage;  

These views will show sizing details and free space available. 

Note: If your backups are held outside of the flash recovery area, then you’ll also need to allow space for these. This will depend on your backup strategy and backup retention policy. (Export/datapump export dumpfiles also need to be planned for).

If you are using RMAN incremental backups and have block change tracking enabled, then include this file:

Block change tracking file
select filename, nvl(bytes/1048576,0) "BLOCK_CT_SIZE_MB" from v$block_change_tracking;
 

Files referenced by database directories or the utl_file_dir parameter
Your application may read from, or write to external files via database directories or the utl_file_dir parameter.

Other examples of using external directories are for

(a) External tables -

select a.owner||'.'||a.table_name||' stored in directory '||b.directory_path "EXTERNAL_TABLES"
from dba_external_locations a, dba_directories b
where a.directory_owner=b.owner
and a.directory_name=b.directory_name;
(b) If you are storing multiple versions of the same tablespace within a file group repository. (i.e. tablespace versioning).

select a.tablespace_name, a.version, a.file_group_owner, a.file_group_name,
b.file_name, b.file_directory
from dba_file_group_tablespaces a, dba_file_group_files b
where a.file_group_owner=b.file_group_owner
and a.file_group_name=b.file_group_name;
 

Miscellaneous files
There are a large number of other files which you could also include in your sizing if you wanted to, though most of these are really external to the database.
Examples include:

(a) The spfile/pfile and any ifile referenced files.
(b) Any external scheduler jobs (i.e. program_type=’EXECUTABLE’ and program_action which points to a shell script).
(c) Configuration files such as Oracle wallet files and database gateway/hs services files.                   

(d) Oracle networking files. (e.g. tnsnames.ora, sqlnet.ora, listener.ora)
(e) Passwordfile.
(f) Any application code that needs to be deployed to the database server.
(g) Any database management or monitoring scripts that need to be on the server. 
(h) Files referenced by the audit_file_dest parameter, if audit_trail is set to use the “OS” or “XML” options.
(i) Archived redo logs and standby redo logs. Be aware of space usage related to the workload of your database and whether or not you have multiple destinations defined.
(j) Any software that needs to be deployed to the server. (e.g. The oracle software itself takes up several gigabytes of space).

If you wanted to be very precise, you could remove unused space from the calculations above, but I haven’t done that here, in order to keep things more straightforward. In real life you would probably be better off leaving the extra space available to allow for future growth of the database - which you should also plan into your size calculations.

As you can see, calculating size requirements for an Oracle database is not always as simple as you would think.  I’ve tried to include all elements in this post, though in reality a lot of them won’t apply to most databases. Please feel free to share any other items not on the list, that you feel should be included.

Categories: APPS Blogs

ODTUG Kscope 2011

Patrick Barel - Fri, 2011-07-08 05:07


Replace by position

Patrick Barel - Fri, 2011-07-08 05:07

Have you ever wanted to replace a character in a string just by position? I needed this the other day and, as far as I know, there is no built-in way in Oracle PL/SQL to do it, so I decided to write it myself.

It is really simple code and it probably needs improvement, but it works for now. Any suggestions for improvement are of course very welcome in the comments.

CREATE OR REPLACE FUNCTION replacepos
( source_in      IN VARCHAR2
, replacechar_in IN VARCHAR2
, position_in    IN NUMBER) RETURN VARCHAR2 IS
  l_returnvalue VARCHAR2(32767);
BEGIN
  -- copy from the source string up to, but not including,
  -- the character position to be replaced
  l_returnvalue := substr( str1 => source_in
                         , pos  => 1
                         , len  => position_in - 1);
  -- add the replacement character
  -- (just a single character, but more can be sent in,
  --  so substring the parameter)
  l_returnvalue := l_returnvalue ||
                   substr( str1 => replacechar_in
                         , pos  => 1
                         , len  => 1);
  -- copy the rest of the source string
  l_returnvalue := l_returnvalue ||
                   substr( str1 => source_in
                         , pos  => position_in + 1);
  RETURN l_returnvalue;
END replacepos;
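A quick check that the function does what we want (assuming it compiles in your schema):

```sql
select replacepos('ABCDE', 'X', 3) from dual;
-- returns ABXDE
```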

[Book Review] jQuery Pocket Reference by David Flanagan

Patrick Barel - Fri, 2011-07-08 05:07

jQuery is the “write less, do more” JavaScript library. Its powerful features and ease of use have made it the most popular client-side JavaScript framework for the Web. This book is jQuery’s trusty companion: the definitive “read less, learn more” guide to the library. jQuery Pocket Reference explains everything you need to know about jQuery, completely and comprehensively.

I decided to review the jQuery Pocket Reference. As the title shows, this should be treated as a reference rather than a book you read from start to finish. I read the first couple of chapters and they taught me a lot about the way jQuery works. I think it is a very powerful way to interact with webpages and it should come in handy while doing APEX development, especially the way you can respond to user actions, making your page more interactive and thus more attractive to the user.
The rest of the book should be treated as it is intended: as a reference rather than a book to read from A through Z. Knowledge from it should come in handy while developing APEX applications, especially when implementing Dynamic Actions.
Part of the book consists of an extensive description of every available function - at least, the ones available at the time of writing. All the available selectors and selector functions are also described.
There is a full chapter on creating plugins. It is good to know how to create a plugin in jQuery, to increase your knowledge of how this can be used when building an APEX plugin around it.
When creating specials in APEX, like dynamic actions or plugins, it is good to know at least a bit of how stuff works, and jQuery becomes ever more important when building web interfaces. Whether you build your applications completely by hand using plain HTML and JavaScript, or use PHP or APEX or whatever interface, almost every one of them uses or can use jQuery.
This book helps in understanding the language and its design a bit better. Of course you can use the internet to find all the information you need, but if you are a bit like me, you tend to get lost in all the information out there, and often it is really hard to understand what is being written and what it is meant to do.

Analytic vs Hierarchical

Patrick Barel - Fri, 2011-07-08 05:07

While playing around with analytic functions combined with a hierarchical query, I stumbled upon some strange behaviour I cannot explain (yet). The Oracle documentation states the following:

Analytic functions are the last set of operations performed in a query except for the final ORDER BY clause.

Right, that would mean I can apply something analytic and something hierarchical in one query. But when I tried it, it didn’t work the way I expected.

First, I started off with a simple hierarchical query:

SELECT LEVEL elevel
,sys_connect_by_path(e.ename, '/') epath
,e.*
FROM emp e
CONNECT BY PRIOR e.empno = e.mgr
START WITH e.mgr IS NULL;

[Screenshot: query results]

Now I want to apply an Analytic function. I want to know what the next level is.

SELECT LEVEL elevel
,sys_connect_by_path(e.ename, '/') epath
,lead(LEVEL) over(PARTITION BY 1 ORDER BY 1) ellevel
,e.*
FROM emp e
CONNECT BY PRIOR e.empno = e.mgr
START WITH e.mgr IS NULL;

[Screenshot: query results - the epath column is now empty]

Hey, where did my path value go? It seems that analytic functions interfere with the hierarchical operators used in the same query. I do not understand how this happens, but I figured out a workaround:

WITH emp_hier AS
(SELECT LEVEL elevel
,sys_connect_by_path(e.ename, '/') epath
,e.*
FROM emp e
CONNECT BY PRIOR e.empno = e.mgr
START WITH e.mgr IS NULL)
SELECT lead(elevel) over(PARTITION BY 1 ORDER BY 1) ellevel
,e2.*
FROM emp_hier e2;

[Screenshot: query results]

As it turns out, I can force the SQL engine to apply an analytic function last if I make sure the result set is created using subquery factoring.

XOR in SQL

Patrick Barel - Fri, 2011-07-08 05:07

If you have ever tried to implement a requirement that says either one field is filled or the other, but not both and not neither, then you might have thought about using an XOR function. Unfortunately this is not implemented in Oracle SQL.

First of all, let’s review some of the truth tables that are implemented in SQL.

AND
left   right  result
false  false  false
false  true   false
true   false  false
true   true   true

OR
left   right  result
false  false  false
false  true   true
true   false  true
true   true   true

NOT
left   result
false  true
true   false

The way XOR should be implemented is:

XOR
left   right  result
false  false  false
false  true   true
true   false  true
true   true   false

In PL/SQL there is a function available that implements this truth table. It’s in the STANDARD package and looks like this:

desc sys.standard.XOR;
Parameter Type    Mode Default?
--------- ------- ---- -------- 
(RESULT)  BOOLEAN
LEFT      BOOLEAN IN
RIGHT     BOOLEAN IN            

Let’s create a simple table and fill it up for testing:

create table t
(left  number
,right number);

insert into t (left, right) values (0,0);
insert into t (left, right) values (0,1);
insert into t (left, right) values (1,0);
insert into t (left, right) values (1,1);
commit;

An idea would be to just call this function in a query like this:

select * from t
where sys.standard.xor((left <> 0), (right <> 0))

But, unfortunately the boolean datatype has not been implemented in SQL, so this doesn’t work.

How to solve this issue then? We can of course write down all the rules in a where clause and make sure it returns the correct results:

select * from t
where (
        (    ((left is not null) and (left <> 0))
         and ((right is null) or (right = 0))
        )
        or
        (    ((left is null) or (left = 0))
         and ((right is not null) and (right <> 0))
        )
      )

That is one ugly query. How can we rewrite the XOR function using just the AND and OR functions? The OR function gives almost everything we need, but it gives one result too many - and that extra result is exactly the outcome of applying the AND function to our dataset. So:

XOR = a OR b minus (a AND b)

select * from t
where (   ((left is not null) and (left <> 0))
       or ((right is not null) and (right <> 0))
      )
minus
select * from t
where (   ((left is not null) and (left <> 0))
      and ((right is not null) and (right <> 0))
      )

or

XOR = a OR b and NOT (a AND b)

select * from t
where (   ((left is not null) and (left <> 0))
       or ((right is not null) and (right <> 0))
      )
and not
      (   ((left is not null) and (left <> 0))
      and ((right is not null) and (right <> 0))
      )

I think these last two queries are more descriptive of what is being done. Which query to choose depends on how much data is in your table. In my example it doesn’t make any difference in time; I tested with a table of over 29,500 rows and it appears the minus operation is a bit faster than the version with the extra predicate.
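Another equivalent formulation, if you prefer a single predicate (a sketch of my own, not from the truth tables above), turns each condition into a 0/1 flag with CASE and requires that exactly one flag is set:

```sql
-- XOR: keep rows where exactly one of the two conditions is true
select *
from   t
where   (case when left  is not null and left  <> 0 then 1 else 0 end)
      + (case when right is not null and right <> 0 then 1 else 0 end) = 1;
```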

Rather simple code, but I hope it will help you a bit (it certainly helped me). I wrote this to make sure I don’t forget it ;-)

Dynamic ShowHide in APEX

Patrick Barel - Fri, 2011-07-08 05:07
[Screenshot: Region Display Selector region]

When you want to show and hide certain regions in APEX based on the user’s selection, there is of course the possibility to use the Region Display Selector as a region on your page. This selector controls all the other regions on the page that have the Region Display Selector attribute (in the region’s attributes section) set to YES. I don’t really like the way this region is built - that is, the way it looks when you run the page.

I wanted to have more control about the look and feel of my selector region.

[Screenshot: Region Display Selector bar]

First of all, I don’t want the ‘Show All’ option and I cannot find a way to turn this button off.

Next, I want to display the options below each other, instead of next to each other.

Then, I want to be able to add different images to the options.

[Screenshot: navigation list with images]

But doing so means I will have to do everything myself, which is not a bad thing, just takes some time to do it.

Enter Dynamic Actions. When I click an item, I want the appropriate region to be displayed and the other ones hidden. The first iteration of my test page included 5 dynamic actions, and in each one I showed the appropriate region and hid the others - so every dynamic action included 5 True Actions: 1 to show the region and 4 to hide the others. But since I am a developer, I am in essence lazy and I want to achieve SPOD (Single Point Of Definition).

What actually needs to be done is not ‘show the selected region and hide the others’, but ‘hide all regions, then show the selected region’. That means one ‘overall’ dynamic action that hides all the regions, plus one dynamic action per item. Luckily you can attach a single dynamic action to multiple items, so I created a single dynamic action on the click event of multiple items.

When.png

Too bad there is no multi-select or shuttle option for the selection of items, as there is for the affected item selection, but we can type the different item names separated by commas. When the user clicks on one of the items defined, all regions will be hidden. Then, after this single dynamic action, I need to create separate dynamic actions on every item to display the right region.

Dynamic Actions Order.png

Make sure the regions are hidden first and then the right region is shown. If you get this the other way around, nothing will be displayed at all, because the region will be displayed and, right after displaying it, hidden again because the other dynamic action says so.

When this was working I wanted to give some more feedback to the user when he/she hovers over one of the triggering items. This takes another two dynamic actions. One to underline the text when the user enters the item (with the mouse) and one to set the text to normal when the user leaves the item. I can use pretty much the same approach as I did with the hiding of the regions, but I don’t want the true action to affect all the items when pointing to a single one. Luckily APEX provides an option for the Affected Elements called Triggering Element.

Affected Elements.png

This way the chosen Setting will only be applied to the element that triggered this dynamic action.

You can view a demo for these dynamic actions at: http://apex.oracle.com/pls/apex/f?p=41376 or download the source from http://bar-solutions.com/weblog/wp-content/show-hide.zip and install it in your own workspace.

[Book Review] SQL Pocket Guide By Jonathan Gennick

Patrick Barel - Fri, 2011-07-08 05:07

I started reading this book by Jonathan Gennick about a week ago. After a couple of chapters I realized this book is more a reference than a book you should read from A through Z. I am really an Oracle guy. I think I know a lot about PL/SQL and I think I know a lot about SQL as well. It's fun to see how much of this knowledge can be applied to other databases. This reference is a nice book to get ideas on how other databases help us developers, and maybe even to recreate some of their functions in the Oracle environment.

This book shows most of the functions available in the databases it covers and how they differ from environment to environment. It describes just about all the functions you will ever need in your day-to-day work. When you need to do something that is not described in this book, you are probably better off diving into the specific documentation for your preferred database.

As said, this book is a good reference for your basic needs. If you need to make an application database independent (which IMHO is a really bad idea, but that's something else), you can use this reference to see what is supported in the different RDBMSs.

Pipelined Table Functions

Patrick Barel - Fri, 2011-07-08 05:07

I have had trouble with a certain view I have to create at the customer site where I am currently working. The view involves 3 SQL queries combined using UNION ALL, since the separate queries are mutually exclusive. Using UNION ALL makes it a bit faster, since Oracle doesn't have to do all the work of sorting and checking for duplicate rows.

Retrieving data from this view resulted in memory errors, since there is too much data for the server to produce and return to me. After checking all kinds of initialization parameters, I decided to try and build a couple of pipelined table functions that would retrieve the same data for me, assuming they wouldn't put such big pressure on memory usage.

A pipelined table function produces a single row of the result set and pipes it out of the function as soon as it is available. That way, only enough memory for this single row should be needed, instead of enough memory for all the rows of the entire result set.
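The row-at-a-time behaviour can be pictured with a generator in another language. A loose analogy in Python (not PL/SQL; the data is made up):

```python
def all_rows():
    # Materializes every row in memory at once, like a regular function
    # returning a whole collection.
    return [("SMITH", 800), ("ALLEN", 1600), ("WARD", 1250)]

def piped_rows():
    # Yields one row at a time, like PIPE ROW in a pipelined table
    # function: the caller consumes each row as it becomes available,
    # and the producer never holds the full result set itself.
    for row in all_rows():
        yield row

first = next(piped_rows())
print(first)  # ('SMITH', 800)
```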

Since the function gets called from SQL, the SQL engine needs to be aware of the result it produces. So I need to create a type that defines a row of the result. And since the function will return more than a single row, I also need to define a NESTED TABLE: a collection of these rows. Using these two types I can mimic the appearance of a table.

Now all I have to do is create functions that will give me the result I am interested in.

My ‘normal’ view looks like this:

create or replace view v_emps as
select e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm
  from emp e
 where e.deptno = 10
union all
select e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm
  from emp e
 where e.deptno = 20
union all
select e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm
  from emp e
 where e.deptno = 30
union all
select e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm
  from emp e
 where e.deptno = 40
/

In the following example I create a new view based on table functions for the EMP table. The plain SQL approach is better, easier to read and faster, but the example shows what needs to be done if you want to move over to table functions.

The way such a function looks is kind of strange at first sight:

function f1 return mynestedtabletype pipelined is
  <declaration>
  l_record myrecordtype;
begin
  loop
    <do stuff to define l_record>
    pipe row(l_record); -- send the row out
  end loop;
  return; -- return nothing but control
end f1;

First I create a type for the records I return from my function:

CREATE TYPE emp_t AS OBJECT
(
  empno    NUMBER(4),
  ename    VARCHAR2(10),
  job      VARCHAR2(9),
  mgr      NUMBER(4),
  hiredate DATE,
  sal      NUMBER(7, 2),
  comm     NUMBER(7, 2)
)
/

Then I create a nested table of the record type I just created. The functions will return (or say they will return) a variable of this type:

CREATE TYPE emps_tt AS TABLE OF emp_t
/

Next is the package which will contain the table functions. I create 4 functions to retrieve all the datasets I need.

CREATE OR REPLACE PACKAGE emp_tf IS
  -- Author  : Patrick Barel
  -- Purpose : Table Functions for EMP

  -- Public function and procedure declarations
  FUNCTION get_accounting RETURN emps_tt PIPELINED;

  FUNCTION get_research RETURN emps_tt PIPELINED;

  FUNCTION get_sales RETURN emps_tt PIPELINED;

  FUNCTION get_operations RETURN emps_tt PIPELINED;
END emp_tf;
/

CREATE OR REPLACE PACKAGE BODY emp_tf IS
  -- Function and procedure implementations
  FUNCTION get_accounting RETURN emps_tt
    PIPELINED IS
    CURSOR c_emps IS
      SELECT e.empno, e.ename, e.job, e.mgr
            ,e.hiredate, e.sal, e.comm
        FROM emp e
       WHERE e.deptno = 10;
    r_emps        c_emps%ROWTYPE;
    l_returnvalue emp_t;
  BEGIN
    OPEN c_emps;
    LOOP
      FETCH c_emps INTO r_emps;
      EXIT WHEN c_emps%NOTFOUND;
      l_returnvalue := emp_t(r_emps.empno, r_emps.ename, r_emps.job
                            ,r_emps.mgr, r_emps.hiredate
                            ,r_emps.sal, r_emps.comm);
      PIPE ROW(l_returnvalue);
    END LOOP;
    CLOSE c_emps;
    RETURN;
  END get_accounting;
  --
  FUNCTION get_research RETURN emps_tt
    PIPELINED IS
    CURSOR c_emps IS
      SELECT e.empno, e.ename, e.job, e.mgr
            ,e.hiredate, e.sal, e.comm
        FROM emp e
       WHERE e.deptno = 20;
    r_emps        c_emps%ROWTYPE;
    l_returnvalue emp_t;
  BEGIN
    OPEN c_emps;
    LOOP
      FETCH c_emps INTO r_emps;
      EXIT WHEN c_emps%NOTFOUND;
      l_returnvalue := emp_t(r_emps.empno, r_emps.ename, r_emps.job
                            ,r_emps.mgr, r_emps.hiredate
                            ,r_emps.sal, r_emps.comm);
      PIPE ROW(l_returnvalue);
    END LOOP;
    CLOSE c_emps;
    RETURN;
  END get_research;
  --
  FUNCTION get_sales RETURN emps_tt
    PIPELINED IS
    CURSOR c_emps IS
      SELECT e.empno, e.ename, e.job, e.mgr
            ,e.hiredate, e.sal, e.comm
        FROM emp e
       WHERE e.deptno = 30;
    r_emps        c_emps%ROWTYPE;
    l_returnvalue emp_t;
  BEGIN
    OPEN c_emps;
    LOOP
      FETCH c_emps INTO r_emps;
      EXIT WHEN c_emps%NOTFOUND;
      l_returnvalue := emp_t(r_emps.empno, r_emps.ename, r_emps.job
                            ,r_emps.mgr, r_emps.hiredate
                            ,r_emps.sal, r_emps.comm);
      PIPE ROW(l_returnvalue);
    END LOOP;
    CLOSE c_emps;
    RETURN;
  END get_sales;
  --
  FUNCTION get_operations RETURN emps_tt
    PIPELINED IS
    CURSOR c_emps IS
      SELECT e.empno, e.ename, e.job, e.mgr
            ,e.hiredate, e.sal, e.comm
        FROM emp e
       WHERE e.deptno = 40;
    r_emps        c_emps%ROWTYPE;
    l_returnvalue emp_t;
  BEGIN
    OPEN c_emps;
    LOOP
      FETCH c_emps INTO r_emps;
      EXIT WHEN c_emps%NOTFOUND;
      l_returnvalue := emp_t(r_emps.empno, r_emps.ename, r_emps.job
                            ,r_emps.mgr, r_emps.hiredate
                            ,r_emps.sal, r_emps.comm);
      PIPE ROW(l_returnvalue);
    END LOOP;
    CLOSE c_emps;
    RETURN;
  END get_operations;
  --
BEGIN
  NULL;
END emp_tf;
/

In the implementation of the functions I could of course use bulk operations to speed up the processing instead of the row-by-row (or slow-by-slow) approach I used here, but bulk operations use more memory (the retrieved data is loaded into memory at once), and that is exactly what I am trying to avoid. Using BULK COLLECT with a LIMIT clause might be an option, but that is something to explore later on.
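For what it's worth, the LIMIT idea is just bounded batching: fetch at most N rows at a time, so memory is capped at N rows rather than the whole result set. A language-neutral sketch in Python (standing in for BULK COLLECT ... LIMIT, with made-up data):

```python
def fetch_in_batches(cursor_rows, limit=100):
    """Yield rows in batches of at most `limit` rows, capping memory
    use at one batch, the way BULK COLLECT ... LIMIT does in PL/SQL."""
    batch = []
    for row in cursor_rows:
        batch.append(row)
        if len(batch) == limit:
            yield batch
            batch = []
    if batch:  # leftover rows smaller than one full batch
        yield batch

batches = list(fetch_in_batches(range(10), limit=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```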

The new view looks like this:

create or replace view v_emps_tf as
select * from table(emp_tf.get_accounting)
union all
select * from table(emp_tf.get_research)
union all
select * from table(emp_tf.get_sales)
union all
select * from table(emp_tf.get_operations)
/

Pretty much the same as the original one, but now based on my functions.

Let's take a look at the 'strange' things here. When you look at the function specification, it says it will return a nested table type, but in fact it returns nothing: every result gets piped out of the function (PIPE ROW) as soon as it is ready.

Another thing is the FROM part of the queries: we tell the SQL engine to treat the results of the function as if they were a table. For this we use the TABLE() operator.

Doing some checks (select * from v_emps minus select * from v_emps_tf) tells me the result of both views is the same, since no rows are returned. When I turn timing on, I notice that the 'normal' SQL approach is faster than the table function approach. But the issue wasn't speed; it was memory usage.

Dynamic Actions in APEX

Patrick Barel - Fri, 2011-07-08 05:07

After visiting the OPP/APEXPosed conference in Brussels, I decided to write a post on how to create a relatively simple dynamic action in APEX. When you are working with password fields, your only visual check that the text entered in both fields matches may be the number of characters entered. Of course you can check for equality on submit, but that may be a bit too late for the user.

For this post I will work with a text field instead of a password field, but both work the same.

First of all we create a page.

A simple blank page will do.

Give it a title.

Add a simple HTML region to hold the items.

Don’t use tabs (for this demo ;-))

Review what you just chose and press Finish.

The page should be created successfully:

The page rendering part should look something like this:

When you open the context menu on the region you can choose to create a Page Item:

For the demonstration we will choose Text Field (that way what we enter will be visible)

Give the item a name:

Give the item a label:

Review the chosen options:

The item doesn’t have to be conditionally displayed so you can just press ‘Create Item’ here:

Repeat these steps for the second page item:


Now the page rendering portion will look something like this:

Using the context menu we can Create a Dynamic Action.

Give it a name and possibly a sequence.

Choose the event that will fire this Dynamic Action. In this case I think On Key Release is the best choice. Choose the Selection Type; this action will rely on changes of items. Choose the items you want to fire this action. In Condition, choose the kind of condition you want to use: choose JavaScript Expression and type in your JavaScript expression.

document.getElementById('P100_TEXTFIELDA').value ==
document.getElementById('P100_TEXTFIELDB').value



Then select the kind of Dynamic Action you wish to create. We will be creating an Advanced Action.

Select what you will be doing when the condition evaluates to TRUE:

In this case we want to change the background color of the text field, so we choose Set Style. We also want to execute this action when the page is loaded.

When the condition evaluates to FALSE, we want to perform the same action, but with a different color.
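Put together, the TRUE and FALSE actions behave like this bit of plain JavaScript (a sketch only; the colors are my choice and the item names are made up, and APEX generates the real wiring for you):

```javascript
// Pick a background color depending on whether the two values match,
// mimicking the TRUE action (one color) and FALSE action (another).
function matchColor(valueA, valueB) {
  return valueA === valueB ? "lightgreen" : "salmon";
}

// In the browser, APEX would apply something like this on key release:
// document.getElementById("P100_TEXTFIELDB").style.backgroundColor =
//   matchColor(fieldA.value, fieldB.value);

console.log(matchColor("secret", "secret")); // lightgreen
console.log(matchColor("secret", "secre"));  // salmon
```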

Then select the kind of page elements you want to be affected by this dynamic action.

Next we choose on which fields this dynamic action will fire:


The Dynamic action now shows up in the Page Rendering section of our page.

Running the page shows our Dynamic Action in action:


I hope this helps your understanding of the process of building Dynamic Actions in APEX. Of course this is a pretty simple example and you can probably come up with better uses for Dynamic Actions, but as you can see, it is done with next to no coding (just the JavaScript for the condition took some code).

KScope 11

Patrick Barel - Wed, 2011-06-01 02:31

About 4 weeks from now KScope11 will be well underway. I am really looking forward to attending this conference. I think it is more ‘developer-centric’ than for instance Oracle Open World is. I still have to create my schedule for the days, but I know of at least one session where I will be present.

That will be my own session on Pipelined Table Functions. It will be on Tuesday, June 28th. Here’s the abstract of my presentation:

“If you can do it in SQL, use SQL.” But sometimes even the very powerful version of SQL that Oracle provides is not enough and you need more, like loops, conditions, etc. If you can make the output of a function like it’s a table, then you can use it in SQL and have access to all the power PL/SQL provides. After this session, you will know how to create functions that can be used as tables in SQL.

Before that I will do a preview session (or test run if you will) on Tuesday, June 14th at AMIS in the Netherlands. If you are not attending KScope11 in Long Beach and you live in or near the Netherlands then you are invited to join us. If you are attending KScope11 then you are of course invited to join me there.

Hope to meet you either in Nieuwegein or in Long Beach.

EBS Technology Stack Blog in transition

Steven Chan - Wed, 2011-03-16 10:17
I started this blog in 2006 as a personal experiment. Since then, this blog has morphed into a combination of a breaking news source and a wide-ranging reference library. Over the years, I've deliberately established an editorial policy of ensuring that we break EBS techstack news to you in real-time, free of hype and unvarnished by marketing considerations. It now includes contributions from almost thirty guest authors. We've collectively published about 800 articles, fielded over 5,200 comments, and handled countless emails from readers. We get millions of pageviews per year, and there are over a million incoming links to our published articles. This blog's success has been an object lesson in how a small, personal experiment can take on a life of its own. I still manage this blog in my free time, mostly, but it now lives in an odd grey area somewhere between my official job responsibilities and a personal project.

Things are about to change

I fully understand the importance of this site to you. Unfortunately, there are major changes underway that will dramatically affect this blog. I am working with our executive management team to investigate options for preserving both our ability to get breaking news to you and the reference library functions that you've come to rely upon.

In the meantime, I'm saddened to report that this blog is on hiatus. We won't be publishing new articles or announcements. Comments have been frozen. This is temporary, but I don't have a date yet for when we'll get back into our normal swing of things.

Stay tuned. I'll post more updates here as soon as possible.

Regards,
Steven Chan
Chief Editor
Categories: APPS Blogs

Be-Ice Consulting is now open in Belgium

Borkur Steingrimsson - Sat, 2011-03-12 09:29

First-class services for Oracle Business Intelligence in Belgium

Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

Steven Chan - Fri, 2011-02-25 11:01
We're very pleased to announce the release of Oracle E-Business Suite Plug-in 4.0, an integral part of Application Management Suite for Oracle E-Business Suite.

The management suite combines features in the standalone Application Management Pack (AMP) for Oracle E-Business Suite and Application Change Management Pack (ACMP) for Oracle E-Business Suite with Oracle's real user monitoring and configuration management capabilities. The features that were available in the standalone management packs are now packaged into the Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support.

The Oracle E-Business Suite Plug-in is released via patch 8333939. For the AMP and ACMP installation guide, see:

* Getting Started with Oracle E-Business Suite Plug-in Release 4.0 (Note 1224313.1)
Categories: APPS Blogs

New Oracle E-Business Suite R12 OS and Tools Requirements on IBM AIX on Power Systems

Steven Chan - Thu, 2011-02-24 13:08
IBM has announced May 1st, 2011 as the end of support for Version 8 of the IBM XL C/C++ compiler currently used for Release 12 builds and patching. The target date of the switchover (May 1st, 2011) corresponds to when this older compiler will no longer be supported by IBM.

Beginning on May 1st, 2011, Oracle E-Business Suite patches for Release 12 (12.0, 12.1) on the IBM AIX on Power Systems platform will be built with Version 9 of the IBM XL C/C++ compiler. Customers who plan to patch or upgrade their E-Business Suite R12 environments after May 1st, 2011 must meet all the new requirements prior to applying new patches or upgrades.

John Abraham
Categories: APPS Blogs

Windows 7 Service Pack 1 Certified with E-Business Suite

Steven Chan - Wed, 2011-02-23 12:25
There are three possible configurations for Windows 7 desktop clients:

1. 32-bit Windows 7, 32-bit browsers, 32-bit JRE
2. 64-bit Windows 7, 32-bit browsers, 32-bit JRE
3. 64-bit Windows 7, 64-bit browsers, 64-bit JRE

We certified the first configuration in December 2009: E-Business Suite with 32-bit Windows 7 desktop clients running 32-bit versions of Internet Explorer (IE) and Firefox with the 32-bit JRE. We certified the second configuration in September 2010: E-Business Suite with 64-bit Windows 7 desktop clients running 32-bit versions of IE and Firefox with the 32-bit JRE.

I'm pleased to announce that Microsoft Windows 7 Service Pack 1 (SP1) is now certified with both of those configurations for Oracle E-Business Suite 11i and Oracle E-Business Suite R12 with the following minimum requirements:
Categories: APPS Blogs