Feed aggregator

Check out the ADF content at this year's ODTUG KScope11 conference

Chris Muir - Fri, 2011-03-04 01:47
For all ADF developers, I'd like to point out that this year's ODTUG Kaleidoscope '11 conference in Long Beach, USA has a substantial number of ADF presentations. In fact quite a few active members of the ADF EMG will be presenting this year, which is great.

For anyone who doesn't know much about the KScope events, ODTUG's conference series highlights real-world experience from developers who are actually using Oracle products in the field. For anyone who doesn't consider themselves an expert, the KScope conference provides a great opportunity to learn tips and tricks from the pros on how to get your ADF applications from development to production.

In turn, for Fusion Middleware there are just on 60 sessions covering topics from SOA to WebCenter and of course ADF. Further incentives include streams dedicated to PL/SQL, the database, APEX and more. (Oh, and apparently it's sunny and warm in Long Beach, and there are beaches too!)

If you're interested in the content, specifically the Fusion Middleware content, check out http://kscope11.com/fusion, then select either the Symposium or Presentations links.

We hope to see you there.

Oracle APEX 4.0 Cookbook

Marc Sewtz - Tue, 2011-03-01 16:39
Thanks to some recent train travel, I had a chance to catch up on some reading, and so I looked at the recently published Oracle APEX 4.0 Cookbook by Michel van Zoest and Marcel van der Plas. It's great to see more and more APEX books coming out, and of course there are already several excellent ones out there. This new one I found quite interesting because of the way it's structured. The authors have chapters covering the main areas of interest, i.e. creating basic applications, themes, websheets, web services, APIs, etc. Within each chapter, smaller sections lay out, step by step, how to go about e.g. creating a form, creating a report, or creating a theme, all broken down into very manageable pieces. Each section tells you what you need to get started, walks you through the steps, and then provides some background on how it all works.

So I think this is the kind of book an APEX developer might put on their desk (or e-reader) to quickly look up specific features while developing their applications. It provides great value to both experienced APEX developers and developers who just recently got started. It's not necessarily targeted at complete beginners though; it's not so much an introduction to APEX, as the authors pretty quickly get into more advanced features like APIs, AJAX, JavaScript, etc. So for anyone looking for an APEX book they can use to quickly look up certain features and how to use them, this book does an excellent job.

Games (Asian) Indians Play

Khanderao Kand - Mon, 2011-02-28 21:21
I recently read the book "Games Indians Play". It is not about Indian games like Kho-Kho, Kabaddi, etc., but about why Indians behave the way they behave. To be precise, the sub-title of the book is "Why we are the way we are". The author, Raghunathan, has used his studies in game theory and behavioral economics to make sense of Indians' behavior. It's a thought-provoking book which uses the "Prisoner's Dilemma", the famous problem from game theory, to elaborate how Indians are 'rational' but their self-centered rationalism undermines their long-term as well as community interests. His examples cover day-to-day scenarios and touch almost everyone: individuals, politicians, and the community at large. At the end of the book, the author proposes the crux of the "Bhagavad Gita" as a solution and explains it in the context of the game theory problem.

I'm Totally Lost: Which Conference Do I Go To for Hyperion and Oracle EPM/BI Content?

Look Smarter Than You Are - Mon, 2011-02-28 17:02
The final Hyperion Solutions conference (the great big conference Hyperion used to put on, with non-stop Hyperion content and over 4,000 attendees) was in the spring of 2007. Back then, everyone knew which conference to attend, because there was only one national conference (Solutions) and then a whole lot of regional HUG (Hyperion User Group) meetings. But then Oracle bought Hyperion and immediately disbanded the conference, leaving the user community in disarray.


There are now several options depending on what you're looking for.  While I could attempt to make some sense out of the whole conference jumble in a blog post, I decided it would be better explained in a webcast.  To that end, I'm devoting two webcasts this week to the question “Now that the Hyperion Solutions conference is gone, which conference should I attend?”


I'm going to compare the benefits of the better-known 2011 conferences:


I'm in a unique position to do this, because I don't work for Oracle and I have some ties to every one of these events (so you could say that while I'm biased, I realize the value each one can bring to the right audience). Usually, our webcasts are only open to Oracle customers (not partners), but in this case, I want everyone to know why you'd want to go to each of the conferences so you don't find yourself wasting money at a conference that's totally not right for you.


Click on the links below to sign up for either Tuesday or Wednesday's webcast:



I will spend around 45 minutes covering all the conferences and then take questions from the audience.  Before you sign up for one of the conferences, devote 45 minutes of your life to making sure you won't find yourself trapped in the 7th circle of hell (otherwise known as stuck at a conference you hate).
Categories: BI & Warehousing

Direct NFS Clonedb

Yasin Baskan - Thu, 2011-02-24 06:47
Direct NFS Clonedb is a feature in 11.2 that you can use to clone your databases. Kevin Closson explains what it is in this post. In his demo videos he uses a perl script to automate the generation of the necessary scripts. That script is not publicly available as of today, but MOS note 1210656.1 explains how to do the clone manually without it.

Tim Hall also has a step-by-step guide on the cloning process in this post. He also uses the perl script mentioned above.

We have been testing backups and clones on an Exadata connected to a 7410 ZFS Storage Appliance, so I wanted to share our test of Direct NFS Clonedb. This test is on a quarter rack X2-2 connected to a 7410 storage via InfiniBand. A RAC database will be cloned as a single-instance database and the clone will be opened on one db node.

Enable Direct NFS on Exadata


For security, the default Exadata installation currently disables some services needed by NFS. To use NFS on the Exadata db nodes we enabled those services first.


service portmap start
service nfslock start
chkconfig --level 345 portmap on
chkconfig --level 345 nfslock on

Recent Exadata installations come with Direct NFS (dNFS) enabled. You can check whether it is enabled by looking at the database alert log: when the database is started, you will see this line in the alert log if dNFS is enabled.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0

If it is not enabled you can use this command after stopping the database to enable it.

make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

Mount the NFS share

I am assuming the 7410 is configured and NFS sharing is up on it at this point. To mount the NFS share you can use a mount command like this on the Exadata db nodes.

mount 192.168.27.105:/export/backup1 /backup -o rw,bg,hard,intr,rsize=131072,wsize=131072,nodev,nosuid,actimeo=0

Back up the source database

You can use OS copies or RMAN image copies to back up the database for the cloning process. Here are the commands we used; do not forget to create the target directory beforehand.

sql 'alter database begin backup';
backup as copy database format '/backup/clone_backup/%U';
sql 'alter database end backup';

Prepare the clone db

To start the clone database we need an init.ora file and a create controlfile script. You can back up the source database's control file to a text file and use that. Run the following in the source database to get the script; it will produce a trace file under the udump directory (/u01/app/oracle/diag/rdbms/dbm/dbm1/trace on Exadata).

SQL> alter database backup controlfile to trace;

Database altered.

After editing, this is the script we can use for the clone database.

CREATE CONTROLFILE REUSE SET DATABASE "clone" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 1024
    MAXLOGMEMBERS 5
    MAXDATAFILES 32767
    MAXINSTANCES 32
    MAXLOGHISTORY 33012
LOGFILE
  GROUP 1 (
    '/u01/app/oradata/clone/redo01.log'
  ) SIZE 4096M BLOCKSIZE 512,
  GROUP 2 (
    '/u01/app/oradata/clone/redo02.log'
  ) SIZE 4096M BLOCKSIZE 512,
  GROUP 3 (
    '/u01/app/oradata/clone/redo03.log'
  ) SIZE 4096M BLOCKSIZE 512
DATAFILE
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-UNDOTBS1_FNO-3_bnm5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-SYSTEM_FNO-1_blm5ajro',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-SYSAUX_FNO-2_bmm5ajro',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-UNDOTBS2_FNO-4_bom5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-GNS_DATA01_FNO-7_bqm5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-5_bpm5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-9_bsm5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-10_btm5ajrp',
'/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-8_brm5ajrp'
CHARACTER SET AL32UTF8
;

/u01/app/oradata/clone is a directory on the local disks; you can also use NFS for the redo logs if you want to. The DATAFILE section lists the image copies we just produced using RMAN. You can get this list using the following SQL; be careful about the completion time, because you may have previous image copies in the same directory.

select name,completion_time from V$BACKUP_COPY_DETAILS;

Now we need an init.ora file; we can just copy the source database's file and edit it.

SQL> create pfile='/backup/clone.ora' from spfile;

File created.

Since the source database is a RAC database, you need to remove the parameters related to RAC (like cluster_database, etc.). You also need to change the paths to reflect the new clone database, as in the control_files parameter. Here is the control_files parameter in this test.

*.control_files='/u01/app/oradata/clone/control.ctl'

I also use a local directory, not NFS, for the control file.
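As a rough sketch (the parameter names shown are illustrative; your pfile will differ), the edited clone.ora might contain entries like these, with the RAC-specific lines removed and db_name matching the name used in the CREATE CONTROLFILE script:

```
# RAC-specific parameters removed from the copied pfile, e.g.:
#   *.cluster_database=true
#   dbm1.instance_number=1
#   dbm2.instance_number=2
#   dbm1.thread=1
#   dbm2.thread=2

*.db_name='clone'
*.control_files='/u01/app/oradata/clone/control.ctl'
```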

There is one parameter you need to add when cloning a RAC database to a single-instance database.

_no_recovery_through_resetlogs=TRUE

If you do not set this parameter, you will get an error when you try to open the clone database with resetlogs. MOS note 334899.1 explains why it needs to be set. This is the error you will get when opening the database without it:

RMAN> sql 'alter database open resetlogs';

sql statement: alter database open resetlogs
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 02/22/2011 16:13:07
RMAN-11003: failure during parse/execution of SQL statement: alter database open resetlogs
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled

Now we are ready to create the clone database.

Create the clone db

After preparing the init.ora file and the create controlfile script we can create the database.

export ORACLE_SID=clone
SQL> startup nomount pfile='/backup/clone.ora';
ORACLE instance started.

Total System Global Area 3991842816 bytes
Fixed Size                  2232648 bytes
Variable Size             754978488 bytes
Database Buffers         3087007744 bytes
Redo Buffers              147623936 bytes

SQL>  @cr_control

Control file created.

Now we need to rename the datafiles and point them to the NFS location we want; dbms_dnfs is the package for this.

begin
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-UNDOTBS1_FNO-3_bnm5ajrp','/backup/oradata/undotbs1.263.740679581');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-SYSTEM_FNO-1_blm5ajro','/backup/oradata/system.261.740679559');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-SYSAUX_FNO-2_bmm5ajro','/backup/oradata/sysaux.262.740679571');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-UNDOTBS2_FNO-4_bom5ajrp','/backup/oradata/undotbs2.265.740679601');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-GNS_DATA01_FNO-7_bqm5ajrp','/backup/oradata/gns_data01.264.741356977');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-5_bpm5ajrp','/backup/oradata/users.266.740679611');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-9_bsm5ajrp','/backup/oradata/users.274.741357097');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-10_btm5ajrp','/backup/oradata/users.275.741357121');
dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-8_brm5ajrp','/backup/oradata/users.273.741357075');
end;
/

The first parameter to dbms_dnfs.clonedb_renamefile is the backup image copy name we set in the controlfile script; the second is the target filename, which must reside on NFS. You can generate this script using the following SQL on the source database.

select 'dbms_dnfs.clonedb_renamefile('''||b.name||''',''/backup/oradata/'||
       substr(d.file_name,instr(d.file_name,'/',-1)+1)||''');'
from v$backup_copy_details b,dba_data_files d
where b.file#=d.file_id
and b.completion_time>sysdate-1/24;

If you have multiple image copies, be careful about the completion_time predicate. In this example I am looking at the image copies from the last hour.

If the target location in the dbms_dnfs call is not on NFS, here is what you will get:

SQL> exec dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-10_b1m59uem','/u01/app/oradata/clone/users.275.741357121');
BEGIN dbms_dnfs.clonedb_renamefile('/backup/clone_backup/data_D-DBM_I-1222199824_TS-USERS_FNO-10_b1m59uem','/u01/app/oradata/clone/users.275.741357121'); END;

*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-17513: dNFS package call failed
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 1

I got this error on my first try and searched MOS and Google for some time with no results. Then I realized that my target location was not on NFS but on local disk.

Now that the datafiles are renamed and pointing at the NFS location, we can open the clone database.

SQL> alter database open resetlogs;

Database altered.

At this point we have a clone database ready to use.

The target directory we used shows our files.

[oracle@dm01db01 oradata]$ ls -l
total 98523
-rw-r-----+ 1 oracle dba 10737426432 Feb 22 16:44 gns_data01.264.741356977
-rw-r-----+ 1 oracle dba 17179877376 Feb 22 16:44 sysaux.262.740679571
-rw-r-----+ 1 oracle dba 17179877376 Feb 22 16:44 system.261.740679559
-rw-r-----+ 1 oracle dba 17179877376 Feb 22 16:44 undotbs1.263.740679581
-rw-r-----+ 1 oracle dba 17179877376 Feb 22 16:44 undotbs2.265.740679601
-rw-r-----+ 1 oracle dba 10737426432 Feb 22 16:44 users.266.740679611
-rw-r-----+ 1 oracle dba  2147491840 Feb 22 16:44 users.273.741357075
-rw-r-----+ 1 oracle dba  2147491840 Feb 22 16:44 users.274.741357097
-rw-r-----+ 1 oracle dba  2147491840 Feb 22 16:44 users.275.741357121

Even though ls shows their sizes as equal to the source database's file sizes, du shows us the truth.

[oracle@dm01db01 oradata]$ du -hs *
129K    gns_data01.264.741356977
74M     sysaux.262.740679571
2.8M    system.261.740679559
15M     undotbs1.263.740679581
5.4M    undotbs2.265.740679601
129K    users.266.740679611
129K    users.273.741357075
129K    users.274.741357097
129K    users.275.741357121

The datafile sizes are minimal because we have not done any write activity on the clone database yet; they will grow after some activity.
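This is classic sparse-file (thin-provisioning) behavior: the logical file size is recorded up front, but blocks are only allocated as they are written. As a hypothetical illustration (not from the original post), this small Python sketch reproduces the ls-versus-du discrepancy with an ordinary sparse file on a local filesystem:

```python
import os
import tempfile

# Create a 100 MB sparse file: seek past the end and write a single byte.
# The filesystem records the full logical size but allocates almost no blocks.
path = os.path.join(tempfile.mkdtemp(), "sparse.dat")
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
logical = st.st_size           # what "ls -l" reports
physical = st.st_blocks * 512  # what "du" measures (allocated blocks)
print(logical, physical)
```

The clone datafiles behave the same way, which is why du only grows as the clone database writes to its blocks.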

Be Featured on My Next Presentation, Win Amazon Gift Certificates

OCP Advisor - Thu, 2011-02-24 02:16

My next presentation is at the annual Oracle Applications User Group (OAUG) conference called COLLABORATE11 in Orlando, FL.

The session is entitled:

"OCP Advisor's Tips on How To Become An Oracle Certified Professional"


If you are Oracle Certified, here is your chance to win Amazon.com gift certificates!


Please list your Oracle certifications, with the year you were certified, as a response to this post, or e-mail me at ocp.advisor@gmail.com.


I will select 4 certified professionals and include their certification stories in a detailed case study. If you have any tips to share, please post them here or send an e-mail. Responders will also be mentioned in the acknowledgement section of the whitepaper.


Thanks in advance and best wishes for your certification plans!

For a list of sessions being hosted by your blog author, please visit the COLLABORATE11 Session Search page and select Speaker Name: Mohan Dutt.

UKOUG Director Elections - Please Vote!

Lisa Dobson - Tue, 2011-02-22 14:38
There's only a few days left for voting in this year's Director Elections, with the process closing on February 28th. More information on the election process, including information on the role of Directors and this year's candidates, can be found here. Each UKOUG membership is entitled to one vote, with ballot papers being sent to the main contact for that membership.

OTN Developers Days: Hands-on Oracle Database 11g App Dev

Kuassi Mensah - Fri, 2011-02-18 19:41
OTN Dev Days Hands-on Oracle DB 11g is coming to you in Dallas (March 9), Toronto (March 30) and Chicago (April 25).
See you there.

ADF BC: Creating an "EXISTS" View Criteria

Chris Muir - Thu, 2011-02-17 19:10
The EXISTS keyword in SQL queries is an efficient mechanism for returning record sets from one dataset when they exist in another dataset. For example we can write queries like:

SELECT org.org_id, org.name FROM organisations org WHERE EXISTS
(SELECT 1 FROM events evt WHERE evt.org_id = org.org_id
AND evt.contact_name = 'Eddie Harris')

...which returns all organisations that have a related event whose contact is Eddie Harris.

ADF Business Components in JDeveloper 11g allows the creation of EXISTS subqueries via the View Object's named View Criteria feature. They're easy to implement if you already know how to create View Criteria, as long as you know one small trick.

Default Business Components

Given the SQL query above using the one-to-many organisations-to-events example, imagine we have default Entity Objects (EOs), EO Associations, View Objects (VOs) and VO Links, as seen in this picture:

View Object Link Accessors

When created via the Business Components from Table wizard, the VO Link OrgEvtFkLink created, based on the EO Association OrgEvtFkAssoc, will include Accessors options under its Relationship tab in the VO Link editor:

If you select the pencil icon next to the Accessors options it reveals the View Link Properties dialog:

...from which you can see the "Generate Accessor" option selected for "In View Object: Organisations View". While the selected state is the default option when created, it's this option that is essential for setting up the EXISTS View Criteria.

View Object View Criteria

Once you've ensured the Accessor option is set as described above, create a new View Criteria for the View Object (Organisations in our case) and configure it as follows. First, in the Create View Criteria dialog, change the View Criteria name to something that better reflects what the View Criteria will do for us:

Next select the Add Criteria button, which will create the basis of the expression used by the query:

On selecting the Attribute drop-down, you'll discover a list of attributes from the OrganisationsView VO. In this list you'll note an attribute called "EventsView". This attribute is only available because of the option you configured in the View Object Link Accessors above. If you hadn't gone with the default options, the EventsView attribute would not be available, and you would not be able to create the EXISTS View Criteria:

With the EventsView accessor selected, the dialog shows the EXISTS clause for the first time:

The only thing left to do is to select the Criteria Group expression of the EXISTS statement, which in the example above is the "Event No =" option, and change it, using the supplied options in the fields below, to the actual expression we want in the EXISTS clause. In our example this is matching the Events Contact Name to a String:

Once completed, on the right-hand side you can see the EXISTS subquery that the View Criteria will apply to the OrganisationsView VO when executed.

Note I've also turned off the Ignore Case and Ignore Null Values options.

Testing

In the Business Components Browser, on opening the OrganisationsView, and selecting the View Criteria via the Find button, we're first prompted for a value for the bind variable:

...which once supplied, returns the only matching Organisations record:

Thanks

Thanks to Eddie Harris at SAGE Computing Services for revealing the technique.

Whatever happened to Fusion Applications?

Andrews Consulting - Thu, 2011-02-17 12:48
Oracle Fusion Applications were announced with great fanfare at an event at San Francisco City Hall over five years ago.  The announcement created a high enough level of concern and confusion that a year later Oracle introduced the concept of “Applications Unlimited” – a promise to keep the traditional applications (including JDE and PeopleSoft) viable […]
Categories: APPS Blogs

Neighboring Siblings?

Ramkumar Menon - Wed, 2011-02-16 09:30


Found an interesting observation on C. M. Sperberg-McQueen's blog: XPath 1.0 defines, among other axes, ones that give access to the immediate parent and immediate child nodes, as well as to ancestor and descendant node-sets, but it does not provide explicit next-sibling or previous-sibling axes. The only way to reach the immediate siblings is via predicates: preceding-sibling::*[1] or following-sibling::*[1].
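As a quick hedged sketch (assuming the third-party lxml library, which implements full XPath 1.0), selecting an element's immediate siblings via those predicated axes looks like this:

```python
from lxml import etree  # third-party library; supports full XPath 1.0

# A tiny document: <r> has three children a, b, c.
root = etree.fromstring("<r><a/><b/><c/></r>")
b = root[1]  # the <b> element

# There are no next-sibling/previous-sibling axes, so take the first
# node on the sibling axes instead (preceding-sibling is a reverse axis,
# so position 1 is the nearest preceding sibling).
next_sib = b.xpath("following-sibling::*[1]")[0]
prev_sib = b.xpath("preceding-sibling::*[1]")[0]
print(prev_sib.tag, next_sib.tag)  # a c
```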


It’s Conference Season!

Cary Millsap - Mon, 2011-02-14 16:59
My favorite mode of life is being busy doing something that I enjoy and that I know, beyond a doubt, is the Right Thing to be doing. Any hour I get to spend in that zone is a precious gift.

I’ve been in that zone nearly continuously for the past three weeks. I’ve been doing two of my favorite things: lots of consulting work (helping, earning, and learning), and lots of software development work (which helps me help, earn, and learn even faster).

I’m looking forward to the next four weeks, too, because another Right Thing that I love to do is talk with people about software performance, and three of my favorite events where I can do that are coming right up:
  • RMOUG Training Days, Denver CO — I leave tomorrow. I’m looking forward to reuniting with lots of good friends. My stage time will be Wednesday, February 16th, when I’ll talk about material from my new “Mastering Performance with Extended SQL Trace” paper. 
  • NoCOUG Winter Conference, Pleasanton CA — I’ll be in the east Bay Area on Thursday, February 24th presenting the keynote address where I’ll discuss whether Exadata means never having to “tune” again and then spending two hours helping people to think clearly about performance.
  • Hotsos Symposium, Irving TX — I’ll present “Thinking Clearly about Performance” on Monday, March 7th. I love the agenda at this event. It’s a high quality lineup that is dedicated purely to Oracle software performance. This is one of the very few conferences where I can enjoy sitting and just watching for whole days at a time. If you are interested in Oracle system performance, do not miss this. 
Happy Valentine’s Day. I shall hope to see you soon.

Pages

Subscribe to Oracle FAQ aggregator