Feed aggregator

More information sources on ActiveRecord Oracle enhanced adapter

Raimonds Simanovskis - Fri, 2009-01-02 16:00

I’m glad to see that there are many users of ActiveRecord Oracle enhanced adapter, and comments on this blog are no longer the best way to communicate with them. Therefore I have created several other information exchange places which I hope will be more effective.

The latest addition is the wiki pages on GitHub, where I have put usage descriptions and examples as well as some troubleshooting hints that were previously scattered across the README file and different blog posts. This is the first place to look for information about Oracle enhanced adapter. And if you are a GitHub user, you can correct mistakes or add additional content yourself.

If you have a question or would like to discuss a feature, you can use the Google discussion group. I will also use this group for new release announcements, so subscribe to it if you would like to get Oracle enhanced adapter news.

If you would like to report a bug or request a new feature (a patch would be greatly appreciated), please use the Lighthouse issue tracker.

And the source code of Oracle enhanced adapter is still located in the GitHub repository. If you are a GitHub user, you can watch it or even fork it and experiment with new features.

And BTW, I just released Oracle enhanced adapter version 1.1.9 with some new features; it has also been tested with the latest Rails 2.2 release.

Categories: Development

HAPPY NEW YEAR !!!

Mihajlo Tekic - Thu, 2009-01-01 16:31
HAPPY NEW YEAR TO EVERYONE !!!

I WISH YOU AND YOUR FAMILIES HEALTHY, HAPPY AND SUCCESSFUL NEW YEAR !!!



The picture above is of my daughter Monika.

She is four months old and is the best thing that happened to me in 2008.

Oracle PeopleSoft hosted docs

Tahiti Views - Wed, 2008-12-31 13:32
Just in time for the new year! The Oracle PeopleSoft group now has their docs on Oracle.com in HTML format, hooked up to a Tahiti search: Oracle PeopleSoft Enterprise Hosted PeopleBooks.

NEW option in ADRCI purge command - UTSCDMP

Virag Sharma - Wed, 2008-12-31 11:40

There is a new option for the ADRCI purge command - UTSCDMP.


adrci> help purge

Usage: PURGE [[-i <id1> | <id1> <id2>] |
             [-age <mins> [-type ALERT|INCIDENT|TRACE|CDUMP|HM|UTSCDMP]]]:

Purpose: Purge the diagnostic data in the current ADR home. If no
         option is specified, the default purging policy will be used.

Options:
[-i id1 | id1 id2]: Users can input a single incident ID, or a range of incidents to purge.
[-age <mins>]: Users can specify the purging policy either for all the diagnostic data or the specified type. The data older than <mins> ago will be purged.

[-type ALERT|INCIDENT|TRACE|CDUMP|HM|UTSCDMP]:

Users can specify what type of data to be purged.
Examples:

purge
purge -i 123 456
purge -age 60 -type incident


There were some issues where the cdmp_* directories in the repository (= $DIAGNOSTIC_DEST/diag/rdbms/<database_name>/<instance_name>/bdump) were not being purged automatically. Running the following command removes the cdmp_* directories that are older than 3600 minutes:

adrci> purge -age 3600 -type UTSCDMP
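
If ADRCI knows about more than one ADR home, point it at the right one before purging. A minimal session sketch (the home path here is a made-up example, not from the original post):

adrci> show homes
ADR Homes:
diag/rdbms/orcl/orcl

adrci> set homepath diag/rdbms/orcl/orcl
adrci> purge -age 3600 -type UTSCDMP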

Categories: DBA Blogs

ADF 11 Course 12-16 January in The Netherlands

JHeadstart - Tue, 2008-12-23 23:06

Oracle University just released the ADF 11 course, covering many new ADF 11 features.
I will be teaching this course from 12 to 16 January 2009 in De Meern, the Netherlands.

There are still seats available. For a list of course topics and registration, use this link.

Categories: Development

Oracle Database Resource Manager 11g - Undocumented New Parameters

Aviad Elbaz - Mon, 2008-12-22 09:31

I've played around with Oracle Database Resource Manager in 10g. It's quite nice and can be very useful for systems with high CPU usage, but I found the inability to limit I/O a drawback, since in most cases I've faced, the need to limit I/O is greater than the need to limit CPU.

When you have, let's say, 8 CPUs on your machine, all 8 need to be 100% utilized by Oracle sessions before the Resource Manager starts limiting sessions. However, if your machine's I/O capability is 50 MB per second, it takes only one or two sessions performing intensive I/O (a batch job or heavy report) to make the database very slow.

In Oracle Database 11g Release 1, Resource Manager gained some new features related to I/O. So I installed 11g, ran some tests and found some interesting issues.

I'm not going to write about Resource Manager basics or about the 11g enhancements, as some great articles have already been published on them. For example, you can read Tim's blog post - "Resource Manager Enhancements in Oracle Database 11g Release 1".

But... I'm going to discuss one missing capability (in my opinion) that will hopefully become available in Oracle Database 11g Release 2, together with 2 new parameters which are already present but inactive and undocumented.

For those who are not familiar with Oracle Database Resource Manager, I'll try to give a short basic introduction:

Oracle Database Resource Manager helps us prioritize sessions to optimize resource allocation within our database by:

  1. Creating groups of sessions (Consumer Groups) based on similar resource requirements
  2. Allocating resources to those groups (Resource Plan Directives)
  3. Combining the directives into a Resource Plan - a container of Resource Plan Directives that allocate resources to Consumer Groups

Only one Resource Plan is active at a time.
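
To make the terminology concrete, here is a minimal sketch of these three steps; the names REPORTS_GROUP and DAY_PLAN are hypothetical examples, not from any real system:

begin
  -- all changes are staged in a pending area and validated on submit
  dbms_resource_manager.create_pending_area();

  -- step 1: a consumer group for sessions with similar requirements
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'REPORTS_GROUP',
    comment        => 'long-running reports');

  -- step 3: the plan that will contain the directives
  dbms_resource_manager.create_plan(
    plan    => 'DAY_PLAN',
    comment => 'daytime resource plan');

  -- step 2: directives allocating CPU between the groups
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'REPORTS_GROUP',
    comment          => 'reports get at most 20% CPU',
    mgmt_p1          => 20);

  -- every plan must also cover the built-in OTHER_GROUPS
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    mgmt_p1          => 80);

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

The plan is then activated with ALTER SYSTEM SET resource_manager_plan = 'DAY_PLAN';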

When Oracle Database 11g was introduced, some new I/O-related features for Resource Manager were revealed. Among them:

  1. I/O Calibration -
    A new procedure (CALIBRATE_IO) which helps assess the I/O capabilities of the database storage. Its output is max_iops, max_mbps and actual_latency.

  2. Per Session I/O Limits -
    Unlike previous versions, you can limit the I/O requests (or megabytes of I/O) a session may issue before it is moved to another consumer group; switch_io_megabytes and switch_io_reqs have been added to the CREATE_PLAN_DIRECTIVE procedure (see the sketch after this list).
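
For illustration, a sketch of such a directive, reusing the hypothetical DAY_PLAN and REPORTS_GROUP names from the earlier sketch (shown standalone - it would take the place of the plain CPU directive there; LOW_PRIORITY_GROUP is likewise made up, and the switch_* values are arbitrary):

begin
  dbms_resource_manager.create_pending_area();
  -- once a session in REPORTS_GROUP has issued 10,000 I/O requests
  -- or 1,000 MB of I/O, move it to LOW_PRIORITY_GROUP
  dbms_resource_manager.create_plan_directive(
    plan                => 'DAY_PLAN',
    group_or_subplan    => 'REPORTS_GROUP',
    comment             => 'demote I/O-intensive sessions',
    switch_group        => 'LOW_PRIORITY_GROUP',
    switch_io_reqs      => 10000,
    switch_io_megabytes => 1000);
  dbms_resource_manager.submit_pending_area();
end;
/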
     

Oracle has added the option to "capture" Oracle sessions by the I/O requests or the megabytes of I/O they issue, in order to move them to a lower-priority consumer group.

I have a very fundamental doubt about this enhancement, as I don't see the point of "capturing" an I/O-intensive session and moving it to a low-priority consumer group which can only have a CPU limit...?! The reason we "capture" this session is the amount of I/O it performs, yet once we move it to a low-priority consumer group we can only limit its CPU resources. We can't limit the amount of I/O for a Consumer Group.

It would have been very useful if Oracle had added the ability to limit I/O for a Consumer Group, just as we can limit CPU (with mgmt_pN). What is missing is the ability to limit I/O for a specific Consumer Group in terms of maximum I/O requests per second or maximum megabytes per second.

Will Oracle enhance Resource Manager in 11g Release 2 to provide this capability?

I don't have a confident answer to that question, but I assume they will.

While playing around, I noticed two new parameters to the CREATE_PLAN procedure - MAX_IOPS and MAX_MBPS. At first sight they looked like the answer to my question - the ability to limit I/O for sessions within a plan -

but they're not...

These two parameters are undocumented and entirely absent from the Oracle 11g Release 1 documentation, but they are available in the 11g Release 1 database:

-- create a new resource plan
--
-- Input arguments:
--   plan                       - name of resource plan
--   comment                    - user's comment
--   cpu_mth                    - allocation method for CPU resources
--                                (deprecated)
--   active_sess_pool_mth       - allocation method for max. active sessions
--   parallel_degree_limit_mth  - allocation method for degree of parallelism
--   queueing_mth               - type of queueing policy to use
--   mgmt_mth                   - allocation method for CPU and I/O resources
--   sub_plan                   - whether plan is sub_plan
--   max_iops                   - maximum I/O requests per second
--   max_mbps                   - maximum megabytes of I/O per second
--

PROCEDURE create_plan(plan IN VARCHAR2,
                      comment IN VARCHAR2,
                      cpu_mth IN VARCHAR2 DEFAULT NULL,
                      active_sess_pool_mth IN VARCHAR2
                        DEFAULT 'ACTIVE_SESS_POOL_ABSOLUTE',
                      parallel_degree_limit_mth IN VARCHAR2
                        DEFAULT 'PARALLEL_DEGREE_LIMIT_ABSOLUTE',
                      queueing_mth IN VARCHAR2 DEFAULT 'FIFO_TIMEOUT',
                      mgmt_mth IN VARCHAR2 DEFAULT 'EMPHASIS',
                      sub_plan IN BOOLEAN DEFAULT FALSE,
                      max_iops IN NUMBER DEFAULT NULL,
                      max_mbps IN NUMBER DEFAULT NULL);

I tried to create a new plan using each of these two parameters, but it returned an error for every value I tried.
It turns out that the only valid value for MAX_IOPS and MAX_MBPS is null.

begin
  dbms_resource_manager.create_plan(plan => 'PLAN_EX_1'
                                   ,comment => 'Plan Example 1'
                                   ,max_iops => 100);
end;
/
begin
*
ERROR at line 1:
ORA-29355: NULL or invalid MAX_IOPS argument specified
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 38
ORA-06512: at line 2

I confirmed this with Oracle Support, and their answer was:

"This is currently an expected behaviour, we can not explicitly set either max_iops or max_mbps to any value other than null, that's why these parameters are currently not included in the documentation."

So here is my guess:
It looks like Oracle is going to enhance Resource Manager to allow an I/O limit, but unlike the CPU limit set through a Plan Directive (mgmt_pN), it will be a limit at the Plan level, so you can create a sub-plan with a limited allocation of I/O resources. Maybe the allocation method parameter (mgmt_mth) will also get more valid options besides EMPHASIS and RATIO.

I'll keep track of this as soon as Oracle Database 11g Release 2 is announced, and I'll post an update.

You are welcome to leave a comment and/or share your opinion about this topic.

Aviad

Categories: APPS Blogs

Coding Horror: Hardware is Cheap, Programmers are Expensive

Tahiti Views - Mon, 2008-12-22 00:17
You've probably heard both sides of this argument: throw hardware at performance problems; no no, improve the code. Jeff Atwood at Coding Horror comes down on the "more hardware" side in this post: Coding Horror: Hardware is Cheap, Programmers are Expensive. Usually I agree with Jeff, but I...

I’m an Oracle ACE Award Winner now !

Sabdar Syed - Thu, 2008-12-18 03:35
Dear All,

I’m pleased to inform you all that I have received an email from the Oracle ACE Program Committee saying that I have been nominated for the prominent Oracle ACE Award. I’m honored to be the second person working in Saudi Arabia to become an Oracle ACE.

You may be curious to know who the first Oracle ACE working in Saudi Arabia is. Well, he is none other than my elder brother and mentor - Mr. Syed Jaffar Hussain. We are proud to be Oracle ACE brothers from one family, and I can say this kind of achievement is a unique combination, isn’t it?

Take a look at my Oracle ACE Profile – Sabdar Syed.

I would like to convey my sincere thanks to my elder brother, colleagues, ex-colleagues, blog viewers, friends and well-wishers for their support in attaining this award.

Thanks

Thanks to Mr. Mohan Dutt, OCP Advisor, for writing about my achievement, the Oracle ACE Award, on his blog - OCP Blog Community.

Take a look at - Featured Expert Wins Oracle ACE Award!

Your precious comments are welcomed!

Regards,
Sabdar Syed.
http://sabdarsyed.blogspot.com

Download Free Oracle Certification Dumps Here!!

Sabdar Syed - Wed, 2008-12-17 01:38

This blog post is for those who are desperately looking for free Oracle Certification dumps.

Well, you might be surprised to see that the message of the post is irrelevant to the subject. My whole intention is to draw your attention to how bad it is to cheat on the Oracle Certifications by simply reading exam dumps.

Mr. Paul Sorensen, Director of Oracle Certification, and other certification team members have launched the Oracle Certification Blog, where they blog about everything related to Oracle Certification. Interestingly, there are a couple of blog posts on "Cheating Hurts the Oracle Certification Program" and other topics.

For a list of blog posts on cheating, take a look at - Cheating Oracle Certifications. Do not forget to read the comments on every post in the above link.

Quick Links to Prepare for Oracle Certification

Keywords:

OCA, OCP, OCE, Dumps, Brain dumps, Practice Questions, Sample Questions, Cheat sheet, Test papers.

Regards,
Sabdar Syed.

10g optimizer case study: Runtime Execution issues with View merging

Gaurav Verma - Tue, 2008-12-16 10:46

Preface

After the 10g upgrade of our client’s 11i Applications system, we were a little gung-ho, but the self-congratulation was short-lived as we discovered that a bulk of the custom concurrent programs were taking longer.

Aswath Rao, who had solved many performance issues before using the HOTSOS tool, handled it smartly. In this article, we talk about how he approached the problem, and what else could have been looked into.

Essentially, after some SQL plan analysis, he found that the merging of some views was causing a poorly executing plan (which probably looked good to the 10g optimizer - the stats were recent) to be selected, and he finally got around it by using a couple of time-tested and clever hints for avoiding merging. He had to try and test quite a bit, but avoiding the merging worked.

First thoughts..

When I first heard about it, my first thoughts went to an initialization parameter called optimizer_secure_view_merging, and when I searched on Google, I came across http://oracledb.wordpress.com/2007/04/10/execution-plans-differents-with-different-users/, which talks about something similar. Interestingly, if you run the bde_chk_cbo.sql script (obtainable from Metalink), it says that this parameter should be set to FALSE. Note 216205.1 - Mandatory parameters for 11i Apps - corroborates the same. It was set to its expected value, in which Oracle does not use view merging or predicate move-around. Still, the view merging was definitely happening.

Where’s the bottleneck?

There was a custom concurrent program called XXMRPSTKTRANS whose runtime had suddenly changed (we had the history of previous concurrent requests). A SQL of the format INSERT INTO XXCI.XXMRP_STOCK_TRANSFER_RPT_ITEMS <followed by a select query> was taking approximately 97% of the total concurrent request’s runtime. The query used 967 million logical I/Os.

Let’s look at the Hotsos profiling output:

[Hotsos profiler output]

[skew graph]

Let’s look deeper...

From the SQL traces, it could be seen that this query was dragging us down:

SELECT PLANS.COMPILE_DESIGNATOR,
ITEMS.ORGANIZATION_ID,
ITEMS.ORGANIZATION_CODE,
ITEMS.PLANNER_CODE,
ITEMS.INVENTORY_ITEM_ID,
ITEMS.DESCRIPTION, ITEMS.FULL_LEAD_TIME,
MSS.SAFETY_STOCK_QUANTITY, NULL
FROM MRP_PLANS_SC_V PLANS,
MRP_SYSTEM_ITEMS ITEMS,
MTL_CATEGORY_SETS CATSETS,
MTL_CATEGORIES CAT,
MTL_ITEM_CATEGORIES ITEMC,
(SELECT INVENTORY_ITEM_ID, ORGANIZATION_ID,
SAFETY_STOCK_QUANTITY,EFFECTIVITY_DATE
FROM MTL_SAFETY_STOCKS MSS1
WHERE EFFECTIVITY_DATE = (SELECT MAX(EFFECTIVITY_DATE)
FROM MTL_SAFETY_STOCKS
WHERE INVENTORY_ITEM_ID = MSS1.INVENTORY_ITEM_ID
AND ORGANIZATION_ID = MSS1.ORGANIZATION_ID
GROUP BY INVENTORY_ITEM_ID, ORGANIZATION_ID ) ) MSS
WHERE PLANS.COMPILE_DESIGNATOR = ITEMS.COMPILE_DESIGNATOR
AND PLANS.PLANNED_ORGANIZATION = ITEMS.ORGANIZATION_ID
AND CATSETS.CATEGORY_SET_ID = ITEMC.CATEGORY_SET_ID
AND CATSETS.STRUCTURE_ID = CAT.STRUCTURE_ID
AND CAT.CATEGORY_ID = ITEMC.CATEGORY_ID
AND ITEMC.INVENTORY_ITEM_ID = ITEMS.INVENTORY_ITEM_ID
AND ITEMC.ORGANIZATION_ID = ITEMS.ORGANIZATION_ID
AND ITEMS.INVENTORY_ITEM_ID = MSS.INVENTORY_ITEM_ID(+)
AND ITEMS.ORGANIZATION_ID = MSS.ORGANIZATION_ID(+)
AND CATSETS.CATEGORY_SET_NAME = :B8
AND ( (:B7 IS NOT NULL AND ITEMS.PLANNER_CODE = :B7 ) OR (:B7 IS NULL) )
AND PLANS.COMPILE_DESIGNATOR = :B6
AND ((:B5 IS NOT NULL AND LENGTH(:B5 ) < 4 AND ITEMS.ORGANIZATION_CODE = :B5 )
OR (:B5 IS NOT NULL AND LENGTH(:B5 ) > 3 AND ITEMS.ORGANIZATION_CODE IN
(SELECT LOOKUP_CODE FROM
FND_LOOKUP_VALUES WHERE
LOOKUP_TYPE = :B5 AND
LANGUAGE = 'US' ) )
OR (:B5 IS NULL ) )
AND ((:B4 IS NOT NULL AND CAT.SEGMENT1 = :B4 )OR (:B4 IS NULL))
AND ((:B3 IS NOT NULL AND CAT.SEGMENT2 = :B3 )OR (:B3 IS NULL))
AND ((:B2 IS NOT NULL AND CAT.SEGMENT3 = :B2 )OR (:B2 IS NULL))
AND ((:B1 IS NOT NULL AND CAT.SEGMENT4 = :B1 ) OR (:B1 IS NULL));

Playing with plans – Worst, Better, Best..

Having determined the SQL on which future tuning efforts must be focused, let’s dive into the different SQL plans observed (with and without hints) and the accompanying logical I/O stats.



  • WITHOUT HINTS - 35 MILLION CONSISTENT GETS (Not good)

Let’s talk about the out-of-the-box plan (no hints) first. Obviously, there is a lot of detail here that is difficult to absorb at a single glance, but the key thing to note is that the cost is actually very low (103), so on paper this looks like a great plan. Sure, there are a couple of full table scans, but those tables are really small. The caveat with this plan is that its real-time performance is really bad due to heavy index scans, which further contribute to excessive CPU delays.

[Default plan]



  • ORDERED HINT - 15 MILLION CONSISTENT GETS

It got a little better with the ORDERED hint, although the plan cost went up a bit:

[ORDERED hint plan]



  • NO_MERGE(CATSETS) NO_MERGE(CAT) NO_MERGE(PLANS) - 26,952 CONSISTENT GETS

Better than the ORDERED hint was the NO_MERGE hint on each of the views, which brought the consistent gets all the way down from 35 million to ~27,000! Whoa, that was a drastic change. Note that the cost of this plan is 380, compared to the ORDERED plan’s 1634. Interesting observation.

[NO_MERGE hint plan]
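
For reference, the NO_MERGE hints sit right after the SELECT keyword of the query block that references the views; a heavily abbreviated sketch (the full select list and predicates are as shown in the trace above):

SELECT /*+ NO_MERGE(CATSETS) NO_MERGE(CAT) NO_MERGE(PLANS) */
       PLANS.COMPILE_DESIGNATOR,
       ITEMS.ORGANIZATION_ID,
       ...
FROM MRP_PLANS_SC_V PLANS,
     MRP_SYSTEM_ITEMS ITEMS,
     MTL_CATEGORY_SETS CATSETS,
     ...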

Moral of the story?

Well, the NO_MERGE hint definitely made things better when used on all the views and inline views in the query. The 10g optimizer was definitely having a challenge around this part. Also, while the compile-time plan was showing a lower cost, the runtime performance was very skewed.

I am also inclined to think that if we had played with the optimizer_index_cost_adj parameter a bit at the session level, we might have made some indexes look more expensive and hence pushed them down the optimizer’s priority list. But since this idea did not get tested, it’s just a thought at this point (it wouldn’t have been a sustainable solution anyway, but I’ve seen it be effective at the session level before, and it can be tried in unwieldy situations).
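
Trying that would amount to something like the following; the value 200 is purely illustrative (the default is 100, and higher values make index access paths look more expensive to the optimizer):

ALTER SESSION SET optimizer_index_cost_adj = 200;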

In retrospect, note that MRP_PLANS_SC_V internally refers to three more views! This is probably what makes this SQL plan a little more special.

SQL> select REFERENCED_NAME,REFERENCED_TYPE,REFERENCED_LINK_NAME,DEPENDENCY_TYPE from dba_dependencies where name like 'MRP_PLANS_SC_V' and REFERENCED_TYPE like 'VIEW';

REFERENCED_NAME           REFERENCED_TYPE           REFERENCED_LINK_NAME      DEPENDENCY_TYPE
------------------------- ------------------------- ------------------------- ---------------
MFG_LOOKUPS               VIEW                                                HARD
MRP_DESIGNATORS_VIEW      VIEW                                                HARD
MRP_ORGANIZATIONS_V       VIEW                                                HARD

A digressive (?) discussion..

As this wasn’t the only SQL tuning issue the customer encountered, it might well point to an innate issue with the 10g optimizer workload stats that are stored in the sys.aux_stats$ table. Although the customer had gathered a representative workload sample of system stats using DBMS_STATS.GATHER_SYSTEM_STATS(‘START’) and DBMS_STATS.GATHER_SYSTEM_STATS(‘STOP’), the 10g optimizer does not seem to be churning out SQL plans that execute realistically.
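
For context, that workload gathering pattern looks like the following (the length of the window between the two calls is up to you; it should span a representative load):

-- start collecting workload statistics at the beginning of a
-- representative period
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');

-- ... let the representative workload run ...

-- stop collecting; the computed values land in sys.aux_stats$
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');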

An interesting fact is that the customer’s production machine has 32 CPUs, 192G of RAM and a really fast I/O sub-system, so maybe the workload parameters estimated through the APIs were overly optimistic? Maybe during the sampling the optimizer API said: wow, this is cool, I can really do a lot. It sounds like a possible cause, but one would argue that the Oracle API should be smart enough to account for it. Interestingly, the values for CPUSPEEDNW (the parameter used without workload simulation) and CPUSPEED are not very far from each other.

Here is how the stats look right now:

SQL> select sname, pname, pval1 from sys.aux_stats$;

SNAME         PNAME      PVAL1
------------- ---------- ----------
SYSSTATS_INFO STATUS
SYSSTATS_INFO DSTART
SYSSTATS_INFO DSTOP
SYSSTATS_INFO FLAGS      0
SYSSTATS_MAIN CPUSPEEDNW 983.218164
SYSSTATS_MAIN IOSEEKTIM  10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM   3.866
SYSSTATS_MAIN MREADTIM   5.203
SYSSTATS_MAIN CPUSPEED   972
SYSSTATS_MAIN MBRC       7
SYSSTATS_MAIN MAXTHR     79357952
SYSSTATS_MAIN SLAVETHR   3072

13 rows selected.

One should be aware that these parameters can be set manually using the DBMS_STATS.SET_SYSTEM_STATS API. While you can find some general information on the web, unfortunately not a lot of specific case-study material is available on how manually changing each parameter affects the optimizer’s behavior. I think this is also where most of us fail to understand the true potential of the 10g optimizer, both before and after the upgrade.
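
As an example, manually overriding one of the values shown above would look like this (the MBRC value of 8 is purely illustrative):

-- manually set the multi-block read count the optimizer assumes
EXEC DBMS_STATS.SET_SYSTEM_STATS(pname => 'MBRC', pvalue => 8);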

I sincerely wish that someone authoritative from the product or server technologies support team comes up with informative material on them.

Semester Project Finally Finished

Padraig O'Sullivan - Tue, 2008-12-16 09:15
We just finished our semester project yesterday for the class I am taking on High Performance Computing. It was a pretty interesting project based on the topic of software fault injection. More details can be found in the project report here.

Teaching a horse not to eat

Moans Nogood - Mon, 2008-12-15 17:49
My friend Ole told me this story many moons ago - and many times since:

A man decided to teach his horse not to eat anymore. Gradually he reduced the amount of food the horse got each day, and the programme worked really well.

Unfortunately, just as he had finally taught the horse not to eat at all something unforeseen and tragic happened to the horse and it died suddenly.

I was reminded of the story the other day when I was studying this article in my beloved The Economist (I've subscribed non-stop since 1983):

Stopping in a hurry

Dec 11th 2008
From The Economist print edition

Cars are getting better at avoiding collisions. Before long they may be communicating with each other to make roads safer. ...and somewhere in the article this is stated:

"Jan Ivarsson, head of safety at Volvo, believes it should be possible to build a car in which people will not be killed or injured."

On the other hand I can read in various newspapers that Volvo is not doing too well, and may in fact soon be either sold or closed, just like Saab. Or maybe Sweden will try to put those two together and create a small (by international standards) entity that might survive with state funding and what have you.

So you have this carmaker - Volvo - who has been making cars safer and safer and safer over the last several decades, and JUST as they're sensing the possibility of making the perfectly safe car - in which people will not get killed - the carmaker Volvo unfortunately died. Like the horse.

In my own little world I have also been witnessing how perfect the databases are getting, how much they can do, how much stuff you can put into them in order to save on the application coding and development side - and how coders, developers and programmers have stopped using them. Just as databases were getting damn near perfect... people stopped using them.

I have claimed for several years now that any computer technology that has reached a state of perfection, a plateau of predictability & stability, and a high level of usefulness... will be replaced by something more chaotic and hence much less productive. I have seen no exceptions.

I now realise it is all connected: technology reaching maturity, car safety reaching its logical conclusion - and the feeding of horses.

xTuple Postbooks GUI client on OpenSolaris

Siva Doe - Sun, 2008-12-14 19:16

Postbooks from xTuple is an open source edition of their ERP and accounting software. Their GUI client is available for Linux, Windows and Mac - no Solaris, though. Since the required components for Postbooks, namely Postgres and Trolltech Qt, are known to work on OpenSolaris, I took the plunge and got the GUI client working on my OpenSolaris 2008.11.

It was not very tough to get it going. First, set up OpenSolaris 2008.11 by installing Sun Studio and family via the 'ss-dev' package.

pfexec pkg install ss-dev

Next in line is Postgres. I decided on installing version 8.3, even though Postbooks will also work with 8.2. Here is the list of packages required for 8.3.

pfexec pkg install -v SUNWpostgr-83-server SUNWpostgr-83-libs SUNWpostgr-83-client \
SUNWpostgr-jdbc SUNWpostgr-83-docs SUNWpostgr-83-contrib SUNWpostgr-83-devel \
SUNWpostgr-83-pl SUNWpostgr-83-tcl SUNWpgbouncer-pg83

As for Qt version 4, I took the easy way out by using spec-files-extra (SFE) to build Qt for me. Please see Building JDS on OpenSolaris for instructions on setting up the environment for building Qt out of SFE. The SFEqt4.spec file has to be tweaked to build the Postgres SQL driver plugin that Postbooks requires. In the spec files directory there is a subdirectory called 'ext-sources'. Within it, edit the file 'qt4-solaris-cc-ss12' and change the QMAKE_CXXFLAGS and QMAKE_LIBS variables as follows.

QMAKE_CXXFLAGS          = $$QMAKE_CFLAGS -library=stlport4 -I/usr/postgres/8.3/include
QMAKE_LIBS              = -L/usr/postgres/8.3/lib -R/usr/postgres/8.3/lib

Now, build and install SFEqt4.spec.

/opt/jdsbld/bin/pkgtool -v --download --autodeps build SFEqt4.spec

This will install two packages SFEqt4 and SFEqt4-devel in your system. Now on to the next step of building Postbooks.

The Postbooks GUI client requires the source code of OpenRPT, an open-sourced SQL report writer. Download it from the OpenRPT sourceforge site. Since Qt4 is built using Sun Studio, a small change is required to get OpenRPT built with Sun Studio. In the file OpenRPT/renderer/graph.cpp, change line 780 to look like these two lines.

QColor cMap(_colorMap[sd->color]);
graph.setSetColor(snum, &cMap);

When extracting the OpenRPT source code, you will get a directory named something like 'openrpt-3.0.0Beta-source'. Create a symbolic link 'openrpt' to it.

ln -s openrpt-3.0.0Beta-source openrpt

This is required because the GUI client looks for 'openrpt' at some level above its own source directory. Then build the client by running 'qmake' followed by 'gmake'.

qmake
gmake

Next is the GUI client itself. Download the source from the Postbooks sourceforge site and extract it at the same level as 'openrpt'. For building under Sun Studio, one file needs to be changed: edit 'xtuple/guiclient/postPurchaseOrdersByAgent.cpp', line 90, and replace '__PRETTY_FUNCTION__' with the name of the method itself (hardcode it as ::sPost). The build procedure is the same as above: 'qmake' followed by 'gmake'.

Now the GUI client can be launched via 'bin/xtuple'. I encountered a missing 'libiconv.so.2' library, which I overcame by running

env LD_LIBRARY_PATH=/usr/gnu/lib bin/xtuple

I know, using LD_LIBRARY_PATH is unsafe. Maybe adding '/usr/gnu/lib' to the library path in the spec file would solve this problem properly.

Now you should see the login screen and you should be able to log in to the configured Postgres database. Here is the obligatory screenshot.

Hope this was helpful for you. Comments and feedback are most welcome.

Should I go for Oracle 10g OCA/OCP or 11g OCA/OCP Certification?

Sabdar Syed - Sat, 2008-12-13 03:39
Hello All,

Choosing whether to go for the Oracle 10g OCA/OCP or 11g OCA/OCP certification is becoming confusing for novice DBAs. I have also seen a couple of posts asking similar certification questions quite frequently in the OTN Forums, from people who are not really sure which way to go.

Well, it is often said that going for the latest version's certification is good and ideal. But what I advise is: first go for the Oracle 10g OCA and OCP, then upgrade to the Oracle 11g OCP.

Following are my views on why to go for the Oracle 10g OCA/OCP initially rather than the 11g OCA/OCP directly.

  • As we all know, the newer version (11g) includes the older version's features plus new features and bug fixes for older version issues.
  • The retirement date for the Oracle 10g certification has not yet been announced by Oracle. Moreover, only the Oracle Database 11g: Administration I (OCA) exam is in production (i.e. regular); the Oracle Database 11g: Administration II (OCP) exam has not yet gone into production, i.e. it is still a beta exam.
  • Oracle Database 10g is still used in production in almost all organizations around the globe, while very few companies run their business on Oracle Database 11g, as 11g is still at base release 11.1.0.6 and the standard release (11.2.X.X) is yet to come. This means that Oracle 11g is not yet widely deployed for production purposes.
  • Oracle Database 10g Release 2 (10.2) still has Oracle Premier and Extended (Metalink) support for a few more years; after that, Oracle 10g will also be de-supported.
  • Both versions' certification tracks (10g and 11g) have two exams - Administration I (OCA) and Administration II (OCP). Each exam's fee is US$125, i.e. there is no difference in cost.
  • It's mandatory for OCP candidates to undergo one approved course from the approved list of Oracle University courses, to fulfill the hands-on course requirement. This applies to takers of both the Oracle 10g and 11g certifications.
  • For Oracle 10g OCP certification holders, there is only one exam - 1Z0-050, Oracle Database 11g: New Features for Administrators - to upgrade directly to the Oracle 11g OCP. No course or hands-on course requirement form has to be submitted to achieve the Oracle 11g OCP (upgrade) certification.
  • In this way, one will have both the Oracle 10g and 11g certifications in hand and can show them in a resume or CV. This also fulfills the requirement of companies looking for candidates who have solid experience with Oracle 10g and 11g and hold multiple certifications.
One can go for the direct Oracle 11g certification under the following circumstances.

  • If you are forced by your company or manager to undergo an Oracle 11g course and take the Oracle 11g certification exams for their specific requirements on Oracle Database 11g related projects.
  • When there is no Oracle 10g course listed at the training institutes in your city and only Oracle 11g courses are available.
  • When you cannot afford to take the Oracle 11g upgrade exam as an extra step.
  • If my above views are not okay with you :-)

Note: Folks, the above are only my views and need not be the same as others'. So it's left up to you to decide whether to go for the Oracle 10g OCA/OCP or 11g OCA/OCP certification. For any further information or doubts, refer to the Oracle Certification Program link and navigate to the options to learn more about beta exams, retirements, the list of exams, upgrading your certification, the hands-on course requirement, etc.

Your comments are welcome if this article's information helps you.

Regards,
Sabdar Syed,
http://sabdarsyed.blogspot.com

My UKOUG 2008

Oracle Apex Notebook - Fri, 2008-12-12 11:39
Last week I went to Birmingham, UK to attend the UKOUG 2008 conference. João came along with me, and I can speak for both of us when I say that we had a great time. We found Birmingham to be a very clean and organized city, and it was also good because we managed to stay in London for the weekend, where we had time to visit some of the most famous tourist attractions. The conference itself was...
Categories: Development

'Publish to APEX' from SQL Developer 1.5.3

Anthony Rayner - Fri, 2008-12-12 05:22
Since SQL Developer 1.2.1 and APEX 3.0.1, we've had some useful integration between APEX and SQL Developer: the ability to import and deploy applications, browse the APEX metadata, remote-debug PL/SQL in your applications, and more. With the latest release of SQL Developer, 1.5.3, it is now possible to create a quick and simple APEX application from within SQL Developer (thanks to a bug fix). This is done through the 'Publish to APEX' feature, which creates a simple 1-page APEX application containing an Interactive Report Region based upon a SQL statement.


(Note: Requires APEX 3.1 or above.)


This feature allows you to take any grid of data, right-click on it and select 'Publish to APEX'. (Note: a 'grid of data' includes results from executing a SQL statement, results of pre-defined or user-defined reports from the 'Reports' tab, specific table/view properties such as columns, data, constraints etc., and I'm sure there are more.) Upon selecting 'Publish to APEX', the following dialog is displayed:


This dialog allows you to specify 5 properties:
1) Workspace - The workspace where you want the application to be created (this list will only display workspaces that are associated with the schema of your current grid of data).
2) Application Name - The name of the application that will be created.
3) Theme - The theme for your new application, specifying its look and feel.
4) Page Name - The name of the page that will be created.
5) SQL - The SQL that will be used to generate the interactive report region within the page. This defaults to the SQL used to build the original grid of data, but can be changed.

Upon clicking 'Apply', SQL Developer will create the application and show a dialog with some basic creation information, such as the application name and ID. This creates a 1-page application in the specified workspace, containing an interactive report region whose source is the SQL specified in the dialog.


A few more points to note about this application:
- The application's authentication scheme defaults to 'Database Account Credentials', meaning that you'll need to authenticate to the application using a valid database account username and password. This can of course be changed to something else if required.
- The application will be created in an application group called 'Published from SQL Developer'.
- The interactive report region only displays the first ten columns of the report by default, but again this can easily be changed via the interactive report's 'Actions' menu (the green cog drop-down), then 'Select Columns'.

Here is an example of the application that is generated. I chose to publish data showing access attempts to my workspace (selecting from the APEX metadata view apex_workspace_access_log). I then used interactive report features to refine the data to show all the failed login attempts for applications within the workspace, grouped by application:
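
A query along these lines could serve as the published SQL. This is only a sketch: the exact column names of apex_workspace_access_log vary by APEX version, so treat them as assumptions to verify against your own instance:

select application_id,
       application_name,
       authentication_result,
       access_date
  from apex_workspace_access_log
 where authentication_result <> 'AUTH_SUCCESS'
 order by access_date desc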


Have fun publishing to APEX!!!
Categories: Development

Celebrating Second Year Anniversary of the Blog Launch

Sabdar Syed - Thu, 2008-12-11 23:22

Dear My Blog Readers,


It's been two years today since I launched this blog. The blog originally started on 12-Dec-2006.

During the last year, I wrote and published some good articles. I would like to thank all of my blog viewers, who made this blog popular by reading its posts.

I hope to write more and more good articles that my blog viewers will like and benefit from.

Feel free to give comments, suggestions, and advices to improve this blog with more articles.

Once again thank you all.

Regards,

Sabdar Syed,

http://sabdarsyed.blogspot.com

Oracle SQL Developer 1.5.3 released

Oracle Apex Notebook - Thu, 2008-12-11 18:52
I've just noticed that the new version of Oracle SQL Developer (1.5.3) is now available for download. Reading the release notes, I can see that the major functionality introduced, beyond the traditional list of bug fixes, is the translation to Japanese. The good news is that the next SQL Developer release, 1.5.4, will be translated into 7 other languages: Spanish, Italian, German, French, Brazilian...
Categories: Development

Browse JAR, WAR and EAR files in Explorer

Brenden Anstey - Tue, 2008-12-09 04:06

I found this really neat method of getting the Windows 'Compressed (zipped) Folders' shell extension to read JAR, WAR and EAR files in Windows Explorer.


Instructions
Open Notepad and paste the registry entries below into a text file named ear-jar-war.reg, save it on your desktop and then run it. This will associate the three file types with the Zip shell extension, Compressed (zipped) Folders.
If any of the extensions are already associated with something else, Compressed (zipped) Folders will now appear in the recommended programs for these file types, so you can associate them manually.

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.jar]
"Content Type"="application/x-zip-compressed"
"PerceivedType"="compressed"
@="CompressedFolder"

[HKEY_CLASSES_ROOT\.jar\CompressedFolder]

[HKEY_CLASSES_ROOT\.jar\CompressedFolder\ShellNew]
"Data"=hex:50,4b,05,06,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

[HKEY_CLASSES_ROOT\.jar\OpenWithProgids]
"CompressedFolder"=""

[HKEY_CLASSES_ROOT\.jar\PersistentHandler]
@="{098f2470-bae0-11cd-b579-08002b30bfeb}"

[HKEY_CLASSES_ROOT\.war]
"Content Type"="application/x-zip-compressed"
"PerceivedType"="compressed"
@="CompressedFolder"

[HKEY_CLASSES_ROOT\.war\CompressedFolder]

[HKEY_CLASSES_ROOT\.war\CompressedFolder\ShellNew]
"Data"=hex:50,4b,05,06,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

[HKEY_CLASSES_ROOT\.war\OpenWithProgids]
"CompressedFolder"=""

[HKEY_CLASSES_ROOT\.war\PersistentHandler]
@="{098f2470-bae0-11cd-b579-08002b30bfeb}"

[HKEY_CLASSES_ROOT\.ear]
"Content Type"="application/x-zip-compressed"
"PerceivedType"="compressed"
@="CompressedFolder"

[HKEY_CLASSES_ROOT\.ear\CompressedFolder]

[HKEY_CLASSES_ROOT\.ear\CompressedFolder\ShellNew]
"Data"=hex:50,4b,05,06,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

[HKEY_CLASSES_ROOT\.ear\OpenWithProgids]
"CompressedFolder"=""

[HKEY_CLASSES_ROOT\.ear\PersistentHandler]
@="{098f2470-bae0-11cd-b579-08002b30bfeb}"

Oh, and of course back up your registry before running it. As you can see from the above, it only adds some keys to HKEY_CLASSES_ROOT. Tested on XP, but at your own risk and all that ;)

A big 'Thank You'....

Lisa Dobson - Mon, 2008-12-08 14:12
..to everyone who voted me onto the UKOUG Board of Directors! There seems to have been a bit of a shake-up this year, with 4 new Directors being elected, which I believe is the largest number of new Directors in one year. We will officially take up our posts on 15th January, and I'm really looking forward to the next 2 years - although I've been well advised that there is a lot of work ahead! I didn't...
