Feed aggregator

Displaying and reading the execution plans for a SQL statement

Inside the Oracle Optimizer - Thu, 2008-02-07 14:37
Generating and displaying the execution plan of a SQL statement is a common task for most DBAs, SQL developers, and performance experts, as it gives them insight into the performance characteristics of a SQL statement. An execution plan shows the detailed steps necessary to execute a SQL statement. These steps are expressed as a set of database operators that consume and produce rows. The order of the operators and their implementation is decided by the query optimizer using a combination of query transformations and physical optimization techniques.

While the display is commonly shown in a tabular format, the plan is in fact tree-shaped. For example, consider the following query based on the SH schema (Sales History):



select prod_category, avg(amount_sold)
from sales s, products p
where p.prod_id = s.prod_id
group by prod_category;


The tabular representation of this query's plan is:



-------------------------------------------
| Id | Operation              | Name     |
-------------------------------------------
|  0 | SELECT STATEMENT       |          |
|  1 |  HASH GROUP BY         |          |
|  2 |   HASH JOIN            |          |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |
|  4 |    PARTITION RANGE ALL |          |
|  5 |     TABLE ACCESS FULL  | SALES    |
-------------------------------------------

While the tree-shaped representation of the plan is:

         GROUP BY
            |
           JOIN
      ______|_______
     |              |
   ACCESS        ACCESS
 (PRODUCTS)      (SALES)




When you read a plan tree you should start from the bottom up. In the above example, begin by looking at the access operators (the leaves of the tree). In this case the access operators are implemented as full table scans. The rows produced by these table scans are consumed by the join operator. Here the join operator is a hash join (other alternatives include the nested-loop and sort-merge joins). Finally, the group-by operator, implemented here using hash (the alternative would be sort), consumes the rows produced by the join operator.

The execution plan generated for a SQL statement is just one of the many alternative execution plans considered by the query optimizer. The query optimizer selects the execution plan with the lowest cost. Cost is a proxy for performance: the lower the cost, the better the expected performance. The cost model used by the query optimizer accounts for the I/O, CPU, and network usage of the query.

There are two different methods you can use to look at the execution plan of a SQL statement:


  1. EXPLAIN PLAN command - This displays an execution plan for a SQL statement without actually executing the statement.

  2. V$SQL_PLAN - A dictionary view introduced in Oracle 9i that shows the execution plan for a SQL statement that has been compiled into a cursor in the cursor cache.


Under certain conditions the plan shown by EXPLAIN PLAN can differ from the plan shown in V$SQL_PLAN. For example, when the SQL statement contains bind variables, the plan shown by EXPLAIN PLAN ignores the bind variable values, while the plan shown in V$SQL_PLAN takes the bind variable values into account in the plan generation process.
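As a quick illustration of this divergence, the following sketch compares the two plans for the same statement with a bind variable (the table T and the data skew in COL are hypothetical):

variable id number
exec :id := 42

-- cursor cache: the plan is compiled with the peeked value of :id
select * from t where col = :id;

select plan_table_output
from table(dbms_xplan.display_cursor(null,null,'basic'));

-- EXPLAIN PLAN: the value of :id is ignored, so the plan may differ
explain plan for select * from t where col = :id;

select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic'));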

Displaying an execution plan has been made easier since the introduction of the dbms_xplan package in Oracle 9i and by the enhancements made to it in subsequent releases. This package provides several PL/SQL procedures to display the plan from different sources:


  1. EXPLAIN PLAN command

  2. V$SQL_PLAN

  3. Automatic Workload Repository (AWR)

  4. SQL Tuning Set (STS)

  5. SQL Plan Baseline (SPM)


The following examples illustrate how to generate and display an execution plan for our original SQL statement using the different functions provided in the dbms_xplan package.

Example 1 uses the EXPLAIN PLAN command and the dbms_xplan.display function.


SQL> EXPLAIN PLAN FOR
2 select prod_category, avg(amount_sold)
3 from sales s, products p
4 where p.prod_id = s.prod_id
5 group by prod_category;

Explained.



SQL> select plan_table_output
2 from table(dbms_xplan.display('plan_table',null,'basic'));

-------------------------------------------
| Id | Operation              | Name     |
-------------------------------------------
|  0 | SELECT STATEMENT       |          |
|  1 |  HASH GROUP BY         |          |
|  2 |   HASH JOIN            |          |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |
|  4 |    PARTITION RANGE ALL |          |
|  5 |     TABLE ACCESS FULL  | SALES    |
-------------------------------------------


The arguments for dbms_xplan.display are:


  • plan table name (default 'PLAN_TABLE'),

  • statement_id (default null),

  • format (default 'TYPICAL')


More details can be found in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.
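Since all three arguments have defaults, only the non-default ones need to be supplied. For example, a statement_id (the identifier below is made up) keeps several explained statements apart in the same plan table:

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'sales_by_cat' FOR
2 select prod_category, avg(amount_sold)
3 from sales s, products p
4 where p.prod_id = s.prod_id
5 group by prod_category;

Explained.

SQL> select plan_table_output
2 from table(dbms_xplan.display('plan_table','sales_by_cat','basic'));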

Example 2 Generating and displaying the execution plan for the last SQL statement executed in a session:



SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected


SQL> select plan_table_output
2 from table(dbms_xplan.display_cursor(null,null,'basic'));

-------------------------------------------
| Id | Operation              | Name     |
-------------------------------------------
|  0 | SELECT STATEMENT       |          |
|  1 |  HASH GROUP BY         |          |
|  2 |   HASH JOIN            |          |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |
|  4 |    PARTITION RANGE ALL |          |
|  5 |     TABLE ACCESS FULL  | SALES    |
-------------------------------------------


The arguments used by dbms_xplan.display_cursor are:


  • SQL ID (default null, null means the last SQL statement executed in this session),

  • child number (default 0),

  • format (default 'TYPICAL')


The details are in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.


Example 3 Displaying the execution plan for any other statement requires the SQL ID to be provided, either directly or indirectly:

  1. Directly:

    SQL> select plan_table_output from
    2 table(dbms_xplan.display_cursor('fnrtqw9c233tt',null,'basic'));


  2. Indirectly:

    SQL> select plan_table_output
    2 from v$sql s,
    3 table(dbms_xplan.display_cursor(s.sql_id,
    4 s.child_number, 'basic')) t
    5 where s.sql_text like 'select PROD_CATEGORY%';


Example 4 - Displaying an execution plan corresponding to a SQL Plan Baseline. SQL Plan Baselines were introduced in Oracle 11g to support the SQL Plan Management (SPM) feature. To illustrate such a case we need to create a SQL Plan Baseline first.


SQL> alter session set optimizer_capture_sql_plan_baselines=true;

Session altered.

SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected

If the above statement has been executed more than once, a SQL Plan Baseline will be created for it, and you can verify this using the following query:


SQL> select SQL_HANDLE, PLAN_NAME, ACCEPTED
2 from dba_sql_plan_baselines
3 where sql_text like 'select prod_category%';

SQL_HANDLE                     PLAN_NAME                      ACC
------------------------------ ------------------------------ ---
SYS_SQL_1899bb9331ed7772       SYS_SQL_PLAN_31ed7772f2c7a4c2  YES


The execution plan for the SQL Plan Baseline created above can be displayed either directly or indirectly:

  1. Directly
    select t.* from
    table(dbms_xplan.display_sql_plan_baseline('SYS_SQL_1899bb9331ed7772',
    format => 'basic')) t


  2. Indirectly
    select t.*
    from (select distinct sql_handle
    from dba_sql_plan_baselines
    where sql_text like 'select prod_category%') pb,
    table(dbms_xplan.display_sql_plan_baseline(pb.sql_handle,
    null,'basic')) t;



The output of either of these two statements is:



----------------------------------------------------------------------------
SQL handle: SYS_SQL_1899bb9331ed7772
SQL text: select prod_category, avg(amount_sold) from sales s, products p
where p.prod_id = s.prod_id group by prod_category
----------------------------------------------------------------------------

----------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_31ed7772f2c7a4c2
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
----------------------------------------------------------------------------

Plan hash value: 4073170114

----------------------------------------------------------
| Id | Operation                  | Name                 |
----------------------------------------------------------
|  0 | SELECT STATEMENT           |                      |
|  1 |  HASH GROUP BY             |                      |
|  2 |   HASH JOIN                |                      |
|  3 |    VIEW                    | index$_join$_002     |
|  4 |     HASH JOIN              |                      |
|  5 |      INDEX FAST FULL SCAN  | PRODUCTS_PK          |
|  6 |      INDEX FAST FULL SCAN  | PRODUCTS_PROD_CAT_IX |
|  7 |    PARTITION RANGE ALL     |                      |
|  8 |     TABLE ACCESS FULL      | SALES                |
----------------------------------------------------------
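Plans stored in AWR or in a SQL Tuning Set (sources 3 and 4 in the list above) have their own display functions, dbms_xplan.display_awr and dbms_xplan.display_sqlset. As a sketch for the AWR case (reusing the SQL ID from Example 3, and leaving plan_hash_value and db_id to their defaults):

select plan_table_output
from table(dbms_xplan.display_awr('fnrtqw9c233tt',null,null,'basic'));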


Formatting


The format argument is highly customizable and allows you to see as little (high-level) or as much (low-level) detail as you need in the plan output. The high-level options are:

  1. Basic
    The plan includes the operation, options, and the object name (table, index, MV, etc.)
  2. Typical
    It includes the information shown in BASIC plus additional optimizer-related internal information such as cost, size, and cardinality. This information is shown for every operation in the plan and represents the optimizer's estimate of each operation's cost, the number of rows produced, and so on. It also shows the predicates evaluated by each operation. There are two types of predicates: ACCESS and FILTER. The ACCESS predicates for an index are used to fetch the relevant blocks, because they apply to the search columns. The FILTER predicates are evaluated after the blocks have been fetched.
  3. All
    It includes the information shown in TYPICAL plus the lists of expressions (columns) produced by every operation, and the hint alias and query block names to which the operation belongs. The last two pieces of information can be used as arguments when adding hints to the statement.
The low-level options allow the inclusion or exclusion of fine details, such as predicates and cost.
For example,


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +predicate +cost'));

-------------------------------------------------------
| Id | Operation              | Name     | Cost (%CPU)|
-------------------------------------------------------
|  0 | SELECT STATEMENT       |          |    17  (18)|
|  1 |  HASH GROUP BY         |          |    17  (18)|
|* 2 |   HASH JOIN            |          |    15   (7)|
|  3 |    TABLE ACCESS FULL   | PRODUCTS |     9   (0)|
|  4 |    PARTITION RANGE ALL |          |     5   (0)|
|  5 |     TABLE ACCESS FULL  | SALES    |     5   (0)|
-------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")



select plan_table_output from
table(dbms_xplan.display('plan_table',null,'typical -cost -bytes'));

------------------------------------------------------------------------------
| Id | Operation              | Name     | Rows | Time     | Pstart | Pstop |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT       |          |    4 | 00:00:01 |        |       |
|  1 |  HASH GROUP BY         |          |    4 | 00:00:01 |        |       |
|* 2 |   HASH JOIN            |          |  960 | 00:00:01 |        |       |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |  766 | 00:00:01 |        |       |
|  4 |    PARTITION RANGE ALL |          |  960 | 00:00:01 |      1 |    16 |
|  5 |     TABLE ACCESS FULL  | SALES    |  960 | 00:00:01 |      1 |    16 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")

Note Section


In addition to the plan, the package displays notes in the NOTE section, such as whether dynamic sampling was used during query optimization or whether star transformation was applied to the query.
For example, if the table SALES did not have statistics, the optimizer would use dynamic sampling and the plan display would report it as follows (note the '+note' detail in the format argument of the query):


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +note'));

-------------------------------------------
| Id | Operation              | Name     |
-------------------------------------------
|  0 | SELECT STATEMENT       |          |
|  1 |  HASH GROUP BY         |          |
|  2 |   HASH JOIN            |          |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |
|  4 |    PARTITION RANGE ALL |          |
|  5 |     TABLE ACCESS FULL  | SALES    |
-------------------------------------------

Note
-----
- dynamic sampling used for this statement


Bind peeking



The query optimizer takes the values of bind variables into account when generating an execution plan; this is what is generally called bind peeking. See the first post in this blog for the concept of bind peeking and its impact on plans and on the performance of SQL statements.
As stated earlier, the plan shown in V$SQL_PLAN takes the bind variable values into account, while the one produced by EXPLAIN PLAN does not. Starting with 10gR2, the dbms_xplan package can display the bind variable values used to generate a particular cursor/plan. This is done by adding '+peeked_binds' to the format argument when using display_cursor().
This is illustrated with the following example:



variable pcat varchar2(50)
exec :pcat := 'Women'

select PROD_CATEGORY, avg(amount_sold)
from sales s, products p
where p.PROD_ID = s.PROD_ID
and prod_category != :pcat
group by PROD_CATEGORY;

select plan_table_output
from table(dbms_xplan.display_cursor(null,null,'basic +PEEKED_BINDS'));

-------------------------------------------
| Id | Operation              | Name     |
-------------------------------------------
|  0 | SELECT STATEMENT       |          |
|  1 |  HASH GROUP BY         |          |
|  2 |   HASH JOIN            |          |
|  3 |    TABLE ACCESS FULL   | PRODUCTS |
|  4 |    PARTITION RANGE ALL |          |
|  5 |     TABLE ACCESS FULL  | SALES    |
-------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

1 - :PCAT (VARCHAR2(30), CSID=2): 'Women'
Categories: DBA Blogs, Development

UK Oracle Apps Contractors, Consultants and Blogger Meetup in London

Richard Byrom - Wed, 2008-02-06 21:44

Next week we're having a UK Oracle Apps Contractors, Consultants and Blogger Meetup at the Tapas Brindisa in London. If you fit into any of those categories and would be interested in coming along to meet me and other Oracle Apps fans for a drink and a chat, then please join us at 7pm on the 13th of February 2008.

The event was initiated by Simon Tomey of BeLife, and I'll be joining in along with some of his colleagues and friends. Although it's a social, I'll be there with my laptop and internet connection, so we can always do some techie stuff.

If you need any further information, don't hesitate to contact me. I've also created an event in Yahoo's Upcoming calendar tool so that you can automatically add the details to whatever calendar software or application you are using, or let us know whether you plan to attend.

What CPU Jan 2008 Patch will not tell you

Virag Sharma - Tue, 2008-02-05 09:55

What CPU Jan 2008 Patch will not tell you

File system space: opatch is supposed to check file system space as part of "Running prerequisite checks..." and only start applying the patch when there is enough space.

Patching component oracle.rdbms, 11.1.0.6.0...

Updating archive file "/u01/app/oracle/product/11.1.0.6/lib/libserver11.a" with "lib/libserver11.a/kzsr.o"
Updating archive file "/u01/app/oracle/product/11.1.0.6/lib/libserver11.a" with "lib/libserver11.a/kzu.o"

The following actions have failed:

Archive not applied /home/oracle/patch/CPU2008/6646866/6650135/files/lib/libserver11.a/kzsr.o to /u01/app/oracle/product/11.1.0.6/lib/libserver11.a... ''

Archive not applied /home/oracle/patch/CPU2008/6646866/6650135/files/lib/libserver11.a/kzu.o to /u01/app/oracle/product/11.1.0.6/lib/libserver11.a... ''

Do you want to proceed? [yn]

Well, I came out with the N option. When we started opatch again, it gave an error something along the lines of a space check failure.

So make sure you have enough space on the file system.

In the case of a RAC database, the CPU patch readme.txt will not tell you that you need to set cluster_database = false before starting the database in upgrade mode ("startup upgrade") to compile views and objects.

So make sure that, in a RAC environment, you set cluster_database = false before starting the database in upgrade mode.
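A sketch of the sequence on one RAC instance (assuming an spfile; the exact post-install scripts come from the patch readme):

SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup upgrade
SQL> -- run the CPU post-install SQL steps from the readme here
SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup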


Categories: DBA Blogs

Local vs. remote connection performance

Yasin Baskan - Tue, 2008-02-05 04:13

It has been asked several times in several places: is there a performance difference between running a query locally in a client on the server and running the same query in a remote client?

The obvious answer given by the respondents, including myself, is: "if you do not return thousands of rows through the network, there must not be any difference". This type of response is the opposite of what I believe: even if the answer seems obvious, test it before you make any suggestions.

Tanel Poder got the same question and did what needed to be done: he tested it and showed that there was a difference, in this great post of his.

His tests used a database on Solaris and sqlplus clients on Windows and Linux. I have tested the same using a database on Linux, and the same behavior is observed there too.

Lesson learned again and again: test your suggestion even if the answer seems obvious.

Rolling invalidations Addendum

Fairlie Rego - Mon, 2008-02-04 19:47
If you enable cursor tracing for the testcase in my previous post, using the method described in http://el-caro.blogspot.com/search?q=unshared, the generated trace in udump will contain the following:

************************************************************
kksCheckCursor: pinning child #0 in shared mode 7d42bd0e8 7d7e87600
Failed sharing : Rolling invalidation
kksUnlockChild: releasing child
Failed sharing : 800000000000
kksSearchChildList: no suitable child found
Creating new child object #1
kksLoadChild: reload 0 child_reload 0
kksLoadChild: reload 0 child_reload 0
Compilation environment difference Failed sharing : 0
Change in cursor environment
************************************************************

Also, the parameter _optimizer_invalidation_period does not work for statements using PQ (parallel query). This can be tested by changing the degree of the table to, say, 4 and running the same test. In this case a new child cursor is generated on the first re-execution of the statement after stats are gathered, even when the auto_invalidate flag is used.
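A sketch of that test, reusing the names from the testcase in the previous post (degree 4 is just an example):

SQL> alter table source parallel (degree 4);
SQL> exec dbms_stats.gather_table_stats('REGOFA','SOURCE',no_invalidate => DBMS_STATS.AUTO_INVALIDATE);
SQL> select * from source where rownum<2;
SQL> -- a new child cursor appears immediately, not after the rolling window
SQL> select executions, invalidations, child_number from v$sql where sql_id='954g5yyw5tn1s';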

Add new node to apps

Fadi Hasweh - Mon, 2008-02-04 11:04
A customer wants to add a new web/forms node to his current multi-node Apps 11.5.10.2 production system; we also have to configure a Web Cache load balancer.
The current system has 2 nodes (a DB node and an apps tier node).
I am currently doing the configuration on a test environment. I followed Note 464018.1, "How to Add a New Node to The Application Tier, and Make It a Web Server Node Only", which matches the instructions of "Part 4: Adding a new node to an existing system" under "Section 4: Advanced Cloning Options" of Note 230672.1, "Cloning Oracle Applications Release 11i with Rapid Clone".

I am currently cloning the apps tier to the new server; after that I will run adcfgclone.pl and then adaddnode.pl to configure it. One important note is to disable the SQL*Net Access restriction so that the new node can access the database.
After that I will try to configure the Web Cache as a load balancer.
I will update you once done.

Enjoy
Fadi

UTL_FILE_DIR issue after applying patch 5985992 'TXK AUTOCONFIG RUP Q'

Aviad Elbaz - Mon, 2008-02-04 09:17

Do you have more than 240 characters in your utl_file_dir?

If so, you should read this before applying patch 5985992, 'TXK AUTOCONFIG RUP Q (Jul/Aug 2007)'.

After applying this patch, AutoConfig on database tier failed with the following error:

[PROFILE PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: [RDBMS_ORACLE_HOME]/appsutil/install/[context_name] 
      afdbprf.sh              INSTE8_PRF         1

AutoConfig is exiting with status 1

It wasn't clear why afdbprf.sh failed so I ran it manually from ssh terminal and I got this:

$ [RDBMS_ORACLE_HOME]/appsutil/install/[context_name]/afdbprf.sh

afdbprf.sh started at Tue Jan 29 17:43:21 IST 2008

The environment settings are as follows ...

       ORACLE_HOME : ....
       .........
       .........

Application Id : 0
Profile Name : BIS_DEBUG_LOG_DIRECTORY
Level Id : 10001
New Value : /usr/tmp
Old Value : /usr/tmp
declare
*
ERROR at line 1:
ORA-12899: value too large for column
"APPLSYS"."FND_PROFILE_OPTION_VALUES"."PROFILE_OPTION_VALUE" (actual: 486,
maximum: 240)
ORA-06512: at line 44
ORA-06512: at line 139

Looking into afdbprf.sql  (executed by afdbprf.sh) reveals the problem:

.......

--Setting  BIS_DEBUG_LOG_DIRECTORY
           set_profile(0, 'BIS_DEBUG_LOG_DIRECTORY',
                       10001, 0,
                       '&BIS_DEBUG_LOG_DIR',
                       NULL);

-- This profile option was earlier set in addbprf.sql via bug 2843457, Now moved here for bug 5722442
--
-- Set up UTL_FILE_LOG profile option
--
          set_profile(1, 'UTL_FILE_LOG',
                      10001, 0, '/usr/tmp,........[more than 240 characters..... :-) ]',
                      NULL);

In order to fix bug 5722442, the UTL_FILE_LOG profile is now updated with the value of s_db_util_filedir from the database context file, and a new bug was created (not a bug according to Bug 6404909).

If s_db_util_filedir contains more than 240 characters, it can't be stored in a profile, since the PROFILE_OPTION_VALUE column is defined as varchar2(240).
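A quick check for whether you are affected (a simple sketch against v$parameter):

SQL> select length(value) utl_file_dir_length
2 from v$parameter
3 where name = 'utl_file_dir';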

s_db_util_filedir is initialized when the database context file is created by adbldxml.pl, and is set to the value of the utl_file_dir database parameter.

I'm not sure why it should be updated with all the directories in utl_file_dir and not with the relevant directories only...?!
The UTL_FILE_LOG profile (or "Stored procedure log directory") wasn't updated by AutoConfig before applying this patch, so I'm not sure about the purpose of this profile.
Before applying this patch I had this profile set to a directory that doesn't exist...

The solution according to Note:458511.1 - "After patch 5985992 AutoConfig On Database Tier Fails with script afdbprf.sh" is to change the value of s_db_util_filedir in the database context file to a value less the 240 characters and run AutoConfig again.

And it works, of course...

What should I do if my utl_file_dir contains more than 240 characters?!

Actually, I don't have a good answer to this question, but I will try to list all the places this issue might affect when s_db_util_filedir is updated with a value of less than 240 characters.

  1. AutoConfig on the DB tier creates the [SID]_APPS_BASE.ora file under $RDBMS_ORACLE_HOME/dbs (if it doesn't already exist), which contains the utl_file_dir database parameter generated from s_db_util_filedir in the database context file.
    So if you plan to rebuild your init.ora with AutoConfig, you will need to update utl_file_dir manually.
  2. When executing adbldxml.pl on the DB tier to rebuild the database context file, s_db_util_filedir will be filled with the utl_file_dir database parameter - which might contain more than 240 chars.
    Therefore, before executing AutoConfig, you should edit the new context file and shorten the value of s_db_util_filedir.
  3. The UTL_FILE_LOG profile ("Stored procedure log directory") will be updated with the shortened s_db_util_filedir value.

You are welcome to leave a comment or update with additional information regarding this issue.

Aviad

Categories: APPS Blogs

Database version control

Yasin Baskan - Mon, 2008-02-04 07:33

Coding Horror is one of the software development blogs I keep a close eye on.

Jeff Atwood posted a nice piece about database version control recently. Database version control is maybe one of the most important and unfortunately most overlooked things in software development. The post is a good read, including the links he provides.

Buying marketshare

Hampus Linden - Sun, 2008-02-03 12:55
Ok, I'm way, waay late commenting on the two most recent IT company takeovers.

Sun buys MySQL
Tricky one; I have to honestly say I have a hard time seeing where MySQL fits into Sun's portfolio.
Sun is one of Oracle's biggest partners (the biggest after Dell?) and one of the biggest backers of PostgreSQL; they even employ a few of the lead Postgres developers.
Sun has got two options with MySQL. They can either invest a lot of R&D in MySQL and actually turn it into a really good product; MySQL is without a doubt a product with incredible potential, but is today pretty much an over-sized way to store CSV index files (a bit harsh perhaps, but hey).
Or they can ride out the profits by selling LAMP-stack boxes and let MySQL sustain itself; I doubt they can make up for the billion dollars they paid for it that way, though.
Oh, and what will Sun make of the (rather evil) closed-source MySQL Enterprise Monitor administration product from MySQL? Sun having new closed-source software? Errr!

Some advice for Sun:
1) Add real constraint support. This is a show-stopper for many, and the InnoDB approach isn't good enough. MySQL needs native, solid, from-the-ground-up constraint support, independent of storage engine, version, bla bla. If people don't want the "performance loss" of constraints, they don't have to add the constraints!
2) Sort out proper I/O management, tablespaces, clean partitioning, all that stuff. MySQL has improved a lot on this point over the last 12 months, but more work is needed, especially if they are planning on selling MySQL-powered Thumpers.
3) 100% solid transactional support. MVCC would be fun. I've seen a few too many transactional problems in MySQL. Some sort of logging/undo/redo system is needed.

On top of that I wouldn't mind a clean hot-backup tool, which requires proper transactional support (with something like SCNs), and 3rd-party scripting language support (PgPerl anyone?).

That's about 250,000 R&D hours, so go get busy!

Oracle buys BEA
This is most likely very good news. I'm a big fan of the Oracle database (duh) and a fan of the WebLogic application server.
What I'm not a big fan of is Oracle Application Server (oc4j).
I hope Oracle realizes that WebLogic is the superior product, treats WebLogic as the front runner, and ports oc4j features to WebLogic and not the other way around. I doubt that will be the case, though. If not, I hope they don't kill off WebLogic any time soon.

Hmm, that's all for now.

Spam, Spam filters, Being Spammer, Being Filtered-out ...

Pawel Barut - Sun, 2008-02-03 11:46
Written by Paweł Barut
My thoughts about spam. Some time ago I wrote about spam in the comments on my blog, but this time it will be about email spam. This is something that irritates me a lot from time to time. Spam is something that nobody wants to see in their mail, and to solve this problem there are many spam filters, IP block lists and other solutions. But none of them is 100% accurate, and this is what causes problems. Spam filters should be solving problems, but many times they create new ones.
In an ideal situation a spam filter eliminates 100% of spam and passes 100% of the emails that users expect. But this is not the case. I will now show example situations that lead me to the conclusion that spam filters are useless.
Situation 1.
The spam filter did not recognize a spam mail, and I have to figure out manually that it is spam. So I need one more click to delete the message.
Situation 2.
The spam filter deletes mail that was intended for me. This was a false alarm, as it wasn't spam.
Situation 3.
I've sent an email to a customer/friend. His spam filter blocked it. I did not receive any delivery failure message.

In my opinion situations 2 and 3 are very dangerous, and I would like to avoid both of them; in my opinion they make spam filters useless. It is especially dangerous if this blocking is done by the service provider and you cannot see the list of mail being filtered out. This is what really annoys me and makes me angry. In fact it makes the whole email system unreliable (I do not want to say useless), as you never know whether your recipient got your email or not.

I do not know what the solution for this is. I can see a few options, but none of them is perfect:
  1. Each and every email should be digitally signed by the sender, and additionally by his service provider. Spam filters should be able to verify this, honor such signing, and not consider the mail to be spam. Of course, spammers could find a way to sign their mails too and defeat this approach.
  2. Everybody should use "return receipt" to confirm mail delivery. Well, quite simple, but personally I never allow my mailer to send confirmations, as I do not want to reveal when I've read a mail.
  3. Make the mail system payable, so that for every mail you send you have to pay a small amount of money. $0.01 per email should not be a problem for real email users, but could cost a fortune for spammers. For this money, service providers should ensure that your mail reaches its recipient.
  4. Use a captcha to validate that the email was sent by a real user. It could work like this: when the spam filter suspects spam, it sends an email back to the sender with a link to a web page, on which the sender has to answer a captcha to make his mail pass through the spam filter.

At the end I would like to ask you: How do you deal with spam?

Cheers Paweł

--
Related Articles on Paweł Barut blog:

Categories: Development

Secrets of Happiness

Virag Sharma - Sun, 2008-02-03 10:26
Secrets of Happiness

I was traveling from Agra (the city of the Taj Mahal, 30 miles away from my home town) to Hyderabad. My train, the AP Express, was late, so I thought I would pick up some books. As usual I picked some of my favorite books/magazines, like Reader's Digest and Champak (a famous kids' book in India). While purchasing them, I saw a book titled "Secrets of Happiness" by Tanushree Podder. The title looked very odd to me because, I feel, how can one define happiness? Well, I just picked up the book, browsed some pages, and it looked good. It is not that different from other books like "Mega Living" and "Who Will Cry When You Die" by Robin Sharma. The English is typical Indian English. Some of the stories we have already heard in our childhood from our grandmothers, mothers, aunties, etc., but it is really nice to revisit them, and the writer's presentation makes the book more interesting. I started reading the book at Agra and kept reading till Gwalior. Everybody in the train wanted to sleep; since I was reading, the light was on and everybody's eyebrows were getting tight. I finally switched off the light, but finished the book before reaching Hyderabad. I feel the book is worth reading, and that's why I thought to write about it.

Check book excerpt [click Here]

Summary
  1. What you put into life, you get back.
  2. No situation is good/bad/ugly; it is our beliefs that color our perception of a situation, and we feel accordingly (good/bad/ugly) about it. So change your beliefs and thoughts and things will improve/change; otherwise, same thoughts/beliefs, same results.
  3. Keep It Simple and Straight (KISS) .....................................

Apart from this, there are two more books that are really worth reading:

Monk Who Sold His Ferrari – Robin Sharma

Follow Your Heart - Andrew Matthews

I read the above books frequently, and feel that if I had gotten these books 6 years back ……. :-)



Categories: DBA Blogs

Questions needed answers

Fadi Hasweh - Fri, 2008-02-01 04:36
Dear Apps DBAs,
I hope things are well with you. I received the following questions from a friend of mine and promised him answers; I wanted to share them with you, as maybe you can help me answer most of them.
Appreciate your help.

(We have some of the answers, by Dave (thank you), in the comments, and all of them at http://www.teachmeoracle.com/forum/viewtopic.php?t=4102)

1. How do we know that particular instance is cloned or normal installed?
2. How can you know that how many modules are already implemented in this instance?
3. How to enable archive log without shutting down your database?
4. How can we know that whether we already applied latest AUTOCONFIG patch or not at our instance?
5. Is this possible to clone a database from hotbackup? If yes plz tell how?
6. Suppose your database size is 2000GB now you want to clone a particular one datafile or tablespace. Plz tell how co clone a datafile or tablespace?
7. You are applying a patch but suddenly it HANGS but log file didn’t showing any error what should be the reason for that HANG?
8. How to clone from multimode to single node?
9. How to apply patch on forms/reports server?

Thanks again for help
Fadi

SM Automatically Shuts Down on Signout

Mark Vakoc - Thu, 2008-01-31 16:35
Hi All

We have recently seen a few instances where the Server Manager process stops when you sign in and out of the Server Manager machine, or when you connect remotely to the Server Manager machine and then disconnect from it. This issue has been SARed under 8688882.

SOLUTION:
This is actually a known Oracle issue with OC4J, as documented in Metalink document 245609.1. For Server Manager, there are two methods to correct this issue. The first method is recommended since it does not involve editing the registry manually, and it also ensures that the Server Manager install script gets modified, so if it is rerun in the future it will automatically add the service correctly. Method 2, however, is likely the quickest workaround.
Method 1:
1) Make the following change in the installManagementConsoleService.bat which is located in your JDE_HOME\bin directory of the Server Manager machine:
from:"--StartParams=-Xmx512m;-Djde.home=%JDE_HOME%;-jar;oc4j.jar"
to:"--StartParams=-Xmx512m;-Xrs;-Djde.home=%JDE_HOME%;-jar;oc4j.jar"
Note the addition of -Xrs. This change REQUIRES that -Xrs come just after -Xmx512m.
2) Ensure that the Server Manager service is currently stopped.
3) Open a command prompt, and go to your JDE_HOME\bin directory.
Run:uninstallManagementConsoleService.bat
4) After the service uninstalls successfully,
Run:installManagementConsoleService.bat PASSWORD
where PASSWORD is your original jde_admin password from the server manager installation.
5) Start the service. It should now remain running after you log out.

Method 2:
1) Open the registry editor
2) Locate the following registry key:HKEY_LOCAL_MACHINE\SOFTWARE\Apache Software Foundation\Procrun 2.0\SCFMngmtConsole1\Parameters\Start
where SCFMngmtConsole1 is the last part of the display name of the service
Set the "Params" value to:
-Xmx512m
-Xrs
-Djde.home=C:\jde_home
-jar
oc4j.jar
(note the addition of -Xrs)
This change REQUIRES that -Xrs come just after -Xmx512m.
3) Start the service. It should now remain running after you log out.

What is the difference between a gadget and an application?

Oracle EPM Smart Space - Thu, 2008-01-31 11:33

When talking to people about Smart Space I hear this question come up all the time. I have found that most people have very different views on this topic so take what I have to offer as merely another opinion. In my earlier post I talked about the definition of a gadget and stated the following:


Gadgets (or Widgets) are mini applications that expose key content (bits of data) or features generally from a larger (full) application and they deliver these features or data in a simple and visually pleasing manner.


So when I read this I focus on a few concepts that help me differentiate between an application and a gadget. First, I wrote that gadgets are "mini applications", and to me this means that they are smaller than an application and, at times, related to a full application. When I say smaller I mean smaller in two ways: smaller in physical footprint and smaller in the screen real estate that the gadget takes up. Second, gadgets focus on "key content"… "or features", whereas an application will have many features and tons of content. Lastly, the gadget should present this information in "a simple and visually pleasing manner". In other words, when a gadget is giving me information I should not have to guess at what it is telling me; the presentation of the data is just as important as the data itself.


Here are a few examples:


In Smart Space there is a nice search gadget that lets me search for content in Hyperion Reporting and Analysis (System 9). It is very simple: just enter a search term and get results. This is the kind of search I do 99% of the time, and that is why this makes a great gadget. If I want to get more advanced I can open the Hyperion Reporting and Analysis application in my browser and navigate to the search page to perform the search with a number of other options. The gadget takes up very little room on my desktop and covers most, if not all, of my Hyperion Reporting and Analysis search needs, but the application is there when I need it.


In our beta I wrote a notepad gadget that is great for taking quick notes and having them always visible on my desktop, but I would not want to write this blog entry using it. For writing emails, documents or blog posts I want to use an application like Word that is full of great features for writing.


With the Smart Space Key Contacts gadget I can narrow the list of users I communicate with down from the long list, which includes people I seldom have contact with, to a much more focused one. At a glance I can see who is available to chat, and with a single click I can start my IM application. In this case the gadget provides a visual indication of which key contacts are available and launches me from the gadget experience into the application experience.


Here is an example from the consumer gadget world that should drive home my point about the presentation of data. I will use images to demonstrate this:


Both deal with system monitoring, but the gadget gives me the basics and at a glance tells me what I need to know (my CPU is fine but memory consumption is a bit high). If I want features and details, then I go ahead and open the application.


To conclude: I want to keep things simple, so when creating a gadget don't try to satisfy every use case, otherwise you will have an application on your hands; make sure that you are building something a user wants to run on their desktop all the time; and make sure what you present has the right design for a user to 'get it' at a glance. I have found that these same ideas can be applied to almost any application, and I think about these concepts whenever I am building a new gadget.

Categories: Development

Detect numbers with TRANSLATE() - Take two

Jared Still - Thu, 2008-01-31 04:03
Last week I wrote about using TRANSLATE() to detect numbers in data: Using The TRANSLATE() function...

Andrew Clarke at Radio Free Tooting pointed out the shortcomings of using TRANSLATE() to detect numbers.

As I said earlier, all I needed to do was detect whether the characters in a string were all digits or not, and I wanted it to be very fast.
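For reference, that simple all-digits test can be written along these lines (a sketch; the string literal is stand-in data): translate every digit to '0', strip the zeros, and if nothing is left the string contained only digits.

select case
         when replace(translate('12345','0123456789','0000000000'),'0') is null
         then 'all digits'
         else 'not all digits'
       end as verdict
from dual;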

But Andrew's remarks got me thinking - could translate be used to detect more complex numbers?

Here's the short list of requirements:

* Detect integers
* Detect numbers with decimal point ( 4.6, 0.2, .7)
* Detect negative and positive ( leading + or - )
* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )
* Reject anything with alpha text

And commas are considered text: 99,324.1 would be alpha.

If you need to do this on 10g, no problem, as a regular expression can handle it.

First, create some test data:

drop table number_test;

create table number_test( alphacol varchar2(20));

insert into number_test values('.5');
insert into number_test values('1');
insert into number_test values('2');
insert into number_test values(' 3');
insert into number_test values('4 ');
insert into number_test values('3.14159');
insert into number_test values('127.0.0.1');
insert into number_test values('+34.45');
insert into number_test values('-54.43');
insert into number_test values('this is a test');
insert into number_test values('th1s is 4 t3st');
insert into number_test values('.');
commit;

Now select only columns where the value is a number:

select alphacol
from number_test
where regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0
order by 1

SQL> /

ALPHACOL
--------------------
3
+34.45
-54.43
1
2
3.14159
4

7 rows selected.

That seems to work.

But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.

You can use a user-defined function IS_NUMBER(), which works well but is very slow when used on large amounts of data.
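Such a function is typically written along these lines (a sketch, not necessarily the exact IS_NUMBER() referred to here):

create or replace function is_number( p_str in varchar2 )
return number
is
   dummy number;
begin
   -- rely on the conversion error to detect non-numeric strings
   dummy := to_number(p_str);
   return 1;
exception
   when value_error then
      return 0;
end;
/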

Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:

select alphacol, alpha2
from (
    select alphacol,
           -- is there a sign (+ or -)? remove it
           decode(substr(alpha2,1,1),
                  '-', substr(alpha2,2),
                  '+', substr(alpha2,2),
                  alpha2
           ) alpha2
    from (
        select alphacol,
               -- remove a single '.' if it exists
               replace(substr(alphacol,1,instr(alphacol,'.')),'.')
                   || substr(alphacol,instr(alphacol,'.')+1) alpha2
        from (
            select trim(alphacol) alphacol
            from number_test
        )
    )
)
where substr('||||||||||||||||||||||||||||||||',1,length(alpha2))
      = translate(alpha2,'0123456789','||||||||||')
/



Output from this nasty bit of SQL is identical to that when using REGEXP_INSTR:

ALPHACOL             ALPHA2
-------------------- ----------------------------------------
.5                   5
1                    1
2                    2
3                    3
4                    4
3.14159              314159
+34.45               3445
-54.43               5443

8 rows selected.

To make the TRANSLATE() function do what is needed, a lot of data manipulation had to be done in the SQL. There is so much work being done now that it takes nearly as long to run as the IS_NUMBER() function does, so there isn't much point in using TRANSLATE().

Runstats results:

SQL> @th5
.047739 secs
.037447 secs
PL/SQL procedure successfully completed.

If nothing else, this was an interesting exercise.
Categories: DBA Blogs

Rolling invalidations

Fairlie Rego - Wed, 2008-01-30 03:17
I have seen several discussions related to the auto-invalidation feature of dbms_stats. A couple of references are

http://forums.oracle.com/forums/thread.jspa?threadID=592771&tstart=30
and
http://www.orafaq.com/maillist/oracle-l/2006/10/10/0429.htm

I have tested the relevant parameter "_optimizer_invalidation_period" on 10.2.0.3 and believe that it is working as expected.

Let us take the testcase below, where the parameter (it is dynamic) is set to a value of 120.

SQL> show parameter optimizer_inva

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
_optimizer_invalidation_period       integer     120

We have the following sql statement

11:00:00 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.12
11:00:00 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.00
11:00:00 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:00 0

1 row selected.

Elapsed: 00:00:00.14
11:00:00 SQL>
11:00:00 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
2 0 0

1 row selected.

Now we gather stats on the table with the auto_invalidate parameter passed to the API.

11:00:00 SQL> exec dbms_stats.gather_table_stats('REGOFA','SOURCE',no_invalidate => DBMS_STATS.AUTO_INVALIDATE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.50

Then we keep executing the sql statement of interest to check when the new cursor will be generated.

Elapsed: 00:00:01.50
11:00:13 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:09 0

1 row selected.

Elapsed: 00:00:00.05
11:00:13 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
3 0 0

1 row selected.

Elapsed: 00:00:00.00
11:00:13 SQL> select * from v$sql_shared_cursor where sql_id='954g5yyw5tn1s';

SQL_ID ADDRESS CHILD_ADDRESS CHILD_NUMBER U S O O S L S E B P
------------- ---------------- ---------------- ------------ - - - - - - - - - -
I S T A B D L T R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
M F L
- - -
954g5yyw5tn1s 00000007D3BBCBD8 00000007D5644028 0 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N


1 row selected.
…….
11:00:37 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.01
11:00:39 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:25 0
1 2008-01-29/11:00:37 29-jan-2008 11:00:37 0

2 rows selected.

Elapsed: 00:00:00.04
11:00:39 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
7 0 0
1 0 1

2 rows selected.

Elapsed: 00:00:00.00
11:00:39 SQL> select * from v$sql_shared_cursor where sql_id='954g5yyw5tn1s';

SQL_ID ADDRESS CHILD_ADDRESS CHILD_NUMBER U S O O S L S E B P
------------- ---------------- ---------------- ------------ - - - - - - - - - -
I S T A B D L T R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
M F L
- - -
954g5yyw5tn1s 00000007D3BBCBD8 00000007D5644028 0 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N

954g5yyw5tn1s 00000007D3BBCBD8 00000007D3753DC0 1 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N
N N N

So somewhere between 11:00:00 and 11:00:39 (within the 2-minute window) a new child cursor has been generated, with roll_invalid_mismatch set to 'Y'.

I have tested the following values of _optimizer_invalidation_period and I see consistent results:

120
210
600
1800
18000

Hence this would be an ideal way to avoid a hard parse storm.

Estimating the Network Band Width Required for Standby Database

Madan Mohan - Wed, 2008-01-30 00:15
For a better DR (Disaster Recovery) site setup it is important to know the required bandwidth of the link between the primary and DR sites.

Using the formula below, we can estimate the required bandwidth based on the peak redo rate:

Required bandwidth (Mbps) = ((Redo rate in bytes per second / 0.7) * 8) / 1,000,000

How to find the redo rate for a database:
*********************************************

The redo rate can be found in a Statspack report. During the peak period of your business, take Statspack snapshots at periodic intervals; for example, three times during your peak hours, each covering a five-minute interval. The Statspack report includes a "Redo size" line under the "Load Profile" section near the beginning of the report. This line shows the "Per Second" and "Per Transaction" measurements of the redo size in bytes during the snapshot interval. Make a note of the "Per Second" value, and take the highest one from the three snapshots: that is your peak redo generation rate. For example, this highest "Per Second" value may be 394,253 bytes, or 385 KB.

Req'd Bandwidth = ((394253 / 0.7) * 8) / 1,000,000
= 4.5 Mbps
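If Statspack is not handy, the redo rate can also be sampled directly from v$sysstat. A rough sketch (the 60-second interval is arbitrary, and dbms_lock.sleep needs execute privilege on dbms_lock):

set serveroutput on
declare
   v_start number;
   v_end   number;
   v_secs  constant number := 60;   -- sample interval in seconds
   v_rate  number;
begin
   select value into v_start from v$sysstat where name = 'redo size';
   dbms_lock.sleep(v_secs);
   select value into v_end from v$sysstat where name = 'redo size';
   v_rate := (v_end - v_start) / v_secs;
   dbms_output.put_line('Redo rate (bytes/sec): ' || round(v_rate));
   dbms_output.put_line('Required bandwidth (Mbps): ' ||
                        round(((v_rate / 0.7) * 8) / 1000000, 2));
end;
/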
