Feed aggregator

Spam, Spam Filters, Being a Spammer, Being Filtered Out ...

Pawel Barut - Sun, 2008-02-03 11:46
Written by Paweł Barut
My thoughts about spam. Some time ago I wrote about spam in comments on my blog, but this time it will be about email spam. This is something that irritates me a lot from time to time. Spam is something that nobody wants to see in his mailbox. To solve this problem there are many spam filters, IP block lists and other solutions. But none of them is 100% accurate, and this is what causes problems. Spam filters should be solving problems, but many times they create new ones.
In an ideal situation a spam filter eliminates 100% of spam and passes 100% of the emails that users expect. But this is not the case. I will now show example situations that led me to the conclusion that spam filters are useless.
Situation 1.
The spam filter did not recognize a spam mail, and I had to figure out manually that it was spam. So I needed one more click to delete the message.
Situation 2.
The spam filter deleted a mail that was intended for me. This was a false alarm, as it wasn't spam.
Situation 3.
I've sent an email to a customer/friend. His spam filter blocked it. I did not receive any delivery failure message.

In my opinion situations 2 and 3 are very dangerous, and I would like to avoid both of them. Those situations make spam filters useless. It is especially dangerous when the blocking is done by the service provider and you cannot see the list of mails being filtered out. This is what really annoys me and makes me angry. In fact it makes the whole email system unreliable (I do not want to say useless), as you never know whether your recipient got your email or not.

I do not know what the solution is. I can see a few options, but none of them is perfect:
  1. Each and every email should be digitally signed by the sender, and additionally by his service provider. Spam filters should be able to verify and honor such signing, and not consider signed mail to be spam. Of course spammers could find a way to sign their mail too, and defeat this approach.
  2. Everybody should use "return receipt" to confirm mail delivery. Quite simple, but personally I never allow my mailer to send confirmations, as I do not want to reveal when I've read a mail.
  3. Make the mail system paid, so that for every mail you send you have to pay a small amount of money. $0.01 per email should not be a problem for real email users, but could cost a fortune for spammers. For this money service providers should ensure that your mail reaches the recipient.
  4. Use a captcha to validate that the email was sent by a real user. It could work like this: when the spam filter suspects spam, it sends back an email to the sender with a link to a web page, where the sender has to answer a captcha to make his mail pass through the spam filter.
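Option 4 is essentially a challenge-response protocol. As a rough sketch (the class, method names and URL are hypothetical; a real implementation would mail the link and render an actual captcha), the filter could hold a suspect message under a random token until the challenge is answered:

```python
import secrets

class ChallengeFilter:
    """Hypothetical sketch of the captcha idea: hold suspect mail
    until the sender solves a challenge tied to a random token."""

    def __init__(self):
        self.pending = {}  # token -> (sender, held message)

    def receive(self, sender, message, looks_like_spam):
        if not looks_like_spam:
            return "delivered"
        token = secrets.token_urlsafe(8)
        self.pending[token] = (sender, message)
        # In a real system this link would be mailed back to the sender.
        return f"held; challenge sent: https://example.invalid/captcha/{token}"

    def solve_challenge(self, token):
        # Called once the sender has answered the captcha correctly.
        if token in self.pending:
            sender, message = self.pending.pop(token)
            return f"released mail from {sender}"
        return "unknown token"
```

A message flagged as spam stays queued until exactly one correct challenge response releases it; a second attempt with the same token fails.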

At the end I would like to ask you: How do you deal with spam?

Cheers Paweł


Categories: Development

Secrets of Happiness

Virag Sharma - Sun, 2008-02-03 10:26

I was traveling from Agra (the city of the Taj Mahal, 30 miles away from my home town) to Hyderabad. My train, the AP Express, was late, so I thought I would pick up some books. As usual I picked some of my favorite books/magazines, like Reader's Digest and Champak (a famous kids' magazine in India). While buying them, I saw a book titled “Secrets of Happiness” by Tanushree Podder. The title looked odd to me, because I feel: how can one define happiness? Well, I just picked up the book and browsed some pages, and it looked good. It is not that different from books like “Mega Living” and “Who Will Cry When You Die” by Robin Sharma. The English is typical Indian English. Some of the stories we have already heard in our childhood from our grandmothers, moms, aunties etc., but it is really nice to revisit them. The writer's presentation is good, and that makes the book more interesting. I started reading at Agra and kept reading until Gwalior; everybody in the train wanted to sleep, but since I was reading, the light was on and eyebrows were tightening all around me. I finally switched off the light, but finished the book before reaching Hyderabad. I feel the book is worth reading, and that's why I thought I would write about it.

Check book excerpt [click Here]

Summary
  1. What you put into life, you get back.
  2. No situation is good/bad/ugly; it is our beliefs that color our perception of a situation, and we feel accordingly (good/bad/ugly) about it. So change your beliefs and thoughts and things will improve/change; otherwise, same thoughts/beliefs, same results.
  3. Keep It Simple and Straight (KISS) .....................................

Apart from this, there are two more books that are really worth reading:

The Monk Who Sold His Ferrari – Robin Sharma

Follow Your Heart – Andrew Matthews

I read the above books frequently, and feel that if only I had got these books 6 years back ……. :-)



Categories: DBA Blogs

Questions needed answers

Fadi Hasweh - Fri, 2008-02-01 04:36
Dear Apps DBAs,
I hope things are well with you. I received the following questions from a friend of mine and promised him answers; I wanted to share them with you, as maybe you can help me answer most of them.
I appreciate your help.

(We have some of the answers by Dave (thank you) in the comments, and all of them under http://www.teachmeoracle.com/forum/viewtopic.php?t=4102)

1. How do we know whether a particular instance was cloned or normally installed?
2. How can you know how many modules are already implemented in this instance?
3. How to enable archive log mode without shutting down your database?
4. How can we know whether we have already applied the latest AUTOCONFIG patch to our instance?
5. Is it possible to clone a database from a hot backup? If yes, please tell how.
6. Suppose your database size is 2000GB and you want to clone one particular datafile or tablespace. Please tell how to clone a datafile or tablespace.
7. You are applying a patch but suddenly it HANGS, and the log file doesn't show any error. What could be the reason for the HANG?
8. How to clone from multi-node to single node?
9. How to apply a patch on the forms/reports server?

Thanks again for help
Fadi

SM Automatically Shuts Down on Sign-out

Mark Vakoc - Thu, 2008-01-31 16:35
Hi All

We have recently seen a few instances where the Server Manager process stops when you sign in and out of the Server Manager machine, or when you connect remotely to the Server Manager machine and then disconnect. This issue has been SARd under 8688882.

SOLUTION:
This is actually a known Oracle issue with OC4J, as documented in Metalink document 245609.1. For Server Manager, there are two methods to correct it. The first method is recommended, since it does not involve editing the registry manually and it ensures that the Server Manager install script gets modified, so that if it is rerun in the future it will add the service correctly. Method 2, however, is likely the quickest workaround.
Method 1:
1) Make the following change in the installManagementConsoleService.bat which is located in your JDE_HOME\bin directory of the Server Manager machine:
from:"--StartParams=-Xmx512m;-Djde.home=%JDE_HOME%;-jar;oc4j.jar"
to:"--StartParams=-Xmx512m;-Xrs;-Djde.home=%JDE_HOME%;-jar;oc4j.jar"
Note the addition of -Xrs. This change REQUIRES that -Xrs come just after -Xmx512m.
2) Ensure that the Server Manager service is currently stopped.
3) Open a command prompt, and go to your JDE_HOME\bin directory.
Run:uninstallManagementConsoleService.bat
4) After the service uninstalls successfully,
Run:installManagementConsoleService.bat PASSWORD
where PASSWORD is your original jde_admin password from the server manager installation.
5) Start the service. It should now remain running after you log out.

Method 2:
1) Open the registry editor
2) Locate the following registry key:HKEY_LOCAL_MACHINE\SOFTWARE\Apache Software Foundation\Procrun 2.0\SCFMngmtConsole1\Parameters\Start
where SCFMngmtConsole1 is the last part of the display name of the service
Set the "Params" value to:
-Xmx512m
-Xrs
-Djde.home=C:\jde_home
-jar
oc4j.jar
(note the addition of -Xrs)
This change REQUIRES that -Xrs come just after -Xmx512m.
3) Start the service. It should now remain running after you log out.

What is the difference between a gadget and an application?

Oracle EPM Smart Space - Thu, 2008-01-31 11:33

When talking to people about Smart Space I hear this question come up all the time. I have found that most people have very different views on this topic so take what I have to offer as merely another opinion. In my earlier post I talked about the definition of a gadget and stated the following:


Gadgets (or Widgets) are mini applications that expose key content (bits of data) or features generally from a larger (full) application and they deliver these features or data in a simple and visually pleasing manner.


When I read this, I key in on some concepts that help me differentiate between an application and a gadget. First, I wrote that gadgets are "mini applications"; to me this means that they are smaller than an application and, at times, related to a full application. By smaller I mean smaller in two ways: smaller in physical footprint and smaller in the screen real estate that the gadget takes up. Second, gadgets focus on "key content"… "or features", where an application will have many features and tons of content. Lastly, the gadget should present this information in "a simple and visually pleasing manner". In other words, when a gadget is giving me information I should not have to guess at what it is telling me; the presentation of the data is just as important as the data itself.


Here are a few examples:


In Smart Space there is a nice search gadget that lets me search for content in Hyperion Reporting and Analysis (System 9). It is very simple, just enter a search term and get results. This is the kind of search I do 99% of the time and that is why this makes a great gadget. If I want to get more advanced I could open the Hyperion Reporting and Analysis application in my browser and navigate the search page to perform the search with a number of other key options. The gadget takes up very little room on my desktop and covers most if not all of my Hyperion Reporting and Analysis search needs, but the application is there when I need it.


In our beta I wrote a notepad gadget that is great for taking quick notes and keeping them always visible on my desktop, but I would not want to write this blog entry with it. For writing emails, documents or blog posts I want to use an application like Word that is full of great features for writing.


In the Smart Space Key Contacts gadget I can narrow the list of users I communicate with, from the long list that includes people I seldom have contact with, down to a much more focused list. At a glance I can see who is available to chat, and with a single click I can start my IM application. In this case the gadget provides a visual indication of which key contacts are available and launches me from the gadget experience into the application experience.


Here is an example from the consumer gadget world that should drive home my point about presentation of the data. I will use images to demonstrate this:


Both deal with system monitoring, but the gadget gives me the basics and at a glance tells me what I need to know (my CPU is fine, but memory consumption is a bit high). If I want features and details, then I go ahead and open the application.


To conclude: I want to keep things simple. So when creating a gadget, don't try to satisfy every use case, otherwise you will have an application on your hands; make sure that you are building something a user wants to run on their desktop all the time; and make sure what you present has the right design for a user to 'get it' at a glance. I have found that these same ideas can be applied to almost any application, and I think about these concepts whenever I am building a new gadget.

Categories: Development

Detect numbers with TRANSLATE() - Take two

Jared Still - Thu, 2008-01-31 04:03
Last week I wrote about using TRANSLATE() to detect numbers in data: Using The TRANSLATE() function...

Andrew Clarke at Radio Free Tooting pointed out the shortcomings of using TRANSLATE() to detect numbers.

As I said earlier, all I needed to do was detect if the characters in a string were all digits or not, and I wanted it to be very fast.

But Andrew's remarks got me thinking - could translate be used to detect more complex numbers?

Here's the short list of requirements:

* Detect integers
* Detect numbers with decimal point ( 4.6, 0.2, .7)
* Detect negative and positive ( leading + or - )
* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )
* Reject anything with alpha text

Commas are considered text: 99,324.1 would count as alpha.

If you need to do this on 10g, no problem, as a regular expression can handle it.

First, create some test data:

drop table number_test;

create table number_test( alphacol varchar2(20));

insert into number_test values('.5');
insert into number_test values('1');
insert into number_test values('2');
insert into number_test values(' 3');
insert into number_test values('4 ');
insert into number_test values('3.14159');
insert into number_test values('127.0.0.1');
insert into number_test values('+34.45');
insert into number_test values('-54.43');
insert into number_test values('this is a test');
insert into number_test values('th1s is 4 t3st');
insert into number_test values('.');
commit;

Now select only columns where the value is a number:

select alphacol
from number_test
where regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0
order by 1

SQL> /

ALPHACOL
--------------------
3
+34.45
-54.43
1
2
3.14159
4

7 rows selected.

That seems to work.
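Outside the database, the same check can be done with any regular-expression engine. A minimal Python sketch using the very same pattern (note that it also accepts '.5'; an empty string would match too, but the test data never produces one):

```python
import re

# Same pattern as in the REGEXP_INSTR query: optional sign, optional
# integer part, optional ".digits" fractional part. Rejects '.', IP
# addresses, and anything containing letters or commas.
NUMBER_RE = re.compile(r'^[-+]?[0-9]*(\.?[0-9]+)?$')

def is_number(s: str) -> bool:
    # TRIM() equivalent before matching, as in the SQL above.
    return NUMBER_RE.match(s.strip()) is not None
```

Running it against the article's test values accepts the numeric rows and rejects '127.0.0.1', the text rows, and the lone '.'.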

But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.

You can use the user defined function IS_NUMBER(), which works well, but is very slow if used on large amounts of data.

Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:

select alphacol, alpha2
from (
  select alphacol,
         -- if there is a leading sign (+ or -), remove it
         decode( substr(alpha2,1,1),
                 '-', substr(alpha2,2),
                 '+', substr(alpha2,2),
                 alpha2 ) alpha2
  from (
    select alphacol,
           -- remove a single '.' if it exists
           replace(substr(alphacol,1,instr(alphacol,'.')),'.')
             || substr(alphacol,instr(alphacol,'.')+1) alpha2
    from (
      select trim(alphacol) alphacol
      from number_test
    )
  )
)
where substr('||||||||||||||||||||||||||||||||',1,length(alpha2)) =
      translate(alpha2,'0123456789','||||||||||')
/



Output from this nasty bit of SQL is nearly identical to that of the REGEXP_INSTR version (it additionally picks up the '.5' row):

ALPHACOL ALPHA2
-------------------- ----------------------------------------
.5 5
1 1
2 2
3 3
4 4
3.14159 314159
+34.45 3445
-54.43 5443

8 rows selected.

To make the TRANSLATE() function do what is needed, a lot of data manipulation had to be done in the SQL. There is so much work being done that it takes nearly as long to run as the IS_NUMBER() function, so there isn't much point in using TRANSLATE().
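For comparison, the TRANSLATE() trick can be mimicked outside SQL. This Python sketch follows the same steps as the convoluted query above: trim, drop one leading sign and a single '.', then map digits to bars and compare against an all-bar string:

```python
def is_number_translate(s: str) -> bool:
    """Python re-creation of the TRANSLATE() approach: after stripping
    one sign and one decimal point, the value is numeric only if every
    remaining character is a digit (and at least one character is left)."""
    s = s.strip()
    if s and s[0] in '+-':
        s = s[1:]                      # the DECODE(substr(...)) step
    s = s.replace('.', '', 1)          # remove a single '.'
    # translate(alpha2, '0123456789', '||||||||||') equivalent:
    bars = s.translate(str.maketrans('0123456789', '||||||||||'))
    return len(s) > 0 and bars == '|' * len(s)
```

A second '.' (as in an IP address) or any letter survives the translation and breaks the all-bar comparison, so those rows are rejected just as in the SQL.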

Runstats results:

SQL> @th5
.047739 secs
.037447 secs
PL/SQL procedure successfully completed.

If nothing else, this was an interesting exercise.
Categories: DBA Blogs

Rolling invalidations

Fairlie Rego - Wed, 2008-01-30 03:17
I have seen several discussions related to the auto-invalidation feature of dbms_stats. A couple of references:

http://forums.oracle.com/forums/thread.jspa?threadID=592771&tstart=30
and
http://www.orafaq.com/maillist/oracle-l/2006/10/10/0429.htm

I have tested the relevant parameter “_optimizer_invalidation_period” on 10.2.0.3 and believe that it is working as expected.

Let us take the testcase below, where the parameter (it is dynamic) is set to a value of 120.

SQL> show parameter optimizer_inva

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
_optimizer_invalidation_period integer 120

We have the following sql statement

11:00:00 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.12
11:00:00 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.00
11:00:00 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:00 0

1 row selected.

Elapsed: 00:00:00.14
11:00:00 SQL>
11:00:00 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
2 0 0

1 row selected.

Now we gather stats on the table with the auto_invalidate parameter passed to the API.

11:00:00 SQL> exec dbms_stats.gather_table_stats('REGOFA','SOURCE',no_invalidate => DBMS_STATS.AUTO_INVALIDATE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.50

Then we keep executing the sql statement of interest to check when the new cursor will be generated.

Elapsed: 00:00:01.50
11:00:13 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:09 0

1 row selected.

Elapsed: 00:00:00.05
11:00:13 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
3 0 0

1 row selected.

Elapsed: 00:00:00.00
11:00:13 SQL> select * from v$sql_shared_cursor where sql_id='954g5yyw5tn1s';

SQL_ID ADDRESS CHILD_ADDRESS CHILD_NUMBER U S O O S L S E B P
------------- ---------------- ---------------- ------------ - - - - - - - - - -
I S T A B D L T R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
M F L
- - -
954g5yyw5tn1s 00000007D3BBCBD8 00000007D5644028 0 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N


1 row selected.
…….
11:00:37 SQL> select * from source where rownum<2;

OBJ# LINE SOURCE
---------- ---------- ---------------------------------------------------------------------------
194107 171 -- *Action: Start a new job, or attach to an existing job that has a


1 row selected.

Elapsed: 00:00:00.01
11:00:39 SQL> select a.child_number,LAST_LOAD_TIME, to_char(LAST_ACTIVE_TIME,'dd-mon-yyyy hh24:mi:ss') ,b.invalidations from
v$SQL_SHARED_CURSOR a, v$sql b where a.sql_id='954g5yyw5tn1s' and a.child_address=b.child_address ;

CHILD_NUMBER LAST_LOAD_TIME TO_CHAR(LAST_ACTIVE_ INVALIDATIONS
------------ ------------------- -------------------- -------------
0 2008-01-29/11:00:00 29-jan-2008 11:00:25 0
1 2008-01-29/11:00:37 29-jan-2008 11:00:37 0

2 rows selected.

Elapsed: 00:00:00.04
11:00:39 SQL> select executions, invalidations,child_number from v$sql where sql_id='954g5yyw5tn1s';

EXECUTIONS INVALIDATIONS CHILD_NUMBER
---------- ------------- ------------
7 0 0
1 0 1

2 rows selected.

Elapsed: 00:00:00.00
11:00:39 SQL> select * from v$sql_shared_cursor where sql_id='954g5yyw5tn1s';

SQL_ID ADDRESS CHILD_ADDRESS CHILD_NUMBER U S O O S L S E B P
------------- ---------------- ---------------- ------------ - - - - - - - - - -
I S T A B D L T R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
M F L
- - -
954g5yyw5tn1s 00000007D3BBCBD8 00000007D5644028 0 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N

954g5yyw5tn1s 00000007D3BBCBD8 00000007D3753DC0 1 N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N
N N N

So somewhere between 11:00:00 and 11:00:39 (within the 2-minute window) a new child cursor was generated with roll_invalid_mismatch set to 'Y'.

I have tested the following values of _optimizer_invalidation_period and I see consistent results:

120
210
600
1800
18000

Hence this would be an ideal way to avoid a hard-parse storm.
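The mechanism can be pictured with a toy simulation (an illustration of the idea only, not Oracle's actual implementation): each dependent cursor is given its own random invalidation point inside the rolling window, so the resulting hard parses are spread across the window instead of arriving all at once.

```python
import random

def assign_invalidation_times(cursor_ids, period_seconds, seed=None):
    """Toy model of AUTO_INVALIDATE: after a stats gather, every
    dependent cursor is tagged with a random timestamp inside the
    _optimizer_invalidation_period window; each cursor is hard parsed
    on its first execution after its own timestamp."""
    rng = random.Random(seed)
    return {cid: rng.uniform(0, period_seconds) for cid in cursor_ids}
```

With period_seconds=120 this reproduces the observation above: one cursor's new child appeared at some point between 11:00:00 and 11:00:39, another cursor would get a different point in the same window.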

Estimating the Network Bandwidth Required for a Standby Database

Madan Mohan - Wed, 2008-01-30 00:15
For a good DR (Disaster Recovery) site setup it is important to know the required bandwidth of the link between the primary and DR sites.

Using the formula below, we can estimate the required bandwidth based on the peak redo rate.

Required bandwidth (Mbps) = ((redo rate in bytes per second / 0.7) * 8) / 1,000,000

How to find the redo rate for a database:
*********************************************

Redo Rate can be found out from the Statspack report. During the peak duration of your business, run a Statspack snapshot at periodic intervals. For example, you may run it three times during your peak hours, each time for a five-minute duration. The Statspack snapshot report will include a "Redo size" line under the "Load Profile" section near the beginning of the report. This line includes the "Per Second" and "Per Transaction" measurements for the redo size in bytes during the snapshot interval. Make a note of the "Per Second" value. Take the highest "Redo size" "Per Second" value of these three snapshots, and that is your peak redo generation rate. For example, this highest "Per Second" value may be 394,253 bytes or 385 KB.

Req'd Bandwidth = ((394253 / 0.7) * 8) / 1,000,000
= 4.5 Mbps
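The calculation is easy to wrap in a small helper; a sketch of the article's formula in Python:

```python
def standby_bandwidth_mbps(redo_bytes_per_sec: float) -> float:
    """Article's formula: divide by 0.7 to leave 30% headroom for
    overhead, multiply by 8 to convert bytes to bits, then scale
    down to megabits per second."""
    return (redo_bytes_per_sec / 0.7) * 8 / 1_000_000

# Worked example from the text: a peak redo rate of 394,253 bytes/s
# comes out at roughly 4.5 Mbps.
```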

An upper bound of the transactions throughputs

Christian Bilien - Mon, 2008-01-28 15:46
Capacity planning fundamental laws are seldom used to identify benchmark flaws, although some of these laws are almost trivial. Worse, some performance assessments provide performance outputs which are individually commented without even realizing that physical laws bind them together. Perhaps the simplest of them all is the Utilization law, which states that the utilization of […]
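The post is cut off here, but the Utilization Law it refers to is standard queueing theory: U = X * S, i.e. utilization equals throughput times service demand. Since utilization cannot exceed 100%, any single resource bounds throughput at X <= 1/S, which is the kind of upper bound the title refers to. A minimal sketch of that bound (the 5 ms example is mine, not from the post):

```python
def max_throughput(service_demand_sec: float) -> float:
    """Utilization Law: U = X * S. Capping utilization U at 1 (100%)
    gives the throughput upper bound X_max = 1 / S for the resource
    with service demand S per transaction."""
    return 1.0 / service_demand_sec

# e.g. 5 ms of CPU per transaction bounds throughput near 200 tps,
# whatever the benchmark report claims.
```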
Categories: DBA Blogs

Essbase and IBM DB2

Dylan Wan - Mon, 2008-01-28 13:40

I read an interesting article, IBM DB2 Minus OLAP, in SQL Server magazine. Essbase was OEM-ed and re-branded by IBM as the IBM DB2 OLAP Server for ten years. The relationship ended two years ago.

Many DB2 customers actually built their custom analytics applications on top of Essbase.


Categories: BI & Warehousing

Sending A Message - Using Essbase Custom Defined Functions

Oracle EPM Smart Space - Sun, 2008-01-27 19:46
In my last post, I talked about how the messaging capabilities contained within the Smart Space product can be leveraged in non-traditional ways. One of the best ways is by using the Smart Space Java API.

I am a big fan of the best OLAP database on the planet: Essbase. Essbase has a great feature where a developer can write their own custom Java functions that can then be called by the Essbase calculator. I have always wanted to have the ability from within an Essbase calculation script to notify Essbase users that a calculation has completed. The combination of the Smart Space messaging Java API and the Essbase custom defined function (CDF) will now allow me to fulfill that dream.

When writing to the Smart Space Java API, there are a number of JAR files that are required from Smart Space (inform.jar, smack.jar, smackx.jar, smackx-debug.jar and log4j.jar). There are four main operations in the Smart Space API: connecting, disconnecting, creating the message and sending the message. As of today, both a discussion message (a message that uses the discussion dialog) and a notification message (a message that uses the Smart Space toast dialog) can be sent using the Java API.

This 9.3.1 code sample shows the Smart Space messaging operations exposed in the proper format to be used as an Essbase custom defined function.

package com.hyperion.essbase.cdf.smartspace;

import com.hyperion.smartspace.inform.info.*;
import com.hyperion.smartspace.inform.impl.InformImpl;
import java.util.List;
import java.util.ArrayList;

public class SSMessage {
private static InformImpl inform = new InformImpl();
private static String sServer;

public static void main(com.hyperion.essbase.calculator.Context ctx,String[] args)
{
sendMessage(args[0],args[1],args[2]);
}

public static void connect(String sColabServer, String sUser, String sPassword) {
inform = new InformImpl();
try
{
System.out.println("Trying to connect");
sServer = sColabServer;
inform.connect(sServer, 5222, sUser + "\\40native\\20directory@" + sColabServer, sPassword);
}
catch (Exception e1)
{
System.out.println("Error: " + e1.getMessage().toString());
}
}

public static void disconnect() {
try
{
System.out.println("Disconnecting");
inform.disconnect();
inform = null;
}
catch (Exception e1)
{
System.out.println("Error: " + e1.getMessage().toString());
}
}

public static void sendMessage(String sType, String sToUser, String sMessage) {
MessageType mType = new MessageType();

try
{

if (sType.toUpperCase().equals("DISCUSS")) {
mType = MessageType.Chat;
} else if (sType.toUpperCase().equals("MEET")) {
mType = MessageType.GroupChat;
}
else
{
mType = MessageType.Headline;
}

List recipients = new ArrayList();
recipients.add(sToUser + "\\40native\\20directory@" + sServer);

//send created message
if (inform.isConnected() == true)
{
//create jabber message with recipients list and type
Message message = inform.createMessage("t", "t", recipients, mType);
//create body
message.setBody(sMessage);

inform.sendMessage(message, recipients);
System.out.println("Message sent");
}
else
{
System.out.println("Message not sent");
}

}
catch (Exception e1)
{
System.out.println("Error: " + e1.getMessage().toString());
}
}
}

Once this code has been compiled and registered with Essbase as a CDF, the functions can then be accessed from within an Essbase calculation script. This example script sends a discussion message for every value in the database where there is a negative variance for January Sales, using the Sample Basic Essbase database. It also shows how to send a notification message to any user.

Set UpdateCalc Off;
Set ClearUpdateStatus After;

RUNJAVA com.hyperion.essbase.cdf.smartspace.SSConnect
"servername"
"essbaseBOT"
"password";

RUNJAVA com.hyperion.essbase.cdf.smartspace.SSMessage
"DISCUSS"
"mlarimer"
"The following items currently have a negative variance to budget for Jan Sales:" ;


Fix (@LevMbrs("Market",0),@LevMbrs("Product",0),"Jan","Actual")

"Sales" (IF ("Variance" < 0)
@JechoString(@Name(@CurrMbr("Market")));
@JsendMessage( "DISCUSS", "mlarimer", @JgetConcatenate(@LIST(@Name(@CurrMbr("Market"))," -> ", @Alias(@CurrMbr("Product"))," = ", @JgetString("Variance"))));
ENDIF);
EndFix

RUNJAVA com.hyperion.essbase.cdf.smartspace.SSMessage "NOTIFY" "mlarimer" "one" ;
RUNJAVA com.hyperion.essbase.cdf.smartspace.SSMessage "NOTIFY" "mlarimer" "two" ;
RUNJAVA com.hyperion.essbase.cdf.smartspace.SSMessage "NOTIFY" "mlarimer" "three" ;

RUNJAVA com.hyperion.essbase.cdf.smartspace.SSDisconnect;


Note: This example is very simplistic as it uses a hardcoded username and password in the script itself. There are a number of ways to use variables that can replace the username and password and thus not have them hardcoded in the script.

I am really excited about the future of a product like Oracle EPM Smart Space. It will enable developers to build cool new solutions that were never possible before.


Categories: Development

UKOUG - My presentation slides are now available...

Anthony Rayner - Sun, 2008-01-27 12:22
Just a quick post to mention my slides from my UKOUG presentation, 'Building The Rich User Interface with Oracle Application Express and Ajax' are now available to download.

Sorry about the delay for anyone who wanted to take a look at this sooner; I wanted to tidy up some of the code before making it available and have only just had a chance to do so.

If you are interested in catching this again, I will be doing a very similar presentation at the UKOUG Combined SIG Day at Baylis House in Slough on 27th February.

So come along and say hi!

Categories: Development

It's here and live!

Peter Khos - Sun, 2008-01-27 12:22
It's Day 3 of our Order Management implementation (Oracle E-Business Suite) weekend. The team started on Friday and we expect to have the business folks in this afternoon to do validation and verification. There is a whole bunch of IT, business and of course, our implementation partner working really, really hard to ensure that the implementation goes well. It is complicated by the fact that …

Introducing RuleGen

Jornica - Sun, 2008-01-27 09:15

I've been working with CDM Ruleframe for a few years now. Recently I've attended a presentation about another framework focusing on business rules called RuleGen.

RuleGen is a framework written in PL/SQL that generates code to maintain data integrity constraints. Right now RuleGen implements table constraints (e.g. at most one president allowed in EMP) and database constraints (e.g. every department has at least two employees). Enforcing a data integrity constraint is done in two steps. The first step administers the affected rows of a transaction (inserts, updates and deletes). The second step validates the constraint against the affected rows; if the constraint is violated, an exception is raised. You could also say the first step is about WHEN the constraint is validated and the second step is about HOW it is validated.

There are switches to influence the runtime behavior of RuleGen, like the execution model: stop on the first constraint violation, or continue after the first violation in order to collect a list of violations (like the message stack in CDM Ruleframe). It is also possible to defer checking (as opposed to immediate checking).

A difference between CDM Ruleframe and RuleGen is the relationship with Oracle Designer: RuleGen is not integrated with Oracle Designer, whereas CDM Ruleframe is. The definition (remember HOW and WHEN) of data integrity constraints is done either with SQL*Plus or with a small APEX application. Another difference is that no PL/SQL coding is required with RuleGen; the definition of data integrity constraints is done entirely in SQL queries.

In my opinion, the functionality of RuleGen looks very promising. Keep an eye on it!
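The two-step pattern described above (first record which rows a transaction touched, then validate the constraint against only those rows) can be sketched outside PL/SQL. A hypothetical Python illustration using the "at most one president in EMP" rule; the function and column names are mine, not RuleGen's:

```python
def validate_at_most_one_president(emp_rows, affected_keys):
    """Step 2 of the two-step pattern: validate the constraint, but
    only when the transaction touched rows that could violate it.
    Step 1 (done by generated triggers in RuleGen) is assumed to have
    produced 'affected_keys': the keys of inserted/updated/deleted rows."""
    touched = [r for r in emp_rows if r["empno"] in affected_keys]
    if not any(r["job"] == "PRESIDENT" for r in touched):
        return True  # the touched rows cannot violate this rule
    presidents = [r for r in emp_rows if r["job"] == "PRESIDENT"]
    if len(presidents) > 1:
        raise ValueError("at most one president allowed in EMP")
    return True
```

The point of step 1 is visible here: a transaction that only touches clerks skips the (potentially expensive) validation query entirely.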

Small changes make a difference

Virag Sharma - Sun, 2008-01-27 08:41

Small changes make a difference.

How small things can make a difference; here is one live example. One of my friends wanted to learn 11g, so she downloaded it and started installing on a Linux box, creating the database manually. The next time she logged in she did not know where she had installed 11g; since she had created the database manually, there was no entry in “oratab”.

I remembered the command “pwdx” on Unix Solaris, which gives the current working directory of a process:

$ uname -a

SunOS mysun 5.8 Generic_xyz sun4u sparc SUNW,Ultra-Enterprise

MYSUN:oracle> (10.1.0.4) /usr/proc/bin

$ ps -aef | grep pmon

oracle 2424 1 0 Jan 18 ? 13:39 ora_pmon_test
oracle 8337 13002 0 05:31:47 pts/7 0:00 grep pmon

mysun:oracle> (10.1.0.4) /usr/proc/bin

$ pwdx 2424
2424: /u01/app/oracle/product/10.1.0.4/dbs


But on Linux (RHEL 4) there is no “pwdx” equivalent. On Linux you can check the current working directory of a process (here ORACLE_HOME/dbs, i.e. the lock file location) under /proc/<pid>:


[root@apps001 proc]# ps -aef | grep pmon

oracle 2826 1 0 20:55 ? 00:00:00 xe_pmon_XE
oracle 8268 1 0 21:21 ? 00:00:00 ora_pmon_orcl11g
root 23180 13728 0 23:19 pts/2 00:00:00 grep pmon

[root@apps001 proc]# ls -l /proc/8268

total 0
dr-xr-xr-x 2 oracle oinstall 0 Jan 27 23:20 attr
-r-------- 1 oracle oinstall 0 Jan 27 23:20 auxv
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:19 cmdline
lrwxrwxrwx 1 oracle oinstall 0 Jan 27 23:20 cwd -> /u01/app/oracle/11.1.0/dbs
-r-------- 1 oracle oinstall 0 Jan 27 23:20 environ
lrwxrwxrwx 1 oracle oinstall 0 Jan 27 23:20 exe -> /u01/app/oracle/11.1.0/bin/oracle
dr-x------ 2 oracle oinstall 0 Jan 27 23:20 fd
-rw-r--r-- 1 oracle oinstall 0 Jan 27 23:20 loginuid
-r-------- 1 oracle oinstall 0 Jan 27 23:20 maps
-rw------- 1 oracle oinstall 0 Jan 27 23:20 mem
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:20 mounts
lrwxrwxrwx 1 oracle oinstall 0 Jan 27 23:20 root -> /
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:18 stat
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:20 statm
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:16 status
dr-xr-xr-x 3 oracle oinstall 0 Jan 27 23:20 task
-r--r--r-- 1 oracle oinstall 0 Jan 27 23:20 wchan
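Instead of listing the whole /proc/<pid> directory, the cwd symlink can be read directly with readlink. A minimal sketch, using the current shell's own PID ($$) as a stand-in, since the pmon PID differs per machine:

```shell
# readlink resolves /proc/<pid>/cwd to the process's working directory.
# We use the current shell's PID here purely for illustration;
# substitute the pmon PID found via ps.
readlink "/proc/$$/cwd"
```

For a pmon process the output would be the $ORACLE_HOME/dbs directory, just as pwdx reports on Solaris.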


[oracle@apps001 oracle]$ cd /u01/app/oracle/11.1.0/dbs

[oracle@apps001 dbs]$ ls
hc_orcl11g.dat initdw.ora init.ora lkORCL11G orapworcl11g spfileorcl11g.ora

[oracle@apps001 dbs]$ /sbin/fuser *

hc_orcl11g.dat: 8268 8268m 8270 8270m 8274 8274m 8276 8276m 8278 8278m 8282 8282m 8284 8284m 8286 8286m 8288 8288m 8290 8290m 8292 8292m 8294 8294m 8296 8296m 8298 8298m 8300 8302 8312 8312m 8314 8314m 8316 8316m 8320 8322 8394 8394m 22755 25372

lkORCL11G: 8268 8276 8278 8284 8286 8288 8290 8292 8294 8296 8298 8312 8314 8316 8320 8322 8394 22755 25372
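Since pmon's working directory is $ORACLE_HOME/dbs, the home itself can be derived by stripping the trailing /dbs component. A small helper sketch (the function name is mine; the path is just the one from this example):

```shell
# Derive ORACLE_HOME from a pmon working directory by stripping
# the trailing /dbs component with shell parameter expansion.
oracle_home_from_cwd() {
  printf '%s\n' "${1%/dbs}"
}

oracle_home_from_cwd /u01/app/oracle/11.1.0/dbs   # prints /u01/app/oracle/11.1.0
```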

I asked her to create a small shell script to set the Oracle 11g environment variables. She wrote the following script:

ORACLE_HOME=/u01/app/oracle/11.1.0/
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH
ORACLE_SID=orcl11g
export ORACLE_SID

~

"11g.env" 7L, 136C written

[oracle@apps001 ~]$chmod 755 11g.env
[oracle@apps001 ~]$ . ./11g.env

She called me after some time and said that after sourcing the 11g environment file, when she tried to connect to Oracle 11g as sysdba, it said “Connected to an idle instance”.

[oracle@apps001 ~]$ . ./11g.env

[oracle@apps001 ~]$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.1.0.6.0 - Production on Sun Jan 27 23:45:42 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to an idle instance.

Ctrl-D

[oracle@apps001 ~]$ ps -aef | grep pmon
oracle 2826 1 0 20:55 ? 00:00:00 xe_pmon_XE
oracle 8268 1 0 21:21 ? 00:00:00 ora_pmon_orcl11g
oracle 24791 24446 0 23:46 pts/4 00:00:00 grep pmon

[oracle@apps001 8268]$ cd /u01/app/oracle/11.1.0/dbs

[oracle@apps001 dbs]$ /sbin/fuser *

hc_orcl11g.dat: 8268 8268m 8270 8270m 8274 8274m 8276 8276m 8278 8278m 8282 8282m 8284 8284m 8286 8286m 8288 8288m 8290 8290m 8292 8292m 8294 8294m 8296 8296m 8298 8298m 8300 8302 8312 8312m 8314 8314m 8316 8316m 8320 8322 8394 8394m 22755 25372

lkORCL11G: 8268 8276 8278 8284 8286 8288 8290 8292 8294 8296 8298 8312 8314 8316 8320 8322 8394 22755 25372

Running “fuser” on the lock file showed Oracle background processes attached to it, which means the database is up (possibly in nomount, mount, or open mode).

I checked the alert log to make sure things were fine; it showed the database was OPEN.

[oracle@apps001 oracle]$ adrci
ADRCI: Release 11.1.0.6.0 - Beta on Sun Jan 27 23:52:16 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

ADR base = "/u01/app/oracle"
adrci> set editor vi
adrci> show alert

Choose the alert log from the following homes to view:

1: diag/tnslsnr/apps001/listener
2: diag/rdbms/orcl2/orcl2
3: diag/rdbms/stdorcl2/stdorcl2
4: diag/rdbms/orcl11g/orcl11g
Q: to quit

Please select option:4

space available in the underlying filesystem or ASM diskgroup.
2008-01-27 21:22:07.366000 +05:30
Completed: ALTER DATABASE OPEN

2008-01-27 21:22:26.499000 +05:30
Starting background process CJQ0
CJQ0 started with pid=26, OS id=8394
2008-01-27 21:23:24.231000 +05:30
Thread 1 advanced to log sequence 3
Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/orcl11g/redo03.log
2008-01-27 21:29:03.616000 +05:30

Finally I decided to check the shell script that sets the 11g environment variables. There seemed to be no issue with the script, because we were able to run “adrci” and view the alert log.

[oracle@apps001 ~]$vi 11g.env
ORACLE_HOME=/u01/app/oracle/11.1.0/
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH
ORACLE_SID=orcl11g
export ORACLE_SID

~
~
"11g.env" 7L, 136C written


#
# I made a small change to the script: removed the trailing “/” from ORACLE_HOME
#


vi 11g.env
ORACLE_HOME=/u01/app/oracle/11.1.0
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH
ORACLE_SID=orcl11g
export ORACLE_SID

~
~
"11g.env" 7L, 136C written

After I removed the trailing “/” from ORACLE_HOME, she was able to connect to the database.

[oracle@apps001 ~]$ . ./11g.env

[oracle@apps001 ~]$ sys

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Jan 28 00:00:54 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

So a small change makes a difference.
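Why does one character matter? To my understanding, Oracle derives the key used to attach to the instance's shared memory from the exact ORACLE_HOME string (together with ORACLE_SID), so “/u01/app/oracle/11.1.0/” and “/u01/app/oracle/11.1.0” are treated as two different homes; sqlplus computed a key for an instance that does not exist and therefore reported an idle instance. A trivial sketch of the string comparison:

```shell
# The two ORACLE_HOME spellings are different strings, so anything
# keyed off the exact string (such as a shared-memory key) differs too.
home_with_slash=/u01/app/oracle/11.1.0/
home_without=/u01/app/oracle/11.1.0
if [ "$home_with_slash" = "$home_without" ]; then
  echo "same home"
else
  echo "different homes"   # this branch is taken
fi
```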

Categories: DBA Blogs

SQL Injection for parents

Andrew Fraser - Fri, 2008-01-25 18:46

This page has been moved to http://andrewfraserdba.com/?p=51


Categories: DBA Blogs

Server Manager Logging - Part 3

Mark Vakoc - Fri, 2008-01-25 16:46
Managed Home Agent Logging
OK, now on to the good stuff. The managed home agent, herein referred to simply as the agent, is responsible for the majority of the work performed by Server Manager (SM). This includes:
  • registering/installing E1 managed instances
  • registering and managing IBM WebSphere and Oracle Application Server
  • managing the configuration file(s) for E1 managed instances
  • starting/stopping E1 servers and the J2EE servers
  • performing tools release upgrades/downgrades for E1 servers
  • discovering and sending log files to the management console for viewing
You get the idea: the agent does most of the work. If anything goes wrong during one of these tasks, or you're just an inquisitive person, the place to look to see what's going on is the agent's log files.
Logging Overview

Before we dig too far into the log files themselves, let's get an understanding of the logging system used. Server Manager makes use of the standard java logging framework. This framework is different from the logging engine used by most other E1 software components, which is based on the jdelog.properties configuration file.

The java logging framework exposes, and we make use of, seven levels of logging, outlined below in descending order:


  • SEVERE: A critical error has occurred from the perspective of the agent. Critical errors are non-recoverable and require immediate attention. An example would be a critical problem when initializing the agent that would prevent it from starting or functioning properly.
  • WARNING: Denotes an abnormal or unexpected result that is recoverable, from the perspective of the agent. An example would be a failure while changing the tools release of an E1 server. It is a significant problem; however, the agent will recover, so it is considered a warning.
  • INFO: Denotes informative messages providing contextual information as to what the agent is doing. An E1 server that is started using SM would have a log message indicating so at the INFO level.
  • CONFIG: Not commonly used in SM; a message at the configuration level is simply a means for logging information particular to that installation, such as the platform of the server.
  • FINE: A lower-level message, still intended as human readable, that provides insight into what the agent is doing. This can be thought of as a standard "debug" message.
  • FINER: An even lower-level trace or debug message. Messages are classified at this level rather than FINE if they occur very frequently or are less likely to be of interest.
  • FINEST: The lowest level of logging. These messages may be very frequent, verbose, or cryptic, and may only be meaningful to Oracle development.
The division between SEVERE and WARNING is a little muddy. You may see messages that appear as SEVERE that really should be qualified as WARNING based on the above descriptions, and less frequently there may be a WARNING message that should have been classified as SEVERE.

The default logging level for the agent is FINE, which should be fine for nearly all troubleshooting needs. The agent is much less verbose than some of the other E1 products. Keeping the logging level at FINE, FINER, or even FINEST will have negligible impact on performance. There just aren't that many messages emitted to cause a problem.
Agent Log Location
The agent log files are located in the directory <agent_install>/logs, where <agent_install> refers to the install location supplied to the managed home installer. The log files are named e1agent_#.log. The logging mechanism automatically splits log files into approximately 10 MB chunks, and up to ten log files are retained, so the maximum disk space needed for the agent logs is about 100 MB. The '#' in the log name is the index of the log file, with zero being the most recent and nine the oldest. When the current log file, e1agent_0.log, reaches approximately 10 MB, the oldest chunk (e1agent_9.log) is deleted and all the log files are renamed with the index incremented: e1agent_0.log becomes e1agent_1.log, and a new e1agent_0.log file is created. Another file, 'e1agent_0.log.lck', may also be present; this file is created by Java as a lock file and may be ignored.
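The rotation scheme described above can be sketched as follows. This is purely illustrative (the agent performs the rotation itself), and three files in a temporary directory stand in for the real ten:

```shell
# Simulate the agent's log rotation: drop the oldest chunk, shift the
# remaining indexes up by one, and start a fresh e1agent_0.log.
dir=$(mktemp -d)
touch "$dir"/e1agent_0.log "$dir"/e1agent_1.log "$dir"/e1agent_2.log

rm -f "$dir/e1agent_2.log"                   # oldest chunk is deleted
mv "$dir/e1agent_1.log" "$dir/e1agent_2.log" # indexes shift up
mv "$dir/e1agent_0.log" "$dir/e1agent_1.log"
touch "$dir/e1agent_0.log"                   # new current log
ls "$dir"
```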

The log files for the agent may also be viewed directly through the management console. From the management dashboard (start page) select the managed home of interest.
At the bottom of the page for the selected managed home is a log files section.
Selecting a log file will transfer the log from the remote machine to the management console so it may be viewed using the integrated log viewer, as shown below.
Each log message consists of two or more lines. The first line contains the timestamp, originating java class, and the name of the current method from which the message was written. The second line (or multiple lines) contains the actual log message.
Changing Logging Levels

The default level for agent logging is FINE, which is appropriate for most occasions. It may be desirable to change the level to FINER or FINEST to troubleshoot an issue, or move it to a higher level, such as INFO, for some reason. If the agent is running and connected to a management console you may change the level directly from the console. Located on the left hand side of the page for the management agent is a section entitled 'Managed Home Details'. In there is a dropdown for 'Agent Log Level'. The current value will be shown.

To change the level simply select the desired log level from the drop down list. The change will immediately take effect. Changing the log level through this dropdown is not permanent; when the agent is restarted it will resume logging at the FINE level.
If you wish to permanently change the logging level, or you wish to set the level to something other than FINE at agent startup, you may add the following line to the <agent_install>/config/agent.properties file:

log.level=FINER

You may use any of the log levels described above, or OFF to prevent all logging.
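Adding the property can also be scripted. A sketch using a temporary file in place of the real <agent_install>/config/agent.properties (substitute your actual path):

```shell
# Append the logging level to a properties file. A temp file stands in
# for <agent_install>/config/agent.properties here.
cfg=$(mktemp)
echo "log.level=FINER" >> "$cfg"
grep '^log.level=' "$cfg"   # prints log.level=FINER
```

Remember that the agent reads this file at startup, so a restart is needed for the change to take effect.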

I do not recommend permanently keeping the log level at OFF, SEVERE, WARNING, CONFIG, or INFO. You never know when you may need the additional information provided by the FINE level. The agent will automatically remove old log files so there should be no concern of log files filling a disk.

Embedded Agent Logging

The E1 enterprise server and the E1 HTML server both contain variants of the management agent. In the embedded form the log messages generated by the management agent are not normally written anywhere. To enable the same logging for the embedded management agents, simply add the following line to the <agent_install>/config/agent.properties file:

log.embedded.instances=true

After adding this line and restarting the E1 managed instance, a series of log files will appear in the <agent_install>/logs directory. The filename of each log file will be the instance name of the E1 server.

It is very rare to need to enable this logging; that said, it may be useful for troubleshooting why the embedded agents are not communicating with the management console.

This form of logging is not available to an E1 web server if it is deployed to a federated (e.g. network deployment) node in IBM WebSphere.
Final Thoughts

Stack traces: some log messages may contain stack traces. This information is useful for identifying the source of the message. A stack trace is not always indicative of a problem; the level of the message is much more telling. Stack traces on INFO, FINE, FINER, or FINEST messages are included to provide more information and do not indicate an error.
