Feed aggregator

Oracle Pro*C on Windows with Express Edition Products

Mark A. Williams - Mon, 2009-06-29 10:34

NOTE: I have edited the intro text here from the original source as a result of some discussions I've had. These discussions are ongoing so I can't post the results as of yet. (3-June-2009 approximately 5:00 PM).

I thought I would take an introductory look at using the Oracle Pro*C precompiler with Express Edition products. Here are the components I will use for this purpose (links valid at time of posting):

  • Oracle Database 10g Express Edition (available here)
  • Oracle Enterprise Linux (available here)
  • Oracle Instant Client 11.1.0.7 Packages for Microsoft Windows 32-bit (available here)
      • Instant Client Package – Basic
      • Instant Client Package – SDK
      • Instant Client Package – Precompiler
      • Instant Client Package – SQL*Plus
  • Microsoft Windows XP Professional 32-bit with Service Pack 3
  • Microsoft Visual C++ 2008 Express Edition (available here)
  • Windows SDK for Windows Server 2008 and .NET Framework 3.5 (available here)

For the purposes of this discussion you will need to have already installed (or have access to) Oracle Database with the HR sample schema. You will also need to have installed Visual C++ 2008 Express Edition and the Windows SDK on the machine you will use as your development machine. For a walkthrough of installing Visual C++ 2008 Express Edition and the Windows SDK, please see this link. Note that even though the SDK's name suggests it is only for Windows Server 2008, it is also supported on Windows XP and Windows Vista.

In my environment I have installed Oracle Database 10g Express Edition on a host running Oracle Enterprise Linux. The host name is "oel02" (not especially clever, I realize). The Windows XP machine that I will use as the development machine is named "chepstow" (perhaps marginally more clever) and Visual C++ Express Edition and the Windows SDK are already installed. I have downloaded the four Instant Client packages listed above to the "c:\temp" directory on chepstow. The SQL*Plus package is not required; however, I find it convenient so I always install it. So, since I already have a database server and the Microsoft tools are installed, all that remains is to install the Instant Client packages.

Installing the Instant Client Packages

It is incredibly easy to install the Instant Client packages – simply unzip them! I chose to unzip them (on chepstow, my development machine) to the "c:\" directory and this created a new "c:\instantclient_11_1" directory and various sub-directories. I then added the following two directories to the system path:

  • C:\instantclient_11_1
  • C:\instantclient_11_1\sdk

NOTE: I added the two directories to the beginning of the system path and had no other Oracle products installed. See comments for more information about this. (Added 29 June 2009 approximately 11:30 AM)
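To confirm the path changes took effect, open a new command-prompt window and run "proc" with no arguments; it should print the precompiler banner and its default option values rather than a "'proc' is not recognized" error:

C:\>proc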

Setting up the Pro*C Configuration File

I know it is easy to skip reading a README file, but it is important that you do read the PRECOMP_README file in the Instant Client root directory. Pro*C will, by default, look for a configuration file named "pcscfg.cfg" when it is invoked. In the environment that I have created (default installs of all products) Pro*C will want to find this file in the "c:\instantclient_11_1\precomp\admin" directory. However, if you look at your install (if you have done the same as me) you will notice there is no such directory! Therefore you should create this directory ("c:\instantclient_11_1\precomp\admin"). You should then copy the "pcscfg.cfg" file from the "c:\instantclient_11_1\sdk\demo" directory to the "c:\instantclient_11_1\precomp\admin" directory.

The "pcscfg.cfg" file will initially contain the following single line:

define=(WIN32_LEAN_AND_MEAN)

Below this line, add the following four lines:

sys_include=C:\PROGRA~1\MICROS~1.0\VC\include\sys
include=C:\PROGRA~1\MICROS~3\Windows\v6.1\Include
include=C:\PROGRA~1\MICROS~1.0\VC\include
include=C:\instantclient_11_1\sdk\include
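
After adding them, the complete "pcscfg.cfg" should read:

define=(WIN32_LEAN_AND_MEAN)
sys_include=C:\PROGRA~1\MICROS~1.0\VC\include\sys
include=C:\PROGRA~1\MICROS~3\Windows\v6.1\Include
include=C:\PROGRA~1\MICROS~1.0\VC\include
include=C:\instantclient_11_1\sdk\include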

Save the file and exit your editor.

Note that the directory names above are the 8.3 "short" names, which ensures they contain no spaces; spaces in these paths cause problems for the Pro*C precompiler. To help "translate" the directories above, here are the long versions (be sure you do not enter these):

sys_include=C:\Program Files\Microsoft Visual Studio 9.0\VC\include\sys
include=C:\Program Files\Microsoft SDKs\Windows\v6.1\Include
include=C:\Program Files\Microsoft Visual Studio 9.0\VC\include
include=C:\instantclient_11_1\sdk\include

You can find the short names by using "dir /x" in a command-prompt window.
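For example, to see the short names under "C:\Program Files" (the short names vary from machine to machine, so check your own rather than copying mine):

C:\>dir /x "C:\Program Files"

The 8.3 short names appear in a column just before the long names; entries whose names already fit the 8.3 rules show a blank in that column.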

Adding Directories to Visual Studio C++ 2008 Express Edition

Next you should add the Oracle Instant Client include and library directories to Visual Studio. To do this, simply perform the following steps:

  • Select Tools –> Options to open the Options dialog
  • Expand the "Projects and Solutions" node
  • Click the "VC++ Directories" item
  • Under "Show directories for:" select "Include files"
  • Click underneath the last entry in the list (you should get a highlighted line with no text)
  • Click the folder button to create a new line
  • Enter "c:\instantclient_11_1\sdk\include" and press Enter
  • Under "Show directories for:" select "Library files"
  • Click underneath the last entry in the list (you should get a highlighted line with no text)
  • Click the folder button to create a new line
  • Enter "c:\instantclient_11_1\sdk\lib\msvc" and press Enter
  • Click the OK button to save the changes

Create a New Project

WARNING: You should create your project in a directory (and path) that has no spaces in it. If you create the project in a directory or path with spaces you will receive errors during the precompile phase. I used "c:\temp" for this example.

Now create a new project in Visual Studio:

  • Select File –> New Project to open the New Project dialog
  • Select "Win32" as the project type
  • Select "Win32 Console Application" under "Templates"
  • Give the project a name (I used "proctest" in keeping with my clever naming tradition)
  • I always choose to de-select "Create directory for solution" and click OK
  • Click the "Next" button in the application wizard
  • Click the "Empty project" checkbox under "Additional options"
  • Click the "Finish" button

Create the Pro*C Source File

To create the Pro*C source file, perform the following steps:

  • Right-click "Source Files" and select Add –> New Item… from the context menu
  • Select "Code" under "Visual C++"
  • Select "C++ File (.cpp)" under "Visual Studio installed templates" (note that you will not actually create C++ code in this example)
  • Give the file a name such as "proctest.pc" and click "Add"

Here's the Pro*C source I used for this example (this is clearly sample code and lots is left out!):

/*
** suppress certain warnings
*/
#ifdef WIN32
#define _CRT_SECURE_NO_DEPRECATE 1
#endif

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>

EXEC SQL BEGIN DECLARE SECTION;

/*
** defines for VARCHAR lengths.
*/
#define UNAME_LEN 30
#define PWD_LEN   30
#define DB_LEN    48
#define FNAME_LEN 32
#define LNAME_LEN 32

/*
** variables for the connection
*/
VARCHAR username[UNAME_LEN];
VARCHAR password[PWD_LEN];
VARCHAR dbname[DB_LEN];

/*
** variables to hold the results
*/
int ctr;
int empid;
VARCHAR fname[FNAME_LEN];
VARCHAR lname[LNAME_LEN];

EXEC SQL END DECLARE SECTION;

/*
** declare error handling function
*/
void sql_error(char *msg)
{
  char err_msg[128];
  size_t buf_len, msg_len;

  EXEC SQL WHENEVER SQLERROR CONTINUE;

  printf("\n%s\n", msg);
  buf_len = sizeof (err_msg);
  sqlglm(err_msg, &buf_len, &msg_len);
  printf("%.*s\n", (int) msg_len, err_msg);

  EXEC SQL ROLLBACK RELEASE;

  exit(EXIT_FAILURE);
}

int main(void)
{
  /*
  ** Copy the username into the VARCHAR.
  */
  strncpy((char *) username.arr, "hr", UNAME_LEN);
  username.len = strlen("hr");
  username.arr[username.len] = '\0';

  /*
  ** Copy the password.
  */
  strncpy((char *) password.arr, "hr", PWD_LEN);
  password.len = strlen("hr");
  password.arr[password.len] = '\0';

  /*
  ** copy the dbname (using EZCONNECT syntax)
  */
  strncpy((char *) dbname.arr, "oel02/XE", DB_LEN);
  dbname.len = strlen("oel02/XE");
  dbname.arr[dbname.len] = '\0';

  /*
  ** register sql_error() as the error handler.
  */
  EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error--\n");

  /*
  ** Connect to database.  Will call sql_error()
  ** if an error occurs when connecting.
  */
  EXEC SQL CONNECT :username IDENTIFIED BY :password USING :dbname;

  printf("\nConnected to ORACLE as user: %s\n\n", username.arr);

  /*
  ** simple select statement
  */
  EXEC SQL DECLARE emps CURSOR FOR
    SELECT   employee_id,
             first_name,
             last_name
    FROM     employees
    ORDER BY last_name,
             first_name;

  /*
  ** open the cursor
  */
  EXEC SQL OPEN emps;

  /*
  ** when done fetching break out of the for loop
  */
  EXEC SQL WHENEVER NOT FOUND DO break;

  /*
  ** simple counter variable
  */
  ctr = 0;

  /*
  ** print a little header
  */
  printf("Employee ID  First Name            Last Name\n");
  printf("===========  ====================  =========================\n");

  /*
  ** fetch all the rows
  */
  for (;;)
  {
    EXEC SQL FETCH emps INTO :empid, :fname, :lname;

    /*
    ** null-terminate the string values
    */
    fname.arr[fname.len] = '\0';
    lname.arr[lname.len] = '\0';

    /*
    ** print the current values
    */
    printf("%-13d%-22s%-25s\n", empid, fname.arr, lname.arr);

    ctr++;
  }

  /*
  ** close the cursor
  */
  EXEC SQL CLOSE emps;

  /*
  ** provide simple feedback on how many rows fetched
  */
  printf("\nFetched %d employees.\n", ctr);

  /*
  ** disconnect from database
  */
  EXEC SQL ROLLBACK WORK RELEASE;

  /*
  ** have a nice day
  */
  exit(EXIT_SUCCESS);
}

Add a Reference to the Generated C Source File

The output of the Pro*C precompiler is either C or C++ source code (C in this case). However, because we are working with only a Pro*C source file, we need to tell Visual Studio about the file that will be generated. To do this we add a reference to the not-yet-generated file:

  • Select Project –> Add New Item to open the Add New Item dialog
  • Select "Code" under "Visual C++"
  • Select "C++ File (.cpp)" under "Visual Studio installed templates"
  • Type "proctest.c" in the "Name" textbox and click "Add"
  • Next close the (empty) file after it is created

Add the Pro*C Library File to the Project

  • Select Project –> <project name> Properties… to open the Property Pages dialog
  • Expand the "Configuration Properties" node
  • Expand the "Linker" node
  • Click the "Input" item
  • In the "Additional Dependencies" field, type "orasql11.lib" and click "OK" to save the changes

Add the Custom Build Step

In order for Visual Studio to be able to invoke the Pro*C executable (proc.exe) to create the C source code file, a custom build step needs to be created:

  • Right-click "proctest.pc" in the Solution Explorer and select "Properties" from the context menu
  • Select "Custom Build Step"
  • For "Command Line" type "proc.exe $(ProjectDir)$(InputName).pc"
  • For "Outputs" type "$(ProjectDir)$(InputName).c"
  • Click "OK" to save the custom build step

This step will cause Visual Studio to invoke proc.exe on the input file (proctest.pc) and create an output file called "proctest.c" which will then be compiled as normal. This is really the key step in the whole process I suppose. This custom build step is the "integration" of Pro*C into Visual Studio.
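If you want to see what the precompiler does before relying on the build integration, you can also run the same command by hand in a command-prompt window (assuming the Instant Client directories are on the system path as set up earlier):

C:\temp\proctest>proc.exe proctest.pc

Pro*C reads "proctest.pc" and writes the generated "proctest.c" into the project directory - exactly the work the custom build step automates on each build.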

Build the Sample and Verify

All the hard work is now done and it is time to build the sample!

  • Select Build –> Build Solution

If all has gone well you should see output similar to the following in the output window:

proctest - 0 error(s), 0 warning(s)
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

If there are errors reported you will need to investigate and correct the cause.

Upon completion of a successful build, you can execute the program and verify the results:

C:\temp\proctest\Debug>proctest

Connected to ORACLE as user: hr

Employee ID  First Name            Last Name
===========  ====================  =========================
174          Ellen                 Abel
166          Sundar                Ande
130          Mozhe                 Atkinson

[ snip ]

120          Matthew               Weiss
200          Jennifer              Whalen
149          Eleni                 Zlotkey

Fetched 107 employees.

C:\temp\proctest\Debug>

Conclusion

Whew! That's a lot of work! As I mentioned at the beginning of this post, this is intended to be an introductory look at using Pro*C and Visual C++ 2008 Express Edition. There is, of course, much more that Pro*C can do, and this simple example of selecting from the "employees" table in the "hr" schema is exactly that: a simple example. It is not intended to be a complete tutorial, but perhaps it will be helpful in working with Pro*C and Visual Studio if you choose to do so. You should be able to adapt the steps here to the "full" versions of Visual Studio or the Oracle Client.

If you made it this far, thanks for stopping by. I hope this was helpful in some regard.

 

NOTE: Some comments below were recently deleted by me at the request of the poster. I have, therefore, deleted my responses to those comments as they made no sense on their own. (1-June-2009 approximately 1:10 PM)

One more date trick

Klein Denkraam - Mon, 2009-06-29 04:11

Tyler Muth has a useful addition to the date functions I have published here before.

If you ever want to know how long ago a date is and you want to display it in ‘human readable’ format you (and I) could use his function.

Like this:

select date_text_format(sysdate - 3/86400) the_date from dual;
select date_text_format(sysdate - 5/1440) the_date from dual;
select date_text_format(sysdate - 1/24) the_date from dual;
select date_text_format(sysdate - 3.141549) the_date from dual;
select date_text_format(sysdate - 15) the_date from dual;
select date_text_format(sysdate - 120) the_date from dual;
select date_text_format(sysdate - 365) the_date from dual;
--------------------------------------------------------------------
3 seconds ago
5 minutes ago
1 hour ago
3 days ago
2 weeks ago
4 months ago
1 year ago
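
Tyler Muth's post has the real implementation. Purely to illustrate the idea, a stripped-down sketch of such a function might look like the following (the thresholds are my own guesses, and pluralization and other edge cases are ignored):

create or replace function date_text_format (p_date in date)
  return varchar2
is
  l_days number := sysdate - p_date;  -- elapsed time, in days
begin
  return case
           when l_days < 1/1440 then round(l_days * 86400) || ' seconds ago'
           when l_days < 1/24   then round(l_days * 1440)  || ' minutes ago'
           when l_days < 1      then round(l_days * 24)    || ' hours ago'
           when l_days < 7      then round(l_days)         || ' days ago'
           when l_days < 31     then round(l_days / 7)     || ' weeks ago'
           when l_days < 365    then round(l_days / 30.44) || ' months ago'
           else round(l_days / 365.25) || ' years ago'
         end;
end;
/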

One more for the toolbox.


Are you sure you will be able to activate your standby??

Fairlie Rego - Sun, 2009-06-28 06:50
A couple of weeks ago I faced a scenario where the standby database crashed.

On looking at the standby's alert.log I saw the following message:

***********************************************************
Sat Jun 6 06:48:52 2009
Recovery interrupted!
cannot find needed online log for redo thread 1
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Sat Jun 6 06:48:53 2009
Errors in file /u01/app/oracle/admin/TEST/bdump/test1_mrp0_24533.trc:
ORA-10576: Give up restoring recovered datafiles to consistent state: some error occurred
ORA-16037: user requested cancel of managed recovery operation
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Sat Jun 6 06:48:54 2009
Waiting for MRP0 pid 24533 to terminate
************************************************************

Hmmm... this means that if the standby does not have the redo and cannot get it from the primary, you will not be able to bring the media-fuzzy datafiles online using supported methods.

The same issue is explained in Bug 5956646 as an architectural limitation.

This is a very unlikely scenario, but a possibility nonetheless.

Concepts Guide: 5/27 - Schema Objects

Charles Schultz - Wed, 2009-06-24 15:16
I found that reading the Guide is quite hard if you are already tired. ;-)

As always, I like pictures. Figure 5-1 on page 5-3 does the topic justice, although it would make its point more clearly if the schemas were labelled.

I was not aware of intra-block chaining (p. 5-5) - interesting concept, especially since it does not affect performance (it does not increase the number of I/O calls).

Figure 5-3 is good in that it really helps to see the various pieces of a block and the row, including the headers.

As much as I hate how nulls are handled in Oracle, the one good thing is that Oracle does not even record information for trailing null columns in a row (i.e., the column length is not stored in the block). Except, of course, if you have LONG data - DO NOT USE LONG! =)

I was not aware how table compression actually worked. Now that I know a little more about it, I am surprised the guide did not mention any limitations. For example, if a block has 100% unique data (incompressible), would the symbol table still be built and populated? If not, what is the cut-off? At what point does Oracle deem compression worthwhile, pragmatically?

I have never seen a practical use for nested tables, but I'll keep my eyes open. I jumped to 27-7 as referenced just to see what it said. I still have never seen a practical use for nested tables.

The fact that sessions are "bound" to temp tables was new to me; I did not realize you could not do DDL on a temp table if it is already bound to a session. Kinda makes sense, though. I wonder why they do not simply call it "locking"? =) After all, that is how normal tables are handled.

Object Views really confuse me, not being familiar with the concept. And this being the Concepts Guide, I found that the short little blurb did not really help me much at all.

I also did not realize that one could put declarative constraints on views; an interesting way to propagate metadata information for restricted environments.

The short paragraph on Materialized View Logs did not do the concept any justice. I get the impression that space and/or time was constrained when this section was written. =)

The intro section to Dimensions left my head whirling. I am not a warehousing guy by any means; while I appreciate the extra background and the quasi-example, I find that it deep-dives too quickly for me. And using an example of a year being tied to a year-row is just the most absurd thing I have ever heard. Why not a practical, real-life example that "anyone" can grasp?

Good discussion for sequences; I like how the good is balanced with the bad - the "Caution" is stronger than I expected, but I think very important and am glad to see that the authors made it stand out.

Nice long section on indexes. *grin* I always find it strange to see references to "analyze table" in the 10g documentation, when most of the time I believe they really mean collecting stats, for which we are told to use dbms_stats instead. The intro to deterministic (user-defined) indexes was quite interesting. I would personally consider this an example of "black magic" in Oracle. Another one of those cases where there is a strong lack of practical examples.

Figure 5-7 starts out looking like a great pictorial example; however, I found it to be quite confusing. Actually, what I really want to see is how an index is built, starting with one row. At the very least, it would be helpful to augment the figure with text explaining the function of the values in the branch blocks. However, there is excellent information on how searches are mathematically bounded (big-O notation).

Excellent piece on bitmap indexes; great examples, wonderful discourse. I appreciate the balanced approach to addressing the pros and cons of bitmap indexes, which may at the outset seem to be a panacea for query performance issues. The sidebar on cardinality was very well done as well.

The section on index-organized tables was also quite interesting; however, I wonder why, if they are so highly recommended for OLTP applications, they are not more popular.

Application Domain indexes, and the Oracle Data Cartridge in general, are another area of black magic that I fear ever going back to. I dabbled in it once when attempting to define custom statistics for a function and never looked back. =) I am sure they have their place on some "True Expert"'s shelf, but not here....

Like IOTs, the Concepts Guide does a good job selling Clusters and Hash Clusters as beneficial methods, but I do not see many folks using them in Real Life. Is it merely the learning curve that keeps the standard DBA away from these features? We have a lot of third-party apps; shall we assume that the vendors simply do not have the expertise to utilize these fun but advanced toys?

Interesting stuff nonetheless.

(Integrity) Constraints in a datawarehouse

Klein Denkraam - Wed, 2009-06-24 02:51

In data warehouse land it is not very common to see constraints in the database. I never felt very comfortable with that, but until now I did not get around to analysing why I felt that way. Until I read this article by Tom Kyte. In the article Tom Kyte shows that the CBO (Cost Based Optimizer) can profit from the information that is derived from the presence of constraints by generating better query plans. Better in this case is defined as ‘producing result sets faster’. The examples in the article are not exactly ‘real world’ data warehouse examples. Following Tom Kyte’s line of reasoning I do agree that constraints are capable of improving the performance of queries.

The reasons for not having constraints in a data warehouse are along the lines of ‘I have checked the integrity when I did my ETL, so why would I need constraints to confirm that? And besides, constraints would only delay my ETL because they have to be checked before they are really enabled’. I see a couple of flaws in this reasoning:

  • I suspect that most constraints in a data warehouse would fail validation if they were actually enabled. The quality of the ETL might be good, but is it as good as a constraint would be? I think not.
  • Enabling constraints might take time, but how often do you have to check them? Only when doing the ETL, of course, and I hope that in your DWH the ETL runs during only a small part of the time the DWH is in use (otherwise your DWH has a problem). The rest of the time your DWH is used for querying, and Tom Kyte just showed that querying can be sped up by applying constraints.

Summing up, here are my pros and cons of applying constraints.

Pro:

  • it will improve the data quality of the DWH
  • it can speed up the queries in your DWH (querying it is the purpose of your DWH anyway)

Con:

  • it will take more time to do your ETL (which is only a means to create your DWH)

My conclusion is that I will try to incorporate as many constraints as possible in my next DWH. It also means I will have to be smart enough to enable the constraints at just the right moment during my ETL to keep loading performance acceptable.
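One way to do that (my own example, not from Tom Kyte's article; the table and constraint names are hypothetical) is to declare constraints without paying the validation cost at load time:

alter table sales add constraint fk_sales_cust
  foreign key (cust_id) references customers (cust_id)
  rely disable novalidate;

With QUERY_REWRITE_INTEGRITY set to TRUSTED, the optimizer is allowed to trust such declared-but-unvalidated constraints, so you keep much of the query benefit without revalidating every row during each load.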


The Humble PL/SQL Exception (Part 1a) - The Structure of Stored Subprograms

Tahiti Views - Sun, 2009-06-21 23:52
As I said in my previous post, The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN, there are a lot of nuances surrounding exception handling. That post attracted some comments that I thought deserved a followup post rather than just another comment in response. oraclenerd said (excerpted): "I'm going to have to disagree with you on the internal procedure (in the declaration section)…"

Microsoft To Deprecate System.Data.OracleClient

Mark A. Williams - Fri, 2009-06-19 10:10
I found the following to be an interesting announcement:

System.Data.OracleClient Update

It looks like Microsoft has decided to deprecate System.Data.OracleClient beginning with the .NET 4.0 release.

Of course, I'm more than a little biased when it comes to anything related to Oracle.

For more information and to download ODP.NET, please see the Oracle Data Provider for .NET center on Oracle Technology Network (OTN).

opatch problem on Windows

Yasin Baskan - Fri, 2009-06-19 08:30
There is a note on Metalink explaining that, on Windows, space characters in your ORACLE_HOME variable, the patch location, or the JDK location cause an error when running opatch. Yesterday I saw a strange problem similar to the above case.

If your opatch directory contains space characters you get a strange error. Even though none of the above conditions were present, we got an error like this:

C:\Documents and Settings\test\Desktop\OPatch>opatch lsinventory
Exception in thread "main" java.lang.NoClassDefFoundError: and

OPatch failed with error code = 1

Metalink returns no results for this error. The error is caused by the space characters in "Documents and Settings". When you move the opatch directory to another directory that does not contain spaces in its name, opatch runs without this problem.

Just a note to help in case someone gets the same error.

DBFS

Yasin Baskan - Fri, 2009-06-19 05:32
Yesterday I attended Kevin Closson's Exadata technical deep dive webcast series part 4. It is now available to download here. In it he talks about DBFS, which is a filesystem on top of the Oracle database that can store normal files like text files. DBFS is provided with Exadata and is used to store staging files for the ETL/ELT process. This looks very promising; he cites several tests he conducted and gives performance numbers too. Watch the webcast if you haven't yet.

The Extra Hurdle for Marketing Through Social Media: You Gotta Make 'em Feel

Ken Pulverman - Thu, 2009-06-18 20:47

So we've been chatting recently with a vendor, Corporate Visions. They follow the approach that a message that sticks is one that's wrapped in emotion. It's amazing to see when this technique is executed well. This video that a friend pointed me to is not new; in fact 150k plus people have already seen it. But I think the folks at Grasshopper.com (actually the agency they hired) really nailed this approach.

It's interesting to note how intertwined the notion of making a message stick, something good salespeople have known how to do forever, is with our expectations of new and social media.

Clearly we all want to feel something, and we all have very high expectations of social media in this regard. I think this notion is perhaps an extension of my last post, The Importance of Being Earnest.

So....I now have a request.

Please add comments to this blog with links to messages that you think were made to stick - messages wrapped in emotion. I wanna see what you got.

Go ahead, try to make me cry.... or laugh. Actually, I have a strong preference for laughing.

The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN

Tahiti Views - Thu, 2009-06-18 00:59
Exception handling in PL/SQL is a big subject, with a lot of nuances. Still, you have to start somewhere. Let's take one simple use case for exceptions, and see if it leads to some thoughts about best practices. (Hopefully, this is not the last post in this particular series.) One common pattern I find in PL/SQL procedures is a series of tests early on...

if not_supposed_to_even_be_here() then …

Unleash Oracle ODCI API - OOW09 Voting Session

Marcelo Ochoa - Wed, 2009-06-17 07:28
Oracle Open World Voting session is a new way to create the conference session agenda.
I have submitted two speaker sessions; one, named "Unleash Oracle ODCI API", is ready for voting at the Oracle Mix community.
The Oracle Data Cartridge API is provided to implement much powerful functionality such as new Domain Indexes, pipelined tables, and aggregate functions.
The presentation will include an introduction to this API, showing many of its features using as an example the code of Lucene Domain Index, which is a mix between Java running inside the OJVM and Oracle object types.
Lucene Domain Index is an open source project which integrates the Apache Lucene IR library as a new Domain Index, providing features such as free-text searching, faceting, highlighting, filtering at the index level, multi table/column indexes and more for 10g/11g databases.
Basically I would like to introduce this exciting API, which allows developers to interact directly with the RDBMS engine, adding some examples in Java that are not included in the Oracle documentation.
Well if you want to see this session at OOW09 please click here, see you there....

New in Oracle VM 2.1.5: Web Services API

Sergio's Blog - Tue, 2009-06-16 00:18

Last week, ULN was updated with Oracle VM 2.1.5 RPMs. One of the main new features in Oracle VM 2.1.5 is a web services-based API to perform any of the operations in Oracle VM Manager, for example, create a server pool, add servers, or create virtual machines. Read the Oracle VM Web Services API documentation. ISOs will be published on edelivery.oracle.com/oraclevm soon.

Categories: DBA Blogs

The Importance of Being Earnest in Social Media; or What Facebook and Twitter Should Do to Save Themselves from Becoming Irrelevant

Ken Pulverman - Mon, 2009-06-15 18:57
So who hates their personal e-mail inbox? I do! Yahoo has just become complete spam. I have maxed out the number of filters on the paid account, tagged every possible item that even remotely looks like spam as spam and yet it keeps coming.

According to KGB.com, 183 billion e-mail messages are sent per day. I would give them a shout out, but it looks like this number is from 2007. Also, they didn't give me an answer to the second half of my question for my 99 cent text message investment, which was: how much of this is spam? Maybe the KGB just doesn't know. After all, they lost some of their best spooks to capitalism after the iron curtain fell.

Well wikipedia does and it is free! Wikipedia led me to this reference from the New York Times. Spamalot? Why yes we do. 94% of the time as it turns out. So approximately 2,000,000 e-mail messages are sent every second of every day and 1,880,000 are pure crap that we don't want.

My fiancée's brother actually works at the post office. He told me that the only thing that is really keeping them alive is junk mail. In fact, like e-mail, it is the bulk of what they move these days. I got on the USPS marketers' spam list and they send me all sorts of paper materials telling me how green they are. They actually sent me a large express mail envelope to tell me they weren't going to be sending me the T-shirt they offered me. That they sent later in another large package, in the wrong size of course. Forget about solar power and hydrogen cars. It seems the greenest thing the US Government could do is close the Post Office. (Sorry, future brother-in-law. I'll help you bounce back with a new startup that sells spam filters on late night infomercials using Swedish models that ostensibly made the stuff... oops, that one has been done. Remind me to stop staying up to watch Craig Ferguson.)

So where am I going with this? Well the Post Office is dying a slow death at a rate of one cent price hikes a year and service cutbacks until we all give up. E-mail is almost dead on arrival. Myspace and Friendster lost their mojo before they even tried to stretch to reach my demographic. What do they all have in common? They are filled with crap!

Recently I've been experimenting with feeding content to Twitter. (see The Need for Feed). I am trying to use the technique for good - serving up interesting data sources that people can actually use. I have become painfully aware of the potential to use these techniques for evil though. Last week two guys I went to high school with both crapped in the walled garden that is supposed to be my Facebook account on the same day. They both posted some BS about this new energy drink called efusjon. It's a multi-level marketing company selling some acai berry sugar water. Supposed to save your life not just dangerously elevate your sugar levels and rot your teeth. Apparently part of their "viral" marketing was to get some dudes from my high school to spam me with their fake musings about this garbage.

There you have it. The beginning of the end. One day you'll nod knowingly when you're using Farcebluch.com instead.

Attention all entrepreneurs of Silicon Valley - this is your shining opportunity. Build us a social communication platform that keeps this crap out! Of course we need to buy things to keep whatever it is we are talking about afloat, but can't you at least try to address our interests? If Facebook did this they would know that the only acai berry I consume is made into Pinkberry style frozen yogurt. That's unrealistically specific for the time being, but you get my point.

So what does it mean to be earnest in Social media? It means making a college try to be relevant. Sure we can't all possibly keep up with the information demands of the hungry new communication mediums alone, but we have to try to keep content flowing that is at least interesting to our audience.

I am going to offer up The Cocktail Party Rule for Social Media.

If it is not a reasonable leap from the context or the topic in a group chat at a cocktail party, don't go there.

I send a link to this blog to our corporate Twitter account. I work at Oracle Corporation and market our CRM solutions. I think it is a reasonable leap that someone interested in CRM may be wrestling with the same new marketing concepts I blog about.

On the other hand, if a group of guys is gathered around the punch bowl, Mojito vat, beer tub, or Franzia box (depending on what kind of cocktail party you are having) talking about whether the Palm Pre has a snowball's chance in hell of tarnishing Apple's shine, you don't bring up the fact that your wife, the tennis coach just started selling some acai berry fizzy water out of her trunk.

It's a non sequitur and it is annoying. It's worse than annoying in fact. It's that feeling of trepidation every time you open up your Yahoo inbox, or your mailbox for that matter.

So what does this all mean? The power is in your hands. It's in all of our hands. Just use the Cocktail Party Rule for Social Media and we'll all be fine, and we won't have to keep changing communication mediums every 6-12 months. ...or will we?

See you on Farcebluch.

Be Alert!

Nigel Thomas - Mon, 2009-06-15 16:00
Here's a tale of woe from an organisation I know - anonymised to protect the guilty.

A couple of weeks after a major hardware and operating system upgrade, there was a major foul-up during a weekend batch process. What went wrong? What got missed in the (quite extensive) testing?

The symptom was that batch jobs run under concurrent manager were running late. Very late. In fact, they hadn't run. The external scheduling software had attempted to launch them, but failed. Worse than that, there had been no alerting over the weekend. Operators should have been notified of the failure of critical processes by an Enterprise Management (EM) tool.

Cut to the explanation:

As part of the O/S upgrade, user accounts on the server are now set to be locked out if three or more failed attempts to login are made. Someone in operations-land triggered a lockout on a unix account used to run the concurrent manager. And he didn't report it to anyone to reset it. So that explained the concurrent manager failures.

The EM software that should have woken up the operators also failed. Surprise, surprise: it was using the same (locked-out) unix account.

And finally, the alerting rules recognised all kinds of warnings and errors, but no one had considered the possibility that the EM system itself would fail.

Well, it's only a business system; though a couple of C-level execs got their end of month reports a couple of days late, and there were plenty of red faces, nobody actually died...

Just keep an eye out for those nasty corner cases!

Time for a Change – Upcoming Announcements – Millionth Hit

Venkat Akrishnan - Mon, 2009-06-15 12:35

Well, as the saying goes “Change is the only Constant”, there are quite a few changes coming up on this blog (well, not the blog alone!!!) in the near future. I should be in a position to make an announcement in a week or so, and I am very much looking forward to that. One thing that I can say for sure is that you can expect more of my blog entries in the future :-). More on that next week.

And as luck would have it, while I was writing this, the blog registered its first millionth hit (across a total of 302 blog entries). I have to express and extend my thanks to everyone who has been visiting this blog ever since its inception on the 18th of July 2007. I believe the blog has come a long way since then. I have written at least two blog entries every week since I started, barring a couple of months when I did not write even a single one. When I started to write on BI EE there were only a couple of people writing about it, like Mark (who was very well known in the Oracle BI community even at that time) and Adrian (actually myself and Adrian were discussing this in the BI Forum). Then came along John, who was also very active on the BI Forums. And then came people like Alex (Siebel + BI EE), Christian (BI EE + Essbase) and others who had been working on these products for long but only then started to blog about them.

In the near future, I will primarily be focusing on Hyperion Essbase (a tool that has been really close to my heart, yet one I have not blogged much about), EPM integration, Hyperion Planning/EPMA integration, and BI EE – Essbase integration (more use cases). Hopefully you have found this blog useful, and thanks for stopping by.


Categories: BI & Warehousing

When Backwards Compatibility Goes Too Far

Tahiti Views - Sat, 2009-06-13 19:46
I couldn't help but notice this new article, about holdovers from the earliest days of DOS and even CP/M still showing up in Windows-based development: Zombie Operating Systems and ASP.NET MVC. Personally, I really enjoyed working on the IBM C/C++ compiler back in the day, targeting Windows 95. They licensed the Borland resource editor and I adapted the RTF-format online help, with no RTF specs…

ODTUG 2009

Susan Duncan - Sat, 2009-06-13 04:05
I can hardly believe it's another year (of few posts to my blog) and another ODTUG Kaleidoscope conference is almost upon us. This year the conference is in Monterey so I'm packing my bags and off to Oracle Headquarters in San Francisco tomorrow - then down to the conference on June 20th

If you have the opportunity I'd urge you to try and make it there too. The 'fun' starts on Saturday, when there is a community service day. Last year we painted school classrooms in New Orleans; this year we are helping to restore habitat at Martin Dunes, California’s largest and most intact dune ecosystem. So I'm packing plenty of sunscreen, as my pale English skin isn't used to the California sun! More fun follows the first day of sessions on Sunday - with the second ODTUG Jam Session. Those of you who know Grant Ronald and me know that we are much too shy and retiring to join in that ;-)

But of course, that's not all the fun. The conference is full of interesting and diverse sessions - and I should know, I was part of the panel reviewing papers for the Editor's Choice award - I spent a few evenings reading papers on everything from project management to Oracle to the Holy Grail.

As for me, I'm really excited to be doing two sessions -

5000 tables, 100 schemas, 2000 developers: This will showcase some of the team-working features such as standards and version management, reporting and impact analysis, and the highly usable and scalable data modeling in JDeveloper. I've got some great new functionality to reveal - reporting on your data models, user-defined validation, and declarative compare of versioned database objects.

Tooling up for ALM 2.0 with Oracle Team Productivity Center: If you were lucky enough to be at Oracle World or the UK Oracle User Group conference last year you might have seen a very early incarnation of this project that I've been working on. At ODTUG I'm going to be demoing the very latest code and showing you how to use your ALM repositories from within JDeveloper and how to integrate artifacts from those (maybe) disparate repositories together through Oracle Team Productivity Center. All this and team management too!

Another goal I have for the conference week is to talk to as many JDeveloper users as possible about team working, ALM and SDLC - and to ensure that I get feedback to take back and work on more functionality in JDeveloper to complement the great application development tool we have.

I look forward to seeing you there - or if not, finding other ways to talk to you!



Fusion Tables

Charles Schultz - Fri, 2009-06-12 13:24
So I admit it, I read slashdot (who doesn't?? *grin*). While there are some topics I really do not care about, for some reason "Oracle" in a headline always catches my eye. =) And I am not opposed to Oracle-bashing, because I do a fair share myself.


I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
Ok so I can grapple with n-cube or 4-space. Show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a house fly with a query. And I mean that in the most literal sense. *wink*

Purge old files on Linux/Unix using “find” command

Aviad Elbaz - Wed, 2009-06-10 01:30

I've noticed that one of our interface directories has a lot of old files, some of them were more than a year old. I checked it with our implementers and it turns out that we can delete all files that are older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, this way I won't deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;

It finds and deletes all files in the /interfaces/inbound directory that are older than 60 days.
"-maxdepth 1" -> look for files in the given directory only; don't descend into subdirectories.

After packing it in a shell script I got a request to delete "csv" files only. No problem... I added the "-name" option to the find command:

find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

All csv files in /interface/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert...

I tried several things, like adding another "-name" to the find command:

find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

But no file was deleted. A couple of moments later I understood that I was trying to find csv files which are also xls files... (logically impossible, of course).

After struggling a little with the find command, I managed to make it work:

find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;
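
A small side note: GNU versions of find treat "-maxdepth" as a global option that should precede the tests (newer versions print a warning about the ordering used above), so a cleaner form of the final command is:

find /interfaces/inbound -maxdepth 1 -type f \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -exec rm {} \;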

:-)

Aviad

Categories: APPS Blogs
