Feed aggregator

(Integrity) Constraints in a data warehouse

Klein Denkraam - Wed, 2009-06-24 02:51

In data warehouse land it is not very common to see constraints in the database. I never felt very comfortable with that, but I did not get around to analysing why I felt that way until I read this article by Tom Kyte. In the article Tom Kyte shows that the CBO (Cost Based Optimizer) can use the information conveyed by constraints to generate better query plans, where 'better' means 'producing result sets faster'. The examples in the article are not exactly 'real world' data warehouse examples, but following Tom Kyte's line of reasoning I do agree that constraints are capable of improving the performance of queries.

The reasons for not having constraints in a data warehouse are along the lines of ‘I have checked the integrity when I did my ETL, so why would I need constraints to confirm that? And besides, constraints would only delay my ETL because they have to be checked before they are really enabled’. I see a couple of flaws in this reasoning:

  • I suspect that in most data warehouses a fair number of constraints would turn out to be violated if you actually tried to enable them. The quality of the ETL might be good, but is it as good as an enforced constraint would be? I think not.
  • Enabling constraints might take time, but how often do you have to check them? Only when doing the ETL, of course. In a healthy DWH the ETL window is only a small part of the time the DWH is in use; otherwise your DWH has a bigger problem. The rest of the time the DWH is being queried, and Tom Kyte just showed that querying can be sped up by constraints.

Summing up, my pros and cons of applying constraints:

Pro:

  • it will improve the data quality of the DWH
  • it can speed up the queries in your DWH (querying it is the purpose of your DWH anyway)

Con:

  • it will take more time to do your ETL (which is only a means to create your DWH)

My conclusion is that I will try to incorporate as many constraints as possible in my next DWH. It also means I will have to be smart enough to enable the constraints at just the right moment during my ETL to keep loading performance acceptable.
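
One common way to square this circle (just a sketch; the table, column and constraint names below are made up, not taken from a real DWH) is to add the constraint after the load as RELY ENABLE NOVALIDATE, so the optimizer can use the relationship without the cost of validating all existing rows, and to validate it later if the load window allows:

ALTER TABLE sales
  ADD CONSTRAINT fk_sales_product
  FOREIGN KEY (prod_id) REFERENCES products (prod_id)
  RELY ENABLE NOVALIDATE;

-- Allow the optimizer to trust RELY constraints, e.g. for query rewrite:
ALTER SESSION SET query_rewrite_integrity = TRUSTED;

-- If the load window allows it, pay the validation cost once afterwards:
ALTER TABLE sales MODIFY CONSTRAINT fk_sales_product VALIDATE;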


Avoiding PLS-00436 with FORALL

Adrian Billington - Mon, 2009-06-22 03:00
Workarounds to the FORALL PLS-00436 implementation restriction. July 2005 (updated June 2009)

PL/SQL functions and CBO costing

Adrian Billington - Mon, 2009-06-22 03:00
Associating statistics with PL/SQL functions for greater CBO accuracy. June 2009

The Humble PL/SQL Exception (Part 1a) - The Structure of Stored Subprograms

Tahiti Views - Sun, 2009-06-21 23:52
As I said in my previous post, The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN, there are a lot of nuances surrounding exception handling. That post attracted some comments that I thought deserved a follow-up post rather than just another comment in response. oraclenerd said (excerpted): I'm going to have to disagree with you on the internal procedure (in the declaration section)...

Microsoft To Deprecate System.Data.OracleClient

Mark A. Williams - Fri, 2009-06-19 10:10
I found the following to be an interesting announcement:

System.Data.OracleClient Update

It looks like Microsoft has decided to deprecate System.Data.OracleClient beginning with the .NET 4.0 release.

Of course, I'm more than a little biased when it comes to anything related to Oracle.

For more information and to download ODP.NET, please see the Oracle Data Provider for .NET center on Oracle Technology Network (OTN).

opatch problem on Windows

Yasin Baskan - Fri, 2009-06-19 08:30
There is a note on Metalink explaining that, on Windows, having space characters in your ORACLE_HOME variable, the patch location or the JDK location causes an error when running opatch. Yesterday I saw a strange problem similar to that case.

It turns out that if the path to your OPatch directory contains space characters you also get a strange error. Even though none of the above conditions were present, we got an error like this:

C:\Documents and Settings\test\Desktop\OPatch>opatch lsinventory
Exception in thread "main" java.lang.NoClassDefFoundError: and

OPatch failed with error code = 1

Metalink returns no results for this error. It is caused by the space characters in "Documents and Settings". When you move the OPatch directory to a directory whose path does not contain spaces, opatch runs without this problem.

Just a note to help in case someone gets the same error.

DBFS

Yasin Baskan - Fri, 2009-06-19 05:32
Yesterday I attended part 4 of Kevin Closson's Exadata technical deep dive webcast series. It is now available to download here. In it he talks about DBFS, which is a filesystem on top of the Oracle database that can store normal files like text files. DBFS is provided with Exadata and is used to store staging files for the ETL/ELT process. This looks very promising; he cites several tests he conducted and gives performance numbers too. Watch the webcast if you haven't yet.

The Extra Hurdle for Marketing Through Social Media: You Gotta Make 'em Feel

Ken Pulverman - Thu, 2009-06-18 20:47

So we've been chatting recently with a vendor, Corporate Visions. They follow the approach that a message that sticks is one that's wrapped in emotion. It's amazing to see when this technique is executed well. This video that a friend pointed me to is not exactly new; in fact, 150k-plus people have already seen it. But I think the folks at Grasshopper.com (actually the agency they hired) really nailed this approach.

It's interesting to note how intertwined the notion of making a message stick (something good salespeople have known how to do forever) is with our expectations associated with new and social media.

Clearly we all want to feel something, and we all have very high expectations of social media in this regard. I think this notion is perhaps an extension of my last post, The Importance of Being Earnest.

So....I now have a request.

Please add comments to this blog with links to messages that you think were made to stick - messages wrapped in emotion. I wanna see what you got.

Go ahead, try to make me cry.... or laugh. Actually, I have a strong preference for laughing.

Setting cardinality for pipelined and table functions

Adrian Billington - Thu, 2009-06-18 14:17
Various methods for setting accurate cardinality statistics for table/pipelined functions. June 2009

The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN

Tahiti Views - Thu, 2009-06-18 00:59
Exception handling in PL/SQL is a big subject, with a lot of nuances. Still, you have to start somewhere. Let's take one simple use case for exceptions, and see if it leads to some thoughts about best practices. (Hopefully, this is not the last post in this particular series.) One common pattern I find in PL/SQL procedures is a series of tests early on... if not_supposed_to_even_be_here() then...

Unleash Oracle ODCI API - OOW09 Voting Session

Marcelo Ochoa - Wed, 2009-06-17 07:28
The Oracle Open World voting session is a new way to create the conference session agenda.
I have submitted two speaker sessions; one, named "Unleash Oracle ODCI API", is ready for voting in the Oracle Mix community.
The Oracle Data Cartridge Interface (ODCI) API is provided to implement powerful functionality such as new domain indexes, pipelined table functions, and aggregate functions.
The presentation will include an introduction to this API, showing many of its features using as an example the code of Lucene Domain Index, which is a mix of Java running inside the OJVM and Oracle object types.
Lucene Domain Index is an open source project which integrates the Apache Lucene IR library as a new domain index, providing features such as free-text searching, faceting, highlighting, filtering at index level, multi-table/column indexes and more for 10g/11g databases.
Basically, I would like to introduce this exciting API, which allows developers to interact directly with the RDBMS engine, and to add some examples in Java that are not included in the Oracle documentation.
Well, if you want to see this session at OOW09, please click here. See you there...

New in Oracle VM 2.1.5: Web Services API

Sergio's Blog - Tue, 2009-06-16 00:18

Last week, ULN was updated with Oracle VM 2.1.5 RPMs. One of the main new features in Oracle VM 2.1.5 is a web services-based API to perform any of the operations in Oracle VM Manager, for example, create a server pool, add servers, or create virtual machines. Read the Oracle VM Web Services API documentation. ISOs will be published on edelivery.oracle.com/oraclevm soon.

Categories: DBA Blogs

The Importance of Being Earnest in Social Media; or What Facebook and Twitter Should Do to Save Themselves from Becoming Irrelevant

Ken Pulverman - Mon, 2009-06-15 18:57
So who hates their personal e-mail inbox? I do! Yahoo has just become complete spam. I have maxed out the number of filters on the paid account, tagged every possible item that even remotely looks like spam as spam and yet it keeps coming.

According to KGB.com, 183 billion e-mail messages are sent per day. I would give them a shout out, but it looks like this number is from 2007. Also, for my 99-cent text-message investment, they didn't answer the second half of my question, which was: how much of this is spam? Maybe the KGB just doesn't know. After all, they lost some of their best spooks to capitalism after the iron curtain fell.

Well, Wikipedia does, and it is free! Wikipedia led me to this reference from the New York Times. Spamalot? Why yes we do: 94% of the time, as it turns out. So approximately 2,000,000 e-mail messages are sent every second of every day, and 1,880,000 of them are pure crap that we don't want.

My fiancée's brother actually works at the post office. He told me that the only thing that is really keeping them alive is junk mail. In fact, like e-mail, it is the bulk of what they move these days. I got on the USPS marketers' spam list and they send me all sorts of paper materials telling me how green they are. They actually sent me a large express mail envelope to tell me they weren't going to be sending me the T-shirt they offered me. They sent the T-shirt later in another large package, in the wrong size of course. Forget about solar power and hydrogen cars. It seems the greenest thing the US Government could do is close the Post Office. (Sorry, future brother-in-law. I'll help you bounce back with a new startup that sells spam filters on late night infomercials using Swedish models that ostensibly made the stuff... oops, that one has been done. Remind me to stop staying up to watch Craig Ferguson.)

So where am I going with this? Well the Post Office is dying a slow death at a rate of one cent price hikes a year and service cutbacks until we all give up. E-mail is almost dead on arrival. Myspace and Friendster lost their mojo before they even tried to stretch to reach my demographic. What do they all have in common? They are filled with crap!

Recently I've been experimenting with feeding content to Twitter (see The Need for Feed). I am trying to use the technique for good, serving up interesting data sources that people can actually use. I have become painfully aware of the potential to use these techniques for evil, though. Last week two guys I went to high school with both crapped in the walled garden that is supposed to be my Facebook account, on the same day. They both posted some BS about this new energy drink called efusjon. It's a multi-level marketing company selling some acai berry sugar water. It's supposed to save your life, not just dangerously elevate your sugar levels and rot your teeth. Apparently part of their "viral" marketing was to get some dudes from my high school to spam me with their fake musings about this garbage.

There you have it. The beginning of the end. One day you'll nod knowingly when you're using Farcebluch.com instead.

Attention all entrepreneurs of Silicon Valley - this is your shining opportunity. Build us a social communication platform that keeps this crap out! Of course we need to buy things to keep whatever it is we are talking about afloat, but can't you at least try to address our interests? If Facebook did this they would know that the only acai berry I consume is made into Pinkberry style frozen yogurt. That's unrealistically specific for the time being, but you get my point.

So what does it mean to be earnest in Social media? It means making a college try to be relevant. Sure we can't all possibly keep up with the information demands of the hungry new communication mediums alone, but we have to try to keep content flowing that is at least interesting to our audience.

I am going to offer up The Cocktail Party Rule for Social Media.

If it is not a reasonable leap from the context or the topic in a group chat at a cocktail party, don't go there.

I send a link to this blog to our corporate Twitter account. I work at Oracle Corporation and market our CRM solutions. I think it is a reasonable leap that someone interested in CRM may be wrestling with the same new marketing concepts I blog about.

On the other hand, if a group of guys is gathered around the punch bowl, Mojito vat, beer tub, or Franzia box (depending on what kind of cocktail party you are having) talking about whether the Palm Pre has a snowball's chance in hell of tarnishing Apple's shine, you don't bring up the fact that your wife, the tennis coach, just started selling some acai berry fizzy water out of her trunk.

It's a non sequitur and it is annoying. It's worse than annoying, in fact. It's that feeling of trepidation every time you open up your Yahoo inbox, or your mailbox for that matter.

So what does this all mean? The power is in your hands. It's in all of our hands. Just use The Cocktail Party Rule for Social Media and we'll all be fine, and we won't have to keep changing communication mediums every 6-12 months. ...Or will we?

See you on Farcebluch.

Be Alert!

Nigel Thomas - Mon, 2009-06-15 16:00
Here's a tale of woe from an organisation I know - anonymised to protect the guilty.

A couple of weeks after a major hardware and operating system upgrade, there was a major foul-up during a weekend batch process. What went wrong? What got missed in the (quite extensive) testing?

The symptom was that batch jobs run under concurrent manager were running late. Very late. In fact, they hadn't run. The external scheduling software had attempted to launch them, but failed. Worse than that, there had been no alerting over the weekend. Operators should have been notified of the failure of critical processes by an Enterprise Management (EM) tool.

Cut to the explanation:

As part of the O/S upgrade, user accounts on the server are now set to be locked out after three or more failed login attempts. Someone in operations-land triggered a lockout on a unix account used to run the concurrent manager, and he didn't report it to anyone who could reset it. So that explained the concurrent manager failures.

The EM software that should have woken up the operators also failed. Surprise, surprise: it was using the same (locked-out) unix account.

And finally, the alerting rules recognised all kinds of warnings and errors, but no one had considered the possibility that the EM system itself would fail.

Well, it's only a business system; though a couple of C-level execs got their end of month reports a couple of days late, and there were plenty of red faces, nobody actually died...

Just keep an eye out for those nasty corner cases!

Time for a Change – Upcoming Announcements – Millionth Hit

Venkat Akrishnan - Mon, 2009-06-15 12:35

Well, as the saying goes, "change is the only constant", and there are quite a few changes coming up on this blog (well, not the blog alone!) in the near future. I should be in a position to make an announcement in a week or so, and I am very much looking forward to that. One thing I can say for sure is that you can expect more of my blog entries in the future :-). More on that next week.

And as luck would have it, while I was writing this, the blog registered its millionth hit (across a total of 302 blog entries). I would like to express and extend my thanks to anyone and everyone who has been visiting this blog ever since its inception on the 18th of July 2007. I believe the blog has come a long way since then. I have written at least two blog entries every week since I started, barring a couple of months when I did not write a single one. When I started to write about BI EE there were only a couple of people writing about it, like Mark (who was very well known in the Oracle BI community even at that time) and Adrian (actually, Adrian and I were discussing this at the BI Forum). Then came along John, who was also very active on the BI Forums. And then came people like Alex (Siebel + BI EE), Christian (BI EE + Essbase) and others who have been working on these products for a long time but have only now started to blog about them.

In the future, I will be primarily focusing on Hyperion Essbase (a tool that has been really close to my heart but that I have not blogged much about), EPM integration, Hyperion Planning/EPMA integration, and BI EE – Essbase integration (more use cases). Hopefully you have found this blog useful, and thanks for stopping by.


Categories: BI & Warehousing

When Backwards Compatibility Goes Too Far

Tahiti Views - Sat, 2009-06-13 19:46
I couldn't help but notice this new article about holdovers from the earliest days of DOS and even CP/M still showing up in Windows-based development: Zombie Operating Systems and ASP.NET MVC. Personally, I really enjoyed working on the IBM C/C++ compiler back in the day, targeting Windows 95. They licensed the Borland resource editor and I adapted the RTF-format online help, with no RTF specs...

ODTUG 2009

Susan Duncan - Sat, 2009-06-13 04:05
I can hardly believe it's another year (of few posts to my blog) and another ODTUG Kaleidoscope conference is almost upon us. This year the conference is in Monterey so I'm packing my bags and off to Oracle Headquarters in San Francisco tomorrow - then down to the conference on June 20th

If you have the opportunity I'd urge you to try and make it there too. The 'fun' starts off on Saturday, when there is a community service day. Last year we painted school classrooms in New Orleans; this year we are helping to restore habitat at Martin Dunes, California's largest and most intact dune ecosystem. So I'm packing plenty of sunscreen, as my pale English skin isn't used to the California sun! There's more fun after the first day of sessions on Sunday, with the second ODTUG Jam Session. Those of you who know Grant Ronald and me know that we are much too shy and retiring to join in that ;-)

But of course, that's not all the fun. The conference is full of interesting and diverse sessions - and I should know, I was part of the panel reviewing papers for the Editor's Choice award - I spent a few evenings reading papers on everything from project management to Oracle to the Holy Grail.

As for me, I'm really excited to be doing two sessions -

5000 tables, 100 schemas, 2000 developers: This will showcase some of the team-working features, such as standards and version management, reporting and impact analysis, and the highly usable and scalable data modeling in JDeveloper. I've got some great new functionality to reveal: reporting on your data models, user-defined validation, and declarative compare of versioned database objects.

Tooling up for ALM 2.0 with Oracle Team Productivity Center: If you were lucky enough to be at Oracle World or the UK Oracle User Group conference last year you might have seen a very early incarnation of this project that I've been working on. At ODTUG I'm going to be demoing the very latest code and showing you how to use your ALM repositories from within JDeveloper and how to integrate artifacts from those (maybe) disparate repositories together through Oracle Team Productivity Center. All this and team management too!

Another goal I have for the conference week is to talk to as many JDeveloper users as possible about team working, ALM and SDLC, and to ensure that I get feedback to take back and work on more functionality in JDeveloper to complement the great application development tool we have.

I look forward to seeing you there - or if not, finding other ways to talk to you!



Fusion Tables

Charles Schultz - Fri, 2009-06-12 13:24
So I admit it, I read Slashdot (who doesn't?? *grin*). While there are some topics I really do not care about, for some reason "Oracle" in a headline does catch my eye. =) And I am not opposed to Oracle-bashing, because I do a fair share myself.


I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
Ok so I can grapple with n-cube or 4-space. Show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a house fly with a query. And I mean that in the most literal sense. *wink*

Purge old files on Linux/Unix using “find” command

Aviad Elbaz - Wed, 2009-06-10 01:30

I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out that we can delete all files that are older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, so I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;

It finds and deletes all files in the directory /interfaces/inbound that are older than 60 days.
"-maxdepth 1" -> look for files in this directory only; don't descend into subdirectories.

After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added a "-name" test to the find command:

find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

All csv files in /interfaces/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to the "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert...

I tried several things, like adding another "-name" test to the find command:

find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

But no files were deleted. A couple of moments later I understood that I was asking for files that are both csv and xls files... (logically incorrect, of course).

After struggling a little with the find command, I managed to make it work:

find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;

:-)
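
For completeness, here is roughly how it could be wrapped up and scheduled, as I mentioned at the start (just a sketch: the script location, log file and schedule below are placeholders, not our actual setup):

#!/bin/sh
# purge_interfaces.sh - delete csv/xls files older than 60 days from the inbound interface directory
find /interfaces/inbound -maxdepth 1 -type f \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -exec rm {} \;

# Example crontab entry - run the script every night at 02:00:
# 0 2 * * * /home/oracle/scripts/purge_interfaces.sh >> /tmp/purge_interfaces.log 2>&1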

Aviad

Categories: APPS Blogs
