Feed aggregator

opatch problem on Windows

Yasin Baskan - Fri, 2009-06-19 08:30
There is a note in Metalink explaining that, on Windows, space characters in your ORACLE_HOME variable, the patch location, or the JDK location cause an error when running opatch. Yesterday I saw a strange problem similar to the above case.

Even when none of the above conditions is present, you can get a strange error if your opatch directory itself contains space characters. We got an error like this:

C:\Documents and Settings\test\Desktop\OPatch>opatch lsinventory
Exception in thread "main" java.lang.NoClassDefFoundError: and

OPatch failed with error code = 1

Metalink returns no results for this error. It is caused by the space characters in "Documents and Settings". When you move the opatch directory to a directory whose name contains no spaces, opatch runs without this problem.
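For example, the quick workaround (the paths here are illustrative) is to move the OPatch directory to a space-free location and run it from there:

C:\> move "C:\Documents and Settings\test\Desktop\OPatch" C:\OPatch
C:\> cd /d C:\OPatch
C:\OPatch> opatch lsinventory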

Just a note to help in case someone gets the same error.


Yasin Baskan - Fri, 2009-06-19 05:32
Yesterday I attended Kevin Closson's Exadata technical deep dive webcast series part 4. It is now available to download here. In it he talks about DBFS, a filesystem on top of the Oracle database that can store ordinary files like text files. DBFS is provided with Exadata and is used to store staging files for the ETL/ELT process. This looks very promising; he cites several tests he conducted and gives performance numbers, too. Watch the webcast if you haven't yet.

The Extra Hurdle for Marketing Through Social Media: You Gotta Make 'em Feel

Ken Pulverman - Thu, 2009-06-18 20:47

So we've been chatting recently with a vendor, Corporate Visions. They follow the approach that a message that sticks is one that's wrapped in emotion. It's amazing to see when this technique is executed well. This video that a friend pointed me to is not new; in fact, 150k-plus people have already seen it. But I think the folks at Grasshopper.com (actually the agency they hired) really nailed this approach.

It's interesting to note how intertwined the notion of making a message stick, something good salespeople have known how to do forever, is with our expectations of new and social media.

Clearly we all want to feel something, and we all have very high expectations of social media in this regard. I think this notion is perhaps an extension of my last post, The Importance of Being Earnest.

So....I now have a request.

Please add comments to this blog with links to messages that you think were made to stick - messages wrapped in emotion. I wanna see what you got.

Go ahead, try to make me cry.... or laugh. Actually, I have a strong preference for laughing.

The Humble PL/SQL Exception (Part 1) - The Disappearing RETURN

Tahiti Views - Thu, 2009-06-18 00:59
Exception handling in PL/SQL is a big subject, with a lot of nuances. Still, you have to start somewhere. Let's take one simple use case for exceptions and see if it leads to some thoughts about best practices. (Hopefully, this is not the last post in this particular series.) One common pattern I find in PL/SQL procedures is a series of tests early on... if not_supposed_to_even_be_here() then ...
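As a minimal sketch of the pattern the excerpt alludes to (my own illustration, not code from the post): guard tests at the top of a procedure can raise a named exception instead of issuing a bare RETURN, so every early exit funnels through one visible handler.

create or replace procedure process_request(p_id in number) is
  e_not_applicable exception;
begin
  -- guard tests up front: raise instead of RETURN, so the exit
  -- path is handled in exactly one place below
  if p_id is null then
    raise e_not_applicable;
  end if;
  -- ... main body of the procedure ...
  null;
exception
  when e_not_applicable then
    null;  -- log and quit quietly; the early RETURN has "disappeared"
end process_request;
/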

Unleash Oracle ODCI API - OOW09 Voting Session

Marcelo Ochoa - Wed, 2009-06-17 07:28
Oracle OpenWorld session voting is a new way to create the conference agenda.
I have submitted two speaker sessions; one, named "Unleash Oracle ODCI API", is ready for voting at the Oracle Mix community.
The Oracle Data Cartridge Interface (ODCI) API is provided to implement a great deal of powerful functionality, such as new domain indexes, pipelined table functions, and aggregate functions.
The presentation will include an introduction to this API, showing many of its features using as an example the code of Lucene Domain Index, which is a mix of Java running inside the OJVM and Oracle object types.
Lucene Domain Index is an open source project that integrates the Apache Lucene IR library as a new domain index, providing features such as free-text searching, faceting, highlighting, filtering at the index level, multi-table/column indexes, and more, for 10g/11g databases.
Basically, I would like to introduce this exciting API, which allows developers to interact directly with the RDBMS engine, and to add some examples in Java that are not included in the Oracle documentation.
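For a taste of the territory, here is a minimal sketch (my illustration, not material from the session) of a pipelined table function; the native PIPELINED form below is the declarative cousin of the ODCITable* interface that the session digs into.

create or replace type num_tab as table of number;
/
create or replace function first_n(n in number) return num_tab pipelined is
begin
  for i in 1 .. n loop
    pipe row (i);  -- stream each row to the caller as it is produced
  end loop;
  return;
end;
/
-- usage: select column_value from table(first_n(5));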
Well, if you want to see this session at OOW09, please click here. See you there...

New in Oracle VM 2.1.5: Web Services API

Sergio's Blog - Tue, 2009-06-16 00:18

Last week, ULN was updated with Oracle VM 2.1.5 RPMs. One of the main new features in Oracle VM 2.1.5 is a web services-based API to perform any of the operations in Oracle VM Manager, for example, create a server pool, add servers, or create virtual machines. Read the Oracle VM Web Services API documentation. ISOs will be published on edelivery.oracle.com/oraclevm soon.

Categories: DBA Blogs

The Importance of Being Earnest in Social Media; or What Facebook and Twitter Should Do to Save Themselves from Becoming Irrelevant

Ken Pulverman - Mon, 2009-06-15 18:57
So who hates their personal e-mail inbox? I do! My Yahoo account has just become complete spam. I have maxed out the number of filters on the paid account and tagged every possible item that even remotely looks like spam as spam, and yet it keeps coming.

According to kgb.com, 183 billion e-mail messages are sent per day. I would give them a shout-out, but it looks like this number is from 2007. Also, they didn't give me an answer to the second half of my question for my 99-cent text message investment, which was: how much of this is spam? Maybe the KGB just doesn't know. After all, they lost some of their best spooks to capitalism after the Iron Curtain fell.

Well, Wikipedia does, and it is free! Wikipedia led me to this reference from the New York Times. Spamalot? Why yes we do, 94% of the time as it turns out. So approximately 2,000,000 e-mail messages are sent every second of every day, and 1,880,000 of them are pure crap that we don't want.

My fiancée's brother actually works at the post office. He told me that the only thing really keeping them alive is junk mail. In fact, like e-mail, it is the bulk of what they move these days. I got on the USPS' marketers spam list, and they send me all sorts of paper materials telling me how green they are. They actually sent me a large express mail envelope to tell me they weren't going to be sending me the T-shirt they offered me. That they sent later in another large package, in the wrong size of course. Forget about solar power and hydrogen cars. It seems the greenest thing the US Government could do is close the Post Office. (Sorry, future brother-in-law. I'll help you bounce back with a new startup that sells spam filters on late-night infomercials using Swedish models that ostensibly made the stuff... oops, that one has been done. Remind me to stop staying up to watch Craig Ferguson.)

So where am I going with this? Well, the Post Office is dying a slow death at a rate of one-cent price hikes a year and service cutbacks until we all give up. E-mail is almost dead on arrival. Myspace and Friendster lost their mojo before they even tried to stretch to reach my demographic. What do they all have in common? They are filled with crap!

Recently I've been experimenting with feeding content to Twitter (see The Need for Feed). I am trying to use the technique for good, serving up interesting data sources that people can actually use. I have become painfully aware of the potential to use these techniques for evil, though. Last week, two guys I went to high school with both crapped in the walled garden that is supposed to be my Facebook account, on the same day. They both posted some BS about this new energy drink called efusjon. It's a multi-level marketing company selling some acai berry sugar water. It's supposed to save your life, not just dangerously elevate your sugar levels and rot your teeth. Apparently, part of their "viral" marketing was to get some dudes from my high school to spam me with their fake musings about this garbage.

There you have it. The beginning of the end. One day you'll nod knowingly when you're using Farcebluch.com instead.

Attention all entrepreneurs of Silicon Valley - this is your shining opportunity. Build us a social communication platform that keeps this crap out! Of course we need to buy things to keep whatever it is we are talking about afloat, but can't you at least try to address our interests? If Facebook did this, they would know that the only acai berry I consume is made into Pinkberry-style frozen yogurt. That's unrealistically specific for the time being, but you get my point.

So what does it mean to be earnest in social media? It means giving it the old college try at being relevant. Sure, we can't all possibly keep up with the information demands of the hungry new communication mediums alone, but we have to try to keep content flowing that is at least interesting to our audience.

I am going to offer up The Cocktail Party Rule for Social Media.

If it is not a reasonable leap from the context or the topic in a group chat at a cocktail party, don't go there.

I send a link to this blog to our corporate Twitter account. I work at Oracle Corporation and market our CRM solutions. I think it is a reasonable leap that someone interested in CRM may be wrestling with the same new marketing concepts I blog about.

On the other hand, if a group of guys is gathered around the punch bowl, Mojito vat, beer tub, or Franzia box (depending on what kind of cocktail party you are having) talking about whether the Palm Pre has a snowball's chance in hell of tarnishing Apple's shine, you don't bring up the fact that your wife, the tennis coach, just started selling some acai berry fizzy water out of her trunk.

It's a non sequitur, and it is annoying. It's worse than annoying, in fact. It's that feeling of trepidation every time you open up your Yahoo inbox, or your mailbox for that matter.

So what does this all mean? The power is in your hands. It's in all of our hands. Just use The Cocktail Party Rule for Social Media and we'll all be fine, and we won't have to change communication mediums every 6-12 months. ...or will we?

See you on Farcebluch.

Be Alert!

Nigel Thomas - Mon, 2009-06-15 16:00
Here's a tale of woe from an organisation I know - anonymised to protect the guilty.

A couple of weeks after a major hardware and operating system upgrade, there was a major foul-up during a weekend batch process. What went wrong? What got missed in the (quite extensive) testing?

The symptom was that batch jobs run under concurrent manager were running late. Very late. In fact, they hadn't run. The external scheduling software had attempted to launch them, but failed. Worse than that, there had been no alerting over the weekend. Operators should have been notified of the failure of critical processes by an Enterprise Management (EM) tool.

Cut to the explanation:

As part of the O/S upgrade, user accounts on the server are now set to be locked out if three or more failed login attempts are made. Someone in operations-land triggered a lockout on a Unix account used to run the concurrent manager, and he didn't report it to anyone to reset it. So that explained the concurrent manager failures.

The EM software that should have woken the operators also failed. Surprise, surprise: it was using the same (locked-out) Unix account.

And finally, the alerting rules recognised all kinds of warnings and errors, but no one had considered the possibility that the EM system itself would fail.

Well, it's only a business system; though a couple of C-level execs got their end-of-month reports a couple of days late, and there were plenty of red faces, nobody actually died...

Just keep an eye out for those nasty corner cases!

Time for a Change – Upcoming Announcements – Millionth Hit

Venkat Akrishnan - Mon, 2009-06-15 12:35

Well, as the saying goes, “Change is the only constant”, and there are quite a few changes coming up on this blog (well, not the blog alone!!!) in the near future. I should be in a position to make an announcement in a week or so, and I am very much looking forward to that. One thing I can say for sure is that you can expect more of my blog entries in the future :-). More on that next week.

And as luck would have it, while I was writing this, the blog registered its millionth hit (across a total of 302 blog entries). I have to express and extend my thanks to anyone and everyone who has been visiting this blog ever since its inception on the 18th of July 2007. I believe the blog has come a long way since then. I have written at least two blog entries every week since I started, barring a couple of months when I did not write a single one. When I started to write on BI EE, there were only a couple of people writing about it, like Mark (who was very well known in the Oracle BI community even at that time) and Adrian (actually, Adrian and I were discussing this at the BI Forum). Then came along John, who was also very active on the BI Forums. And then came people like Alex (Siebel + BI EE), Christian (BI EE + Essbase) and others who had been working on these products for long but had only just started to blog about them.

In the coming future, I will primarily be focusing on Hyperion Essbase (a tool that has been really close to my heart, yet one I have not blogged much about), EPM Integration, Hyperion Planning/EPMA integration, and BI EE – Essbase integration (more use cases). Hopefully you have found this blog useful; thanks for stopping by.

Categories: BI & Warehousing

When Backwards Compatibility Goes Too Far

Tahiti Views - Sat, 2009-06-13 19:46
I couldn't help but notice this new article about holdovers from the earliest days of DOS and even CP/M still showing up in Windows-based development: Zombie Operating Systems and ASP.NET MVC. Personally, I really enjoyed working on the IBM C/C++ compiler back in the day, targeting Windows 95. They licensed the Borland resource editor and I adapted the RTF-format online help, with no RTF specs...

ODTUG 2009

Susan Duncan - Sat, 2009-06-13 04:05
I can hardly believe it's another year (of few posts to my blog) and another ODTUG Kaleidoscope conference is almost upon us. This year the conference is in Monterey, so I'm packing my bags and heading off to Oracle Headquarters in San Francisco tomorrow, then down to the conference on June 20th.

If you have the opportunity, I'd urge you to try and make it there too. The 'fun' starts on Saturday, when there is a community service day. Last year we painted school classrooms in New Orleans; this year we are helping to restore habitat at Martin Dunes, California’s largest and most intact dune ecosystem. So I'm packing plenty of sunscreen, as my pale English skin isn't used to the California sun! More fun after the first day of sessions on Sunday, with the second ODTUG Jam Session. Those of you who know Grant Ronald and me know that we are much too shy and retiring to join in that ;-)

But of course, that's not all the fun. The conference is full of interesting and diverse sessions - and I should know: I was part of the panel reviewing papers for the Editor's Choice award. I spent a few evenings reading papers on everything from project management to Oracle to the Holy Grail.

As for me, I'm really excited to be doing two sessions -

5000 tables, 100 schemas, 2000 developers: This will showcase some of the team-working features, such as standards and version management, reporting and impact analysis, and the highly usable and scalable data modeling in JDeveloper. I've got some great new functionality to reveal: reporting on your data models, user-defined validation, and declarative compare of versioned database objects.

Tooling up for ALM 2.0 with Oracle Team Productivity Center: If you were lucky enough to be at Oracle World or the UK Oracle User Group conference last year, you might have seen a very early incarnation of this project that I've been working on. At ODTUG I'm going to be demoing the very latest code, showing you how to use your ALM repositories from within JDeveloper and how to integrate artifacts from those (maybe) disparate repositories through Oracle Team Productivity Center. All this and team management too!

Another goal I have for the conference week is to talk to as many JDeveloper users as possible about team working, ALM, and SDLC - and to ensure that I get feedback to take back and turn into more functionality in JDeveloper to complement the great application development tool we have.

I look forward to seeing you there - or if not, finding other ways to talk to you!


Fusion Tables

Charles Schultz - Fri, 2009-06-12 13:24
So I admit it, I read slashdot (who doesn't?? *grin*). While there are some topics I really do not care about, for some reason "Oracle" in the headline always catches my eye. =) And I am not opposed to Oracle-bashing, because I do a fair share of it myself.

I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
OK, so I can grapple with an n-cube or 4-space. But show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a house fly with a query. And I mean that in the most literal sense. *wink*

Purge old files on Linux/Unix using “find” command

Aviad Elbaz - Wed, 2009-06-10 01:30

I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out we can delete all files older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab; this way I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -maxdepth 1 -type f -mtime +60 -exec rm {} \;

It finds and deletes all files in the directory /interfaces/inbound that are older than 60 days.
"-maxdepth 1" -> look at files in the given directory only; don't descend into subdirectories.

After packing it in a shell script, I got a request to delete "csv" files only. No problem... I added the "-name" test to the find command:

find /interfaces/inbound -maxdepth 1 -type f -name "*.csv" -mtime +60 -exec rm {} \;

All csv files in /interfaces/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert...

I tried several things, like adding another "-name" test to the find command:

find /interfaces/inbound -maxdepth 1 -type f -name "*.csv" -name "*.xls" -mtime +60 -exec rm {} \;

But no file was deleted. A couple of moments later I understood why: I was asking for csv files which are also xls files... (logically impossible, of course, since the two -name tests are ANDed together).

After struggling a little with the find command, I managed to make it work:

find /interfaces/inbound -maxdepth 1 -type f \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -exec rm {} \;
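To finish the job described in the opening paragraph, the command can then be scheduled with crontab. A hypothetical entry (the wrapper script path and the timing are my own illustration) that runs the purge nightly at 02:00:

0 2 * * * /home/oracle/scripts/purge_interfaces.sh >> /tmp/purge_interfaces.log 2>&1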



Categories: APPS Blogs

Oracle Data Integrator – Connectivity to Open LDAP of Shared Services

Venkat Akrishnan - Tue, 2009-06-09 15:32

One of the features of Oracle Data Integrator is its ability to connect to a lot of disparate data sources using JDBC. One such feature is its ability to expose any LDAP directory as a relational source. If you are on an earlier release of Hyperion EPM like 9.3, where there is no out-of-the-box SSO and authentication/authorization capability for BI EE against OpenLDAP, one approach is to configure BI EE to authenticate against OpenLDAP and then get the user-group information from some other custom table (or by using the DBMS_LDAP package). I had shown how to configure BI EE to authenticate against OpenLDAP here. Since BI EE cannot automatically pick up the groups directly from OpenLDAP in prior releases, one way is to get the user-group information from OpenLDAP and populate it into a set of custom tables. BI EE can then read the groups from the custom tables. The architecture would look something like this
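(As an aside, the DBMS_LDAP route mentioned above can be sketched in a few lines. This is my own illustration, not code from the post: the bind DN, password, and base DN are assumptions, and the port is the Shared Services OpenLDAP port used later in this post.)

declare
  l_ld    dbms_ldap.session;
  l_attrs dbms_ldap.string_collection;
  l_msg   dbms_ldap.message;
  l_entry dbms_ldap.message;
  l_vals  dbms_ldap.string_collection;
  l_ret   pls_integer;
begin
  dbms_ldap.use_exception := true;
  l_ld  := dbms_ldap.init('localhost', 28089);
  l_ret := dbms_ldap.simple_bind_s(l_ld, 'cn=root', 'root');   -- illustrative DN/password
  l_attrs(1) := 'cn';
  l_ret := dbms_ldap.search_s(l_ld, 'dc=css,dc=hyperion,dc=com',  -- assumed base DN
                              dbms_ldap.scope_subtree,
                              '(objectclass=person)', l_attrs, 0, l_msg);
  l_entry := dbms_ldap.first_entry(l_ld, l_msg);
  while l_entry is not null loop
    l_vals := dbms_ldap.get_values(l_ld, l_entry, 'cn');
    if l_vals.count > 0 then
      for i in l_vals.first .. l_vals.last loop
        dbms_output.put_line(l_vals(i));   -- print each user's cn
      end loop;
    end if;
    l_entry := dbms_ldap.next_entry(l_ld, l_entry);
  end loop;
  l_ret := dbms_ldap.unbind_s(l_ld);
end;
/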


Let's look at what it takes to set up the OpenLDAP connectivity from ODI. As a first step, log into Topology Manager and create a new LDAP connection. Choose "Sunopsis JDBC Driver for LDAP" as the JDBC driver.


And then choose the JDBC URL.


To enable connectivity to any LDAP directory, the password has to be passed in an encoded format. To encode the password, run the command below from a command prompt.

java -cp {OracleDI}\oracledi\drivers\snpsldapo.jar com.sunopsis.ldap.jdbc.driver.SnpsLdapEncoder <the password of the OpenLDAP root user>


Copy the encoded password that is printed. Then enter the JDBC URL below.

jdbc:snps:ldap?ldap_url=ldap://localhost:28089/&ldap_password=KILAKMNJKKLHKJJJDDGPGPDB


The basedn above is what is used for searching all the users, groups, roles, etc. In the Data Server definition, enter as the username the root user, who has traversal access to the entire OpenLDAP directory.


You should be able to test the connection to the LDAP server from here. Note that the root user of OpenLDAP is different from the admin user. In fact, the admin user's original cn is not admin; it is 911. admin is the givenName attribute of the 911 user. The root user password is root by default. One behavior I noticed across releases is that in the 9.3 release the admin user had the traverse directory privilege, but in EPM 11 the 911 user does not. In my case, the default root password did not work, so I had to reset the root user password from Shared Services.


As a side note, if you feel that the Shared Services web console does not give you the actual LDAP directory structure, I would recommend a free LDAP client like JXplorer. A screenshot of the Shared Services OpenLDAP directory using this free client is given below.


Now go to the Designer and reverse engineer this data source using selective reverse engineering.



This should convert the entire directory structure to a relational format. From this point onwards, it's a matter of building the interfaces and loading the custom user-group tables. Though the setup above is pretty straightforward, it can come in very handy, especially when you are trying to consolidate or report against multiple user sources.

Categories: BI & Warehousing

Getting a Handle on Logical I/O

Eric S. Emrick - Tue, 2009-06-09 13:43
The other day a colleague brought to my attention an interesting situation related to one of the databases he supports. The database was, rather consistently, experiencing heavy cache buffers chains (CBC) latch wait events while processing against a set of “related” tables. The solution devised to mitigate the CBC latch contention involved range partitioning said tables. I believe proper partitioning can be a very reasonable approach to minimize the probability of CBC latch collisions. Of course, you must know the manner in which your data is accessed and partition accordingly, as you don’t want to sacrifice existing solid execution plans among other considerations.

As it turned out, the partitioning approach did indeed reduce the CBC collisions; albeit, another form of contention surfaced as a corollary: cache buffer handles latch collisions. I must admit I had a very limited knowledge of buffer handles prior to being made aware of this situation. My colleague pointed me to a very interesting article on Jonathan Lewis' site. This article gives a pithy description of buffer handles. I highly recommend you carve out a few minutes to read it. Not only might you learn something about buffer handles, you might be surprised that the more traditional notions of logical I/O do not really suffice. I was first suitably introduced to the "buffer is pinned count" statistic during a Hotsos training course. Essentially, this statistic indicates the presence of latch-reduced logical I/O.

While, generally speaking, Oracle recommends that hidden parameters not be changed, sometimes they need to be modified to accommodate very specific issues your database is encountering. In this particular case, increasing the value of the _db_handles_cached parameter got rid of the newly surfaced collisions on the cache buffer handles latch. I love learning from others’ experiences. It is amazing how many interesting little tales such as this exist. Also, this type of unforeseen contention shifting reinforces the need to properly test production changes - or maybe better said, the ability to properly test production changes.
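If you want to watch these two latches on your own system, here is a quick sketch against the standard v$latch dynamic view (the latch names below are as they appear in 10g/11g):

select name, gets, misses, sleeps
from v$latch
where name in ('cache buffers chains', 'cache buffer handles');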

SQL Features Tutorials: Grouping Rows with GROUP BY (New SQL Snippets Tutorial)

Joe Fuda - Tue, 2009-06-09 13:00
A new tutorial has been added to SQL Snippets exploring the GROUP BY clause and related extensions such as GROUPING SETS, ROLLUP, and CUBE. Group related functions such as GROUP_ID, GROUPING, and GROUPING_ID are also covered.
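For a flavor of what the tutorial covers, here is a small illustrative query (the sales table and its columns are hypothetical): one statement that produces totals by region and product, subtotals by region, and a grand total, with GROUPING flagging the summary rows.

select region, product, sum(amount) as total,
       grouping(region) as g_region, grouping(product) as g_product
from sales
group by grouping sets ((region, product), (region), ())
order by region, product;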

Tablespace selection in interval partitioning

Yasin Baskan - Tue, 2009-06-09 01:58
11g brought interval partitioning, a new partitioning method that eases the maintenance burden of adding new partitions manually. The interval partitioning clause of the create table statement has an option to list tablespace names to be used for the interval partitions. The documentation states that the tablespaces in the list you provide are used in a round-robin manner for new partitions:

Interval partitions are created in the provided list of tablespaces in a round-robin manner.

This does not mean that a newly created partition will always reside in the tablespace that is next on the list. Tablespaces on the list may be skipped if an insert skips intervals. Here is a test case that shows how the list is used.

set lines 200
SQL> r
1 create table t(col1 date,col2 varchar2(100))
2 partition by range (col1)
3 interval(numtoyminterval(1,'MONTH')) store in (tbs1,tbs2,tbs3)
4* (PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2009', 'DD-MM-YYYY')) tablespace tbs1)

Table created.

SQL> r
  1  select partition_name, high_value, tablespace_name
  2* from user_tab_partitions where table_name='T'

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1

The "store in" clause lists tablespaces tbs1, tbs2 and tbs3 to be used for interval partitioning. After the above create table command I now have one partition which resides in tbs1. Let's insert a row which needs to be inserted into a new partition and see which tablespace the partition will be created in.

SQL> insert into t values(to_date('15.01.2009','dd.mm.yyyy'),'jan');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P41                        TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2

The row I inserted maps to the very next interval (one month): its date value is not more than one month beyond the current maximum value. So the next tablespace on the list, tbs2, is used for the new partition.

SQL> insert into t values(to_date('15.02.2009','dd.mm.yyyy'),'feb');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P41                        TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2
SYS_P42                        TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS3

Again I inserted a row for the next month and the partition is created in tbs3, which is the next one on the list.

What happens if I insert a row with a date value that is more than one month after the current maximum partitioning key?

SQL> insert into t values(to_date('15.04.2009','dd.mm.yyyy'),'apr');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P41                        TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2
SYS_P42                        TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS3
SYS_P43                        TO_DATE(' 2009-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2

I skipped March and inserted a value for April. The current maximum key becomes May 1st; we do not see a partition with a high value of Apr 1st. The next tablespace on the list was tbs1, but we see that the new partition is in tbs2, not tbs1. Tbs1 would have been used if I had not skipped an interval when inserting rows.

So, the tablespaces on the list are used in a round-robin manner, but each maps to one interval. If you skip intervals, the tablespaces corresponding to the skipped intervals are skipped too.

This is something to keep in mind if you want strict control over which tablespace holds which partition.

The Need for Feed: RSS & Twitter, Why They May be Like Peanut Butter and Jelly

Ken Pulverman - Mon, 2009-06-08 20:54
The challenge with all things social media is keeping them up to date. Blogs can be a serious drain, as they beg for content like Audrey II in 'Little Shop of Horrors.' If you are using Twitter to communicate with your network and push relevant content, you may be experiencing the same thing. Even keeping up with 140-character posts can be a grind.

Recently, I've started experimenting with pushing some of the content that I feel is most relevant to the people I interact with as well as the public at large. It's been fun to watch the reactions and the results.

http://www.twitter.com/topcrmbloggers is a feed I set up that aggregates what I consider to be the best CRM bloggers out there. You'll note that I have 33 followers in just a week without even trying. The feed of this Twitter handle in turn feeds one of our Netvibes pages. Entropy, yes, but we are serving up content at different potential access points for different users.

I also set up a fun feed of Odd News, which I love to read while on the bus. It started on @pulverman on Twitter and is now featured on @OddNewsNetwork. Starting tomorrow, @pulverman will be an aggregation of the top Marketing 2.0 blogs, as well as my shorter musings on the world of Marketing 2.0. From the same feed as OddNewsNetwork, I select one post at random once a day and feed my personal Twitter feed, @bolobao. This in turn updates Facebook, providing a bit of fun for friends to see and comment on.

On slow news days, I know that at least the feeds I've set up are keeping various sites up-to-date with interesting content.

These are early days in my feed experiments, but I imagine marketers everywhere are struggling with these same issues.

Like a good blog post, I think what you choose to feed to your social marketing efforts will, in the final analysis, be judged on relevance, just like what you post on your blog. If it is relevant - aggregated feeds crafted with the love and personality you apply to a post - it will be appreciated.

I'll no doubt keep tuning my feeds to make them ever more relevant and interesting. I just sent my Yelp reviews to my personal Twitter (@bolobao) and our Delicious posts (www.delicious.com/OracleCRM) to our work Twitter account (@OracleCRM). Perhaps RSS and Twitter aren't quite as cozy as PB&J yet, but I would like to submit that we all have the need for feed, even newspapers, making this combo perhaps your own personal Associated Press.

Oracle Exadata posts #1 TPC-H result

Nigel Thomas - Sun, 2009-06-07 07:01
Greg Rahn's Structured Data blog provides the data that Kevin Closson had to remove from his own blog. From an HP/Oracle point of view, it is a very good performance, reducing cost/QphH by a factor of 4.

However, it is interesting to see that the HP/Oracle solution is still more than 4 times the cost/QphH of the #2-placed Exasol solution (running on Fujitsu Primergy, and reported a year ago), while the absolute performance improvement is relatively slight (1.16M queries/hr against 1.02M).

