
Fusion Middleware

Memories of the way we were...

Greg Pavlik - Sat, 2014-05-31 15:13
The fascinating thing about Hadoop is the obviousness of its evolutionary needs. For example, MapReduce coupled with reliable scale-out storage had a powerful - even revolutionary - effect for organizations with both large volumes of data and multi-structured data. Out of the gate, Hadoop unlocked data "applications" that were for all intents and purposes unimplementable before. At the same time, it didn't take much imagination to see that separating the compute model from resource management would be essential for future applications that did not fit well with MapReduce itself. It took a lot of work and care to get YARN defined, implemented and hardened, but the need for YARN itself was fairly obvious. Now it is here and Hadoop is no longer just about "batch" data processing.

Note, however, that it takes a lot of work to make these evolutionary changes available. In some cases, bolt-on solutions have emerged to fill the gap. For key-value data management, HBase is a perfect example. Several years ago, Eric Baldeschwieler was pointing out that HDFS could have filled that role. I think he was right, but the time it would take to get "HBase-type" functionality implemented via HDFS would have been a very long path indeed. In that case, the community filled the gap with HBase, and it is being "back integrated" into Hadoop via YARN in a way that will make for a happier co-existence.

Right now we are seeing multiple new bolt-on attempts to add functionality to Hadoop. For example, there are projects to add MPP databases on top of Hadoop itself. It's pretty obvious that this is at best a stopgap again - and one that comes at a pretty high price. I don't know of anyone who seriously thinks that a bolt-on MPP is ultimately the right model for the Hadoop ecosystem. Since the open source alternatives look to be several years away from being "production ready", that raises an interesting question: is Hadoop evolution moving ahead at a similar or even more rapid rate to provide a native solution - a solution that will be more scalable, more adaptive and more open to a wider range of use cases and applications, including alternative declarative languages and compute models?

I think the answer is yes: while SQL on Hadoop via Hive is really the only open source game in town for production use cases - and it's gotten some amazing performance gains in the first major iteration on Tez that we'll talk more about in the coming days - it's clear that the Apache communities are beginning to deliver a new series of building blocks for data management at scale and speed: Optiq's cost-based optimizer; Tez for structuring multi-node operator execution; ORC and vectorization for optimal storage and compute; HCat for DDL. But what's missing? Memory management. And man, has it ever been missing - that should have been obvious as well (and it was - one reason that so many people are interested in Spark for efficient algorithm development).
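
To make those building blocks concrete, here is a minimal sketch of what they look like from a client's point of view. It assumes a HiveServer2 endpoint reachable over the standard Hive JDBC driver and a Hive build where Tez and vectorized execution are available; the host, credentials and table below are placeholders, not anything taken from the posts referenced here.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: run a Hive query on Tez with vectorized execution over an
    // ORC-backed table. Assumes the Hive JDBC driver jar is on the classpath;
    // endpoint, credentials and table definition are placeholders.
    public class HiveOnTezSketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:hive2://localhost:10000/default";            // placeholder HiveServer2 endpoint
            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("SET hive.execution.engine=tez");               // Tez instead of classic MapReduce
                stmt.execute("SET hive.vectorized.execution.enabled=true");  // vectorized operator pipeline
                stmt.execute("CREATE TABLE IF NOT EXISTS web_logs ("
                           + "ts TIMESTAMP, url STRING, bytes BIGINT) "
                           + "STORED AS ORC");                               // columnar ORC storage
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT url, SUM(bytes) FROM web_logs GROUP BY url")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                    }
                }
            }
        }
    }

The optimizer, execution engine and storage format are separate building blocks behind that same SQL surface, which is part of why the remaining gap - memory management - stands out.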

What we've seen so far are two extremes when it comes to supporting memory management (especially for SQL) - all disk and all memory. An obvious point here is that neither is ultimately right for Hadoop. This is a long-winded intro to point to two interrelated pieces by Julian Hyde and Sanjay Radia unveiling a model that is being introduced across multiple components, called Discardable In-memory Materialized Query (DIMMQ). Once you see this model, it becomes obvious that the future of Hadoop for SQL - and not just SQL - is being implemented in real time. Check out both blog posts:

http://hortonworks.com/blog/dmmq/

http://hortonworks.com/blog/ddm/
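
For a rough intuition of what "discardable" means here - this is only a conceptual illustration, not the DIMMQ design from those posts - think of a materialized query result held behind a soft reference: the JVM is free to reclaim it under memory pressure, and the consumer falls back to recomputing from the data on disk. A minimal Java sketch of that idea:

    import java.lang.ref.SoftReference;
    import java.util.function.Supplier;

    // Conceptual illustration of "discardable" materialization: the cached result
    // may be reclaimed by the JVM under memory pressure; losing it costs time
    // (a recompute), never correctness. Not the actual DIMMQ implementation.
    public class DiscardableMaterialization<T> {
        private final Supplier<T> recompute;                     // e.g. re-runs the query against HDFS
        private volatile SoftReference<T> cached = new SoftReference<>(null);

        public DiscardableMaterialization(Supplier<T> recompute) {
            this.recompute = recompute;
        }

        public T get() {
            T value = cached.get();
            if (value == null) {                                 // discarded, or never materialized
                value = recompute.get();                         // slow path: rebuild from durable data
                cached = new SoftReference<>(value);             // re-materialize in memory
            }
            return value;
        }
    }

The in-memory copy is purely an optimization over the durable data, which is exactly what makes it safe to throw away.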


MDM isn't about data quality, it's about collaboration

Steve Jones - Tue, 2014-05-27 09:00
I'm going to state a sacrilegious position for a moment: the quality of data isn't a primary goal in Master Data Management. Now, before anyone makes the perfectly correct 'Garbage In, Garbage Out' statement, let me explain.  Data Quality is certainly something that MDM can help with, but it's not actually the primary aim of MDM. MDM is about enabling collaboration, and collaboration is about the cross-reference
Categories: Fusion Middleware

Data Lakes will replace EDWs - a prediction

Steve Jones - Fri, 2014-05-23 14:14
Over the last few years there has been a trend of increased spending on BI, and that trend isn't going away.  The analyst predictions, however, have understandably been based on the mentality that the choice was between a traditional EDW/DW model and Hadoop.  With the new 'Business Data Lake' type of hybrid approach it's pretty clear that the shift is underway for all vendors to have a hybrid
Categories: Fusion Middleware

Lipstick on the iceberg - why the local view matters for IT evolution

Steve Jones - Thu, 2014-05-22 12:00
There is a massive amount of IT hype that is focused on what people see: it's about the agile delivery of interfaces, about reporting, visualisation and interactional models.  If you could weight hype then it is quite clear that 95% of all IT is about this area.  It's why we need development teams working hand-in-hand with the business, it's why animations and visualisation are massively important.
Categories: Fusion Middleware

How to select a Hadoop distro - stop thinking about Hadoop

Steve Jones - Thu, 2014-05-22 09:00
Sqoop, Flume, Pig, ZooKeeper.  Do these mean anything to you?  If they do then the odds are you are looking at Hadoop.  The thing is that while that was cool a few years ago, it really is time to face the fact that HDFS is a commodity, MapReduce is interesting but not feasible for most users, and the real question is how we turn all that raw data in HDFS into something we can actually use. That means
Categories: Fusion Middleware

A Framework Approach to Building an Oracle WebCenter Intranet, Extranet, or Portal

Whether you already have or are planning to build an Oracle WebCenter-based intranet, extranet or customer portal, its overall success hinges on its time to market, ability to scale, and the presence of user productivity tools. Attend this webinar to see how Fishbowl’s Portal Solution Accelerator (PSA) can provide an extensible framework that bundles reusable templates and page layouts, standards-based portlets, and in-place security administration. Join us to discover how this framework can be applied to build or improve your corporate intranet, partner extranet, or customer portal.

Date: Thursday, May 22nd
Time: 1:00 PM EST
Register: https://www2.gotomeeting.com/register/236838418

 

The post A Framework Approach to Building an Oracle WebCenter Intranet, Extranet, or Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Pivotal GemFireXD*Web, Web based Interface For GemFireXD

Pas Apicella - Thu, 2014-05-15 04:59
Pivotal GemFire XD takes GemFire's proven in-memory intelligence and integrates it with Pivotal HD 2.0 and HAWQ. This enables businesses to make prescriptive decisions in real time, such as stock trading, fraud detection, intelligence for energy companies, or routing for the telecom industry.

You can read more about GemFireXD and its integration with PHD here.

https://www.gopivotal.com/big-data/pivotal-hd

While the development team worked on GemFireXD, I produced another open source, web-based tool named GemFireXD*Web. It's available with source code here:

https://github.com/papicella/GemFireXD-Web

GemFireXD*Web enables schema management from a web browser, with features as follows (a minimal JDBC sketch of the kind of SQL behind these dialogs appears after the list):

  • Create all Schema Objects via Dialogs
  • Generate DDL
  • Run multiple SQL Commands, upload SQL files
  • Browse / Administer Objects
  • Browse / Administer HDFS stores/tables 
  • Browse / Administer Async Event Listeners
  • View data distribution
  • View Members / start parameters


etc…
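
Under the covers the dialogs issue ordinary GemFireXD SQL over JDBC. As a point of reference, here is a minimal sketch of that kind of interaction; it assumes the GemFireXD thin-client JDBC jar is on the classpath and a server listening on the default port, and the table definition is a placeholder rather than anything the tool generates verbatim.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical sketch: connect to GemFireXD over the thin-client JDBC driver
    // and run the kind of DDL and queries GemFireXD*Web issues from its dialogs.
    // Host, port and table definition are placeholders.
    public class GemFireXDSketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:gemfirexd://localhost:1527/";       // assumed server/locator endpoint
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE customers ("
                           + "id INT PRIMARY KEY, name VARCHAR(100)) "
                           + "PARTITION BY PRIMARY KEY");           // spread rows across members
                stmt.execute("INSERT INTO customers VALUES (1, 'Pivotal')");
                try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1) + " " + rs.getString(2));
                    }
                }
            }
        }
    }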

http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Berdyaev on Dostoevsky

Greg Pavlik - Sun, 2014-05-04 14:36
I just finished reading Nikolai Berdyaev's interpretative study of Dostoevsky. On the one hand, this is a work that will be difficult to digest without having read at least the four major novels: Demons (or The Possessed), Crime and Punishment, The Idiot and The Brothers Karamazov - as well, I might add, as The Adolescent (or A Raw Youth, as it is sometimes titled). Berdyaev pursues his themes by reference to both characters and arguments that appear in those works. On the other hand, he does such a fine job of concisely presenting major thematic elements and positions that may be non-obvious to American or English-language readers that I would, with some hesitation, recommend it as a "preface" to reading Dostoevsky's novels. In the latter case, some substantial portion of the discussion would be lost on the reader, but the context it provides overall would certainly be helpful to those approaching the great author's oeuvre for the first time. In particular, the theme of "spiritual" freedom as a necessary condition for human development seems to be a correct reading of Dostoevsky, and Berdyaev works this idea out from a number of angles. And happily, Berdyaev is quite comfortable criticizing some of Dostoevsky's pointedly bad ideas as well.

There are two things I would note as well - Berdyaev is a fascinating critic and character in the development of Russian philosophy, specifically the religiously inspired philosophers who were in some way heirs to Soloviev; Berdyaev operates in the role of a philosophical social commentator as opposed to working in a primarily theological tradition - in this respect he is very different from contemporaries like Sergius Bulgakov or Pavel Florensky. I am most familiar with him through his earlier work, including The Meaning of the Creative Act. This book, of course, echoes Berdyaev's thinking, but he is quite clear in distinguishing his critique from the views of his subject, which makes the book all the more valuable in that it seems to avoid projecting his own reading of Dostoevsky onto Dostoevsky himself. Of course, others may disagree with this - and perhaps my own reading of both authors is colored by my own interpretation.

However, this certainly weighs on the question of how I would rank Berdyaev's critique of Dostoevsky: while it is not the subtlest discussion I have read, it is one of the simplest and in my view "most correct" readings of the author I have encountered. I would go so far as to suggest that Berdyaev's work deserves a primary place in the secondary literature on Dostoevsky. In fact, I would place it alongside Joseph Frank's monumental intellectual and literary biography as recommended companions to Dostoevsky's novels.

Addendum: I should have mentioned Berdyaev's final assessment: "So great is the worth of Dostoevsky that to have produced him is by itself sufficient justification for the existence of the Russian people in the world." And that, my friends, is in my view true.

Deep Dive: Oracle WebCenter Tips and Traps!

Bex Huff - Tue, 2014-04-08 17:26

I'm currently at IOUG Collaborate 2014 in Las Vegas, and I recently finished my 2-hour deep dive into WebCenter. I collected a bunch of tips & tricks in 5 different areas: metadata, contribution, consumption, security, and integrations:


Deep Dive: Oracle WebCenter Content Tips and Traps! from Brian Huff

As usual, a lot of good presentations this year, but the Collaborate Mobile App makes it a bit tough to find them...

Bezzotech will be at booth 1350, right by Oracle. Be sure to swing by and register for a free iPad, or even a free consulting engagement!


Categories: Fusion Middleware

This blog is now closed.

Billy Cripe - Mon, 2013-10-14 12:14

Thank you for visiting.  This blog has been closed down and merged with the WebCenter Blog, which contains blog posts and other information about ECM, WebCenter Content, the content-enabling of business applications and other relevant topics.  Please be sure to visit and bookmark https://blogs.oracle.com/webcenter/ and subscribe to stay informed about these topics and many more.   From there, use the #ECM hashtag to narrow your focus to topics that are strictly related to ECM.

See you there! 

Categories: Fusion Middleware

New Continuous Integration tutorial published

Lynn Munsinger - Mon, 2012-07-02 09:44
Hot off the press – a new continuous integration tutorial. It’s really not just about continuous integration, though! You’ll find it useful even if you aren’t using a continuous integration server like Hudson. It’s useful if you are doing any part of the scenario it documents: Setting up Team Productivity Center for your team and [...]

Advanced ADF eCourse, Part Deux

Lynn Munsinger - Tue, 2012-06-19 15:11
In February, we published the first in a series of FREE(!) online advanced ADF training: http://tinyurl.com/advadf-part1 The response to that course has been overwhelmingly positive as more and more people are moving past the evaluation/prototype stages with ADF and looking for more advanced topics. I’m pleased to relay the good news that the 2nd part [...]

Fun with Hudson, Part 1.1

Lynn Munsinger - Tue, 2012-06-05 09:19
Earlier I posted that I had used the following zip command in the 'execute shell' action for my Hudson build job: zip -r $WORKSPACE/builds/$JOB_NAME-$BUILD_NUMBER * -x '*/.svn/*' -x '*builds/*' This zips up the content of the exported source, so that I can send it on to team members who need the source of each build [...]

Hiring a Curriculum Developer

Lynn Munsinger - Tue, 2012-05-15 09:34
If you are an instructional designer with an eye for technologies like ADF, or if you are an ADF enthusiast and excel at creatively producing technical content, then ADF Product Management would like to hear from you. We’re looking for a curriculum developer to join our ADF Curriculum team, which is tasked with ensuring that [...]


New ADF Insider on Layouts

Lynn Munsinger - Mon, 2012-03-26 13:22
I’ve published an ADF Insider session that helps de-mystify the ADF Faces components and how to work with them (and not against them), when building ADF applications. There’s also some great information on building ADF prototypes. Take a look here: http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/layouts/layouts.html


Number one, with a bullet.

Mike Olson - Wed, 2008-01-16 11:43

IDC has just released its annual vendor share report for embedded databases, and Oracle has taken the top spot.

The numbers are compelling: Oracle's share is 23.2%. A cluster of five other vendors have between 9% and 14% each. The rest is spread broadly, with each vendor commanding 2% or less. Oracle's share grew 23.3%, compared to growth of just under 12% for the sector as a whole.

I am glad to see this for a bunch of reasons. As Vice President of Embedded Technology at Oracle, I take a personal interest, of course. Oracle Berkeley DB, which Oracle acquired with Sleepycat in 2006, is aimed squarely at the embedded space. I have long maintained that embedded opportunities represent a significant source of new revenue and growth. Computers have escaped the data center, and special-purpose systems are getting deployed in living rooms, in the walls of buildings and in shirt pockets. There is an enormous amount of data travelling over networks and touching these systems.

The key to our success in the embedded space has been to assemble a family of products that address a wide range of requirements. A manufacturer building mobile telephone handsets needs to store crucial information reliably. So does a vendor building an optical network switch, and an ISV developing high-performance equity trading systems for financial markets. The three have very different requirements, though, and it's unrealistic to expect any single product to satisfy all of them.

All of our database products -- Oracle Database, Oracle TimesTen, Oracle Berkeley DB and Oracle Lite -- can be embedded in partner systems and deployed invisibly to end users. All contributed to our number one ranking by IDC.

It's not just the technology that has made us successful, though. The people who choose and deploy embedded databases are software developers. In the enterprise, we generally talk to DBAs and CIOs, but in the embedded world, we talk to architects and CTOs. Those conversations are different, and we have had to develop new expertise and new strategies as we have pursued embedded customers. Over the past several years, we've concentrated on building the technical, support and sales expertise necessary to win embedded business in countries around the globe. IDC's vendor share numbers suggest that we're doing okay.

Congratulations to Oracle's Embedded Global business team, and to the product development and support groups for all four products! This is a tremendous accomplishment.

Categories: Fusion Middleware
