
Spark and Databricks

DBMS2 - Sun, 2014-02-02 12:50

I’ve heard a lot of buzz recently around Spark. So I caught up with Ion Stoica and Mike Franklin for a call. Let me start by acknowledging some sources of confusion.

  • Spark is very new. All Spark adoption is recent.
  • Databricks was founded to commercialize Spark. It is very much in stealth mode …
  • … except insofar as Databricks folks are going out and trying to drum up Spark adoption. :)
  • Ion Stoica is running Databricks, but you couldn’t tell that from his UC Berkeley bio page. Edit: After I posted this, Ion’s bio was quickly updated. :)
  • Spark creator and Databricks CTO Matei Zaharia is an MIT professor, but actually went on leave there before he ever showed up.
  • Cloudera is perhaps Spark’s most visible supporter. But Cloudera’s view of Spark’s role in the world is different from the Spark team’s.

The “What is Spark?” question may soon be just as difficult as the ever-popular “What is Hadoop?” That said — and referring back to my original technical post about Spark and also to a discussion of prominent Spark user ClearStory — my try at “What is Spark?” goes something like this:

  • Spark is a distributed execution engine for analytic processes …
  • … which works well with Hadoop.
  • Spark is distinguished by a flexible in-memory data model …
  • … and farms out persistence to HDFS (Hadoop Distributed File System) or other existing data stores.
  • Intended analytic use cases for Spark include:
    • SQL data manipulation.
    • ETL-like data manipulation.
    • Streaming-like data manipulation.
    • Machine learning.
    • Graph analytics.

Except for certain low-latency operations,* anything you can do in Spark can also be done in straight Hadoop; Spark can simply have advantages in performance and programming ease. Spark RDDs (Resilient Distributed Datasets) are immutable at this time, so Spark is not suited for short-request update workloads.

*A new Spark task requires a thread, not a whole Java Virtual Machine.

Everybody agrees that machine learning is a top Spark use case. In particular:

  • Cloudera sees machine learning as the major area of Spark adoption to date.
  • Ion gave me the impression machine learning is one of the major areas of Spark adoption to date.
  • Mike gave me the impression that machine learning was a core intended use case for Spark the first time we talked about it.
  • There’s a machine learning library for Spark, and also a way to use Spark to do distributed R.

I believe data transformation is a major Spark use case as well.

  • Ion gave me that impression, although Cloudera surprisingly did not. Edit: Actually, see Matt Brandwein’s comment below.
  • I have one client (ClearStory) using Spark that way, and a second that’s likely to.
  • It makes sense that the #1 Hadoop use case (to date), which is something Spark also is well-suited for, would be an important early Spark use case as well.

Spark Streaming is fairly new, but is already getting some adoption. Notes on that start:

  • The actual technology is a form of micro-batching. I plan to learn more about that in the future.
  • Cloudera sees streaming as one of the two big Spark use cases, and praises Spark Streaming for its fault tolerance and its great ease of coding.
  • Mike Franklin knows a lot about streaming.

Part of that story is a sudden decline in the reputation of Storm, whose troubles seem to include:

  • Project founder and Twitter employee Nathan Marz seems no longer to be associated with Storm nor employed at Twitter.
  • I am told that in general the Storm community is not all that vibrant.
  • Various aspects of Storm’s technology are disappointing people.

Other notes on Spark use cases include:

  • Impala-loving Cloudera doesn’t plan to support Shark. Duh.
  • Cloudera also won’t at first support any Spark predictive modeling add-on.
  • Ion’s other company, Conviva, is doing some real-time decisioning in Spark.

Spark data management has been enhanced by a project called Tachyon.* The main point of Tachyon is that Spark RDDs (Resilient Distributed Datasets) now persist in memory beyond the life of a job; besides offering the RDDs to other Spark jobs, Tachyon also opens them to Hadoop via an HDFS emulator.

*If there’s ever a Spark/Tachyon management suite, I hope some aspect is named Cherenkov — i.e., the radiation that is measured to detect the passage of tachyons. :)

And finally, some metrics and so on:

  • Databricks has between 10 and 20 employees.
  • Spark has >100 individual contributors from >25 different companies.
  • There was a Spark Summit with >450 attendees (from >180 organizations), and an earlier Spark-mainly conference with >200 attendees.
  • The Spark meet-up group in San Francisco has >1500 members signed up.
  • Various Spark users and subprojects are identified on the Apache Spark pages.

Related link

  • Most of the current substance on Databricks’ website is in its blog.

More on public policy

DBMS2 - Sat, 2014-02-01 05:35

Occasionally I take my public policy experience out for some exercise. Last week I wrote about privacy and network neutrality. In this post I’ll survey a few more subjects.

1. Censorship worries me, a lot. A classic example is Vietnam, which basically has outlawed online political discussion.

And such laws can have teeth. It’s hard to conceal your internet usage from an inquisitive government.

2. Software and software-related patents are back in the news. Google, which said it was paying $5.5 billion or so for a bunch of Motorola patents, turns out to really have paid $7 billion or more. Twitter and IBM did a patent deal as well. Big numbers, and good for certain shareholders. But this all benefits the wider world — how?

As I wrote 3 1/2 years ago:

The purpose of legal intellectual property protections, simply put, is to help make it a good decision to create something.

Why does “securing … exclusive Right[s]” to the creators of things that are patented, copyrighted, or trademarked help make it a good decision for them to create stuff? Because it averts competition from copiers, thus making the creator a monopolist in what s/he has created, allowing her to at least somewhat value-price her creation.

I.e., the core point of intellectual property rights is to prevent copying-based competition. By way of contrast, any other kind of intellectual property “right” should be viewed with great suspicion.

That Constitutionally-based principle makes as much sense to me now as it did then. By way of contrast, “Let’s give more intellectual property rights to big corporations to protect middle-managers’ jobs” is — well, it’s an argument I view with great suspicion.

But I find it extremely hard to think of a technology industry example in which development was stimulated by the possibility of patent protection. Yes, the situation may be different in pharmaceuticals, or for gadgeteering home inventors, but I can think of no case in which technology has been better, or faster to come to market, because of the possibility of a patent-law monopoly. So if software and business-method patents were abolished entirely – even the ones that I think could be realistically adjudicated – I’d be pleased.

3. In November, 2008 I offered IT policy suggestions for the incoming Obama Administration, especially: 

  1. Pick the right Chief Technology Officer.
  2. Fix the government technology contracting process in general.
  3. Fix the air traffic control system in particular.
  4. Generally take a businesslike approach to government IT. Obama’s focus on making government “transparent” and searchable would be just one byproduct of that effort.
  5. Continue to beef up internal search and knowledge management (remember the FBI agent who guessed the 9/11 plans, but couldn’t communicate his ideas to anybody who cared).
  6. Write privacy laws of the sort that will, for example, allow electronic health records to be adopted without great fear of misuse. (I have some strong opinions as to what form those laws should take.)
  7. Drastically beef up math education!! (Science too, but math is especially important.) This takes leadership to convince people it’s CRUCIAL to be numerate, perhaps even more than it takes specific policy initiatives. Little else is as important.

and

… we need an experienced technology implementation leader to:

  • Recommend major changes in government IT contracting. Right now, information technology is bought at the wrong level of granularity, too coarse and too fine at once. Private sector CIOs make broad technology architecture decisions, then make incremental purchases as needed. Public sector IT managers, however, are generally compelled to make purchases on a “project” basis, which allows neither the sanity of broad-scale planning nor the economies and adaptability of just-in-time acquisition.
  • Establish best practices in a broad range of IT areas. Obama’s “transparency” initiative involves pushing the state of the art in public-facing technology for search, query, and audio/video, at a minimum. Other areas of major technical challenge include internal search, knowledge management, and social networking; disaster robustness; planning in the face of political budgeting uncertainty; numbers-based management without the benefit of a profit/loss statement … and the list could easily be twice as long.
  • Interact with the private sector. From electronic health records to the general supply chain, there are huge opportunities for public/private interoperability, quite apart from the obvious customer/vendor relationships the government has with the IT industry.
  • Improve training, recruiting, and retention. Anywhere government needs employees whose skills are also in high demand in the private sector, government pay scales cause difficulties. IT is a top area for that problem. Outstanding leadership is needed to overcome it.

Little of that actually happened.

Kudos if you noticed the link — which I herewith repeat — to what I wrote about privacy in 2006. :)

In particular — and even after the HealthCare.gov fiasco — I think few voters or legislators understand how incredibly broken government IT contracting is. Almost all major projects go through a five-stage process:

  • Specify.
  • Bid.
  • Select.
  • Complain.
  • Adjudicate.

Re-competes usually follow as well.

And so government IT is subject to extreme forms of two inevitable project killers:

  • Waterfall methodology.
  • Delay.

Procurement cycles take years, and in the worst cases decades. Project specifications are often fixed until the next procurement, which is often 7-10 years down the road. This, to put it mildly, is the opposite of agility, and widespread project failure ensues.


The report of Obama’s Snowden-response commission

DBMS2 - Mon, 2014-01-27 14:14

In response to the uproar created by the Edward Snowden revelations, the White House commissioned five dignitaries to produce a 300-page report, released last December 12. (Official name: Report and Recommendations of The President’s Review Group on Intelligence and Communications Technologies.) I read or skimmed a large minority of it, and I found enough substance to be worthy of a blog post.

Many of the report’s details fall in the buckets of bureaucratic administrivia,* internal information security, or general pabulum. But the commission started with four general principles that I think have great merit.

*One big item — restrict the NSA to foreign intelligence, and split off domestic cyber defense into a separate organization.

The United States Government must protect, at once, two different forms of security: national security and personal privacy.

… It might seem puzzling, or a coincidence of language, that the word “security” embodies such different values. But the etymology of the word solves the puzzle; there is no coincidence here. In Latin, the word “securus” offers the core meanings, which include “free from care, quiet, easy,” and also “tranquil; free from danger, safe.”

Key point: The report rejects any idea that national security concerns should run roughshod over individual liberty.

The central task is one of risk management; multiple risks are involved, and all of them must be considered. …

  • Risks to privacy;
  • Risks to freedom and civil liberties, on the Internet and elsewhere;
  • Risks to our relationships with other nations; and
  • Risks to trade and commerce, including international commerce.

… If people are fearful that their conversations are being monitored, expressions of doubt about or opposition to current policies and leaders may be chilled, and the democratic process itself may be compromised.

… These points make it abundantly clear that if officials can acquire information, it does not follow that they should do so.

I am always pleased when policy makers recognize that the key issue is chilling effects upon the exercise of ordinary freedoms; the report made that point multiple times, footnoting both Sonia Sotomayor and the 1970s Church Commission. (Search the document for chill to see where.)

The idea of “balancing” has an important element of truth, but it is also inadequate and misleading.

… The purposes of surveillance must be legitimate. If they are not, no amount of “balancing” can justify surveillance. For this reason, it is exceptionally important to create explicit prohibitions and safeguards, designed to reduce the risk that surveillance will ever be undertaken for illegitimate ends.

Exceptionally important indeed.

The government should base its decisions on a careful analysis of consequences, including both benefits and costs (to the extent feasible).

Government officials, even more than other large-organization employees, have the tendency to avoid job failure at all costs. This goes triple when they work on life-and-death issues. Even so, sometimes security can be pursued with too much vigor, and much of the United States’ post-9/11 history directly bears that out.

And here’s the part I like best of all (emphasis mine):

We recommend that, if the government legally intercepts a communication under section 702 … and if the communication either includes a United States person as a participant or reveals information about a United States person:

(1) any information about that United States person should be purged upon detection unless it either has foreign intelligence value or is necessary to prevent serious harm to others;

(2) any information about the United States person may not be used in evidence in any proceeding against that United States person;

I’ve felt for years that a deciding issue in the preservation of liberty will be what kinds of information are admissible in court, or otherwise may be used to hurt people. All safeguards on data collection and retention notwithstanding, huge datasets will be created and maintained. Continued liberty requires careful limitation of how they may be used against us.


Net neutrality and sponsored data — a middle course

DBMS2 - Mon, 2014-01-27 08:36

Thanks to a court decision that overturned some existing regulations, network neutrality is back in the news. Most people think the key issue is whether

  • Telecommunication companies (e.g. wireless and/or broadband services providers) should be allowed to charge …
  • … other internet companies (website owners, game companies, streaming media providers, etc., collectively known as edge providers) for …
  • … shipping data to internet service consumers in particularly attractive ways.

But I think some forms of charging can be OK — albeit not the ones currently being discussed — and so the question should instead be how the charges are designed.

When I wrote about network neutrality in 2006-7, the issue was mainly whether broadband providers would be allowed to ship different kinds of data at different speeds or reliability. Now the big controversy is whether mobile data providers should be allowed to accept “sponsorship” so as to have certain kinds of data not count against mobile data plan volume caps. Either way:

  • The “anything goes” strategy has obvious free-market appeal.
  • But proponents of network neutrality regulation — such as Fred Wilson and Nilay Patel — point out a major risk: By striking deals that smaller companies can’t imitate, large, established “edge provider” services may strangle upstart competitors in their cribs.

I think the anti-discrimination argument for network neutrality has much merit. But I also think there are some kinds of payment structure that could leave the playing field fairly level. Imagine, if you will, that:

  • Consumers are charged for data, speed of connection, reliability of delivery, or anything else, but …
  • … internet companies have the ability to absorb those charges on consumers’ behalf, but can only do so …
  • … one interaction at a time, with no volume discounts, via an automated system that is open to everybody.

Such a system is surely technologically feasible — indeed, it is at least as feasible as the online advertising networks that already exist. Further, it would be possible for the system to have nice features such as:

  • Telcos could implement forms of peak load pricing, for those times when their network capacity actually is under stress.
  • “Edge provider” internet companies could pay subsidies only on behalf of certain consumers, where those consumers are selected in all the complex ways that advertisements are currently targeted.

In such a setup, which discrimination fears would or would not be realized?

  • Startups that hope to get adoption first and monetize second might face the cash cost of actually paying their users to try their services. Sorry. But at least they could target their spend on whoever they viewed as being the most important potential adopters.
  • Large vendors could not negotiate preferential pricing, reciprocal deals, or anything like that. At least, they couldn’t do so directly.
  • Discrimination by type of service – for example telcos trying to hamstring communications services that compete with their own offerings – could be staved off, via fairly lightweight regulatory oversight of the ways pricing plans are structured.
  • Regulators could head off sneaky “sweetheart deals” between big “edge provider” companies and telcos in much the same way.

I have no great objections to extreme net neutrality; behemoth oligopolist telcos should be among the last companies to cry “Un-free markets, boo-hoo-sob!!” But as internet pipes are increasingly used for telephony, streaming media or even medical consultations, drawing quality-of-service distinctions could have a certain merit. And so, for reasons similar to those I outlined in 2007, I still lean toward the partial network neutrality described above.

Related links

  • Wired articulated some of the dangers of a no-net-neutrality world.
  • Tech Republic mapped part of the legal and political net neutrality morass.

Collaborate 2014 – 79 days until 79 degrees and Poolside WebCenter Discussions!

79 days.
Vegas.
Collaborate 2014.
79° F (average temp. in Vegas during April)

Fishbowl Solutions has another full list of activities planned for Collaborate 14. We look forward to discussing your Oracle WebCenter Content and Portal initiatives with you – hopefully poolside while we enjoy the warm weather together. It is a balmy 3° F in Minneapolis right now…

Here is a sneak peek of Fishbowl’s activities for Collaborate 2014:

Booth: 1453
Demos of our WebCenter Portal Solution Accelerator, SharePoint Connector 3.0, Google Search Connector for WebCenter and many more.

Presentations: 5

Here are the titles and abstracts of the sessions that Fishbowl is presenting or co-presenting.

  • A Successful WebCenter Upgrade: What You Need to Know
    If your organization has not upgraded its WebCenter Content, Portal, or Imaging environment from pre-11g to 11g, then this 5-hour session is for you. Join WebCenter Content and Portal Specialized partner, Fishbowl Solutions, as they share facts and use cases that you will be able to apply to your WebCenter 11g upgrade. Fishbowl Solutions will be joined by customers who have successfully upgraded to 11g and therefore will be able to share their learnings, tips, and best practices that you will be able to utilize as well. This session will include a fact-sharing discussion on upgrades, use case stories from WebCenter customers, and a roundtable forum where attendees will be able to ask questions specific to their Content, Portal, or Imaging upgrade. If you are planning your WebCenter upgrade and it seems daunting, or you aren’t sure where to begin, come to this session to collect first-hand and actionable information to get your upgrade project started and successfully completed.
  • Delivering on the Oracle WebCenter Portal Composite Application Vision – Top 5 Lessons Learned
    Most organizations see the benefit of creating composite web applications that aggregate services from disparate corporate and 3rd-party systems into a cohesive capability that efficiently supports business processes, driving self-service interactions for employees, customers and suppliers. The challenge is how to deliver on this vision, where to start, and how to plan and execute against your roadmap. Join us in this session as we discuss the Top 5 Lessons Learned at Rolls Royce in deploying WebCenter Portal, and how we were able to bridge content from multiple sources and surface that content to the right person, at the right time, and in the right context, to support our global customer portal.
  • Taking the WebCenter Portal User Experience to the Next Level
  • Leveraging BPM workflows for Accounts Payable Processing
    Accounts Payable departments are looking to create a more streamlined, paperless business process. By achieving this, AP departments, along with HR and many other departments, are seeing huge ROI when converting from paper to digital management. One key piece of this is the approval workflow for these documents. Oracle Business Process Management, alongside Oracle WebCenter Imaging, helps trigger an approval workflow that routes documents to different approvers to be acted upon. This session will describe how BPM workflows can be used for Accounts Payable processing and how they can be implemented with popular ERP applications like PeopleSoft and E-Business Suite.
  • Understanding Your Options for Searching Oracle WebCenter
    Search is a critical part of any effective content management solution. Without it, documents, web pages, policies, and other enterprise resources cannot be easily surfaced to end users. This session will explore the search technologies available to Oracle WebCenter customers including metadata-only search, Oracle Text Search, and Secure Enterprise Search, as well as the search functionality available with the Google Search Appliance. Attendees will get a side-by-side comparison of these search options covering the pros and cons of each technology and the use cases most suited to their capabilities. Whether you’re using WebCenter to power your website, intranet, or document management system, join this session to learn the differences between your search options and determine which one is best for you.

More information regarding the sessions above, as well as all of the scheduled sessions for Oracle WebCenter, can be found here: http://collaborate14.ioug.org/schedule

Did I mention it is 3° in Minneapolis right now with a high of 10° expected?! April, and Collaborate, can’t come soon enough.

The pool deck at the Venetian

 

 


Enhancing the WebCenter Portal ADF Template – 3 easy steps for front-end developers.

Here are a few tips for creating new ADF Templates for WebCenter Portal.
These tips are for front-end developers applying a branded template or integrating their own custom JavaScript enhancements.

There are two approaches in wide use. The first option is to use pure ADF for everything; the second, which I follow, is a hybrid approach that uses HTML and JSTL tags only for templating, as I feel it’s easier for web designers to skin and maintain a lightweight front end without the need to learn ADF techniques.

Read on for tips on templating -

Let’s start off with a clean ADF Page Template first -

<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" 
	xmlns:f="http://java.sun.com/jsf/core" 
	xmlns:h="http://java.sun.com/jsf/html" 
	xmlns:af="http://xmlns.oracle.com/adf/faces/rich" 
	xmlns:pe="http://xmlns.oracle.com/adf/pageeditor" 
	xmlns:wcdc="http://xmlns.oracle.com/webcenter/spaces/taglib" 
	xmlns:trh="http://myfaces.apache.org/trinidad/html" 
	xmlns:c="http://java.sun.com/jsp/jstl/core" 
	xmlns:fn="http://java.sun.com/jsp/jstl/functions" 
	xmlns:fmt="http://java.sun.com/jsp/jstl/fmt">
<af:pageTemplateDef var="attrs">
<af:xmlContent>
	<component xmlns="http://xmlns.oracle.com/adf/faces/rich/component">
		<display-name>
			Clean Portal Template
		</display-name>
		<facet>
			<facet-name>
				content
			</facet-name>
			<description>
				Facet for content Composer
			</description>
		</facet>
	</component>
</af:xmlContent>

<!-- Content Composer Container -->
<af:group>
	<af:facetRef facetName="content"/>
</af:group>
<!-- xContent Composer Container -->

</af:pageTemplateDef>
</jsp:root>

The first thing to do is add the files to be included in the <head></head> of the generated template.

So first let’s add a generic CSS file, i.e. global.css. This is not the ADF Skin and should not contain any ADF skinning syntax, i.e. af|panelGroupLayout {}, hacks like .af_panelGroupLayout {}, or compressed ADF CSS classes, i.e. .xyz {}.

<af:resource type="css" source="//css/global.css"/>

This af:resource tag will put JavaScript or CSS files, based on the type attribute, into the DOM <head></head> of the generated template.

If you’re like me – I like to modularise my CSS files into multiple maintainable files like this -

/*Require JS will compress and pull these files into one*/
@import url("import/normalize.css");
@import url("import/main.css");
@import url("import/bootstrap.css");
@import url("import/psa_components.css");
@import url("import/skin.css");
@import url("import/iscroll.css");
@import url("import/responsive.css");
@import url("import/font-awesome.css");
@import url("import/ie8.css");
@import url("import/cache.css");

So you can see global.css acts as a CSS module container, importing the rest of the files. This allows me to maintain and update the CSS files individually, i.e. normalize, bootstrap, iscroll, etc.

What’s also really useful is that, with the RequireJS library, when I move the files from DEV to SIT or Live, RequireJS will compress and pull all those modules into a single global.css, removing the imports and improving load times.
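As a point of reference, that build step can be driven by a minimal r.js profile along these lines (a sketch; the file paths are hypothetical). The optimizeCss: "standard" option inlines the @import rules and strips comments, so the modular DEV stylesheet ships as one flat file -

// build-css.js - a minimal r.js build profile (paths are hypothetical).
// "optimizeCss: 'standard'" inlines @import rules and strips comments,
// flattening the modular global.css into one stylesheet for SIT/Live.
({
	cssIn: "css/global.css",
	out: "css-built/global.css",
	optimizeCss: "standard"
})

Run it with node r.js -o build-css.js; the same cssIn/out pair can also be passed directly on the command line.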

Next, let’s add some base scripts to load first in the head, before the rest of the page loads.

<af:resource type="javascript" source="//js/libs/plugins.js"/>
<af:resource type="javascript" source="//js/libs/modernizr-2.0.6.min.js"/>

So first let’s discuss plugins.js

/**********************
 * Avoid `console` errors in browsers that lack a console.
 */
(function() {
    var method;
    var noop = function () {};
    var methods = [
        'assert', 'clear', 'count', 'debug', 'dir', 'dirxml', 'error',
        'exception', 'group', 'groupCollapsed', 'groupEnd', 'info', 'log',
        'markTimeline', 'profile', 'profileEnd', 'table', 'time', 'timeEnd',
        'timeStamp', 'trace', 'warn'
    ];
    var length = methods.length;
    var console = (window.console = window.console || {});

    while (length--) {
        method = methods[length];

        // Only stub undefined methods.
        if (!console[method]) {
            console[method] = noop;
        }
    }
}());
/************************/

This defines an empty (no-op) method for any console function the browser does not provide.
For more info on the JS console and debugging JavaScript: (Chrome Dev Tools) || (FireFox FireBug)

I often leave console.info, log, and error calls in my JavaScript, which makes it easier to debug during early development. These echo methods will be stripped out when the code is compressed and moved to SIT/Live with RequireJS, or ignored if console is not defined.
(more to come on requirejs in the next post)

This is common for browsers like IE, or Firefox without Firebug enabled – any console calls found would create a JS error if the above script were not included.

JS Method Chaining and Initialisation

/**********************
 * Define FB Base Chain if not defined
 */
var FB = window.FB || {};
//create Base object 
FB.Base = FB.Base || (function() {
	return {
		//create multi-cast delegate.
		onPortalInit: function(function1, function2) {
			return function() {
				if (function1) {
					function1();
				}
				if (function2) {
					function2();
				}
			}
		},
		//used for chaining methods
		chainPSA: function() {}
	}
})();
/************************/

This is important, as I use the RequireJS library as a module loader at the footer of the template. RequireJS asynchronously loads my dependency libraries, i.e. jQuery, Mustache templates, and my own custom JS libs, as well as providing a great tool for optimising and merging the libraries into one file, like the CSS above.

After loading the dependencies the chain method is initialised.

(Why load JS files at the footer and not in the <head></head> with af:resource?
Read this - Yahoo’s explanation of why it is best practice for scripts to load there.) (btw – Modernizr needs to load in the <head></head>)

The chain method above allows me to have multiple portlets that chain their methods so they initialise only once all the dependencies have been loaded on the page. Think of it as an onReady event that fires only when the page has loaded and all scripts in the footer have asynchronously loaded; then, and only then, are all methods initialised from the page, portlets, or containers that are wrapped in the chain method and require jQuery, for example. Without this, if I were to use a jQuery method in the portlet body but initialised the jQuery script in the footer, the page would throw a JS error, as the portlet’s jQuery call would run before the jQuery API had loaded.

How to set up inline methods -
in the template or portlet body that can access a library method after it asynchronously loads in the footer -

FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//JS to initialises after Async load of required libs
	//ie lets alert the jquery version
	alert('You are running jQuery Version ' + $.fn.jquery);
});

This way you can write out FB.Base.chainPSA multiple times throughout your template to register all the methods that need to be initialised after the page has loaded.
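For instance, two independent portlets on the same page can each append their own init to the chain (a sketch built on the pattern above) -

//Portlet A appends its init to the chain
FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//initialise portlet A widgets that depend on jQuery here
});

//Portlet B, elsewhere in the page, does the same
FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//initialise portlet B widgets here
});

The single check in the footer (shown next) then executes both callbacks, in the order they were registered.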

How to Initialise chainPSA
after all libs have loaded.

//load PSA javascript files 
if (FB.Base.chainPSA) {
	FB.Base.chainPSA();
}

So first check that chainPSA exists, then execute all the methods; that’s all there is to it.

An alternative solution I often use is to set a global JS variable flag. This enables me to contain and compress the forum portlet scripts in a single file that reads the configuration and data attributes from the portlet after all the JS dependencies have loaded in my main footer script. Once loaded, the footer script checks whether the flag exists and then asynchronously loads all the required portlet files and dependencies (a fuller sketch of the footer side follows the two snippets below), i.e.

Portlet contains inline JS or JS script
which will be injected into the head - 

<af:resource type="javascript">

var WCP = WCP || {};
WCP.Portlet  = WCP.Portlet || {};
WCP.Portlet.enableForums = true;

</af:resource>

Footer Script
initialises the following after page load -

if (WCP.Portlet.enableForums) {
//Async call required forum files.
}
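
Here is a fuller sketch of that footer check; the module names passed to require() and the forum.init() method are hypothetical -

//Footer script (sketch) - only pull in the forum code and its
//dependencies when a portlet on this page has raised the flag.
if (window.WCP && WCP.Portlet && WCP.Portlet.enableForums) {
	require(['jquery', 'portlets/forum'], function($, forum) {
		//hypothetical init method on the async-loaded forum module
		forum.init();
	});
}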

 

Setting up Global reusable variables using JSTL

//Create FB Obj;
	var FB = FB || {};

	//Global Namespace get properties.
	//http://docs.oracle.com/cd/E25054_01/webcenter.1111/e10149/wcsugappb.htm#
	FB.Global = {
		wcApp: {
			defaultSkin:			'${fn:escapeXml(WCAppContext.application.applicationConfig.skin)}',
			logo:					'${fn:escapeXml(WCAppContext.application.applicationConfig.logo)}',
			resourcePath:			'${fn:escapeXml(WCAppContext.spacesResourcesPath)}',
			requestedSkin:			'${fn:escapeXml(requestContext.skinFamily)}',
			title:					'${fn:escapeXml(WCAppContext.application.applicationConfig.title)}',
			URL:					'${fn:escapeXml(WCAppContext.applicationURL)}',
			webCenterURI:			'${fn:escapeXml(WCAppContext.currentWebCenterURI)}'
		},
		spaceInfo: {
			description:			'${fn:escapeXml(spaceContext.currentSpace.GSMetadata.description)}',
			displayName:			'${fn:escapeXml(spaceContext.currentSpace.GSMetadata.displayName)}',
			keywords:				'${fn:escapeXml(spaceContext.currentSpace.metadata.keywords)}',
			name:					'${fn:escapeXml(spaceContext.currentSpaceName)}'
		},
		//custom Fishbowl lib
		restInfo: {
			trustServiceToken:		'${fb_rtc_bean.trustServiceToken}'
		},
		pageInfo: {
			createDateString:		'${fn:escapeXml(pageDocBean.createDateString)}',
			createdBy:				'${fn:escapeXml(pageDocBean.createdBy)}',
			lastUpdateDateString:	'${fn:escapeXml(pageDocBean.lastUpdateDateString)}',
			lastUpdatedBy:			'${fn:escapeXml(pageDocBean.lastUpdatedBy)}',
			pageName:				'${fn:escapeXml(pageDocBean.name)}',
			pagePath:				'${fn:escapeXml(pageDocBean.pagePath)}',
			pageTitle:				'${fn:escapeXml(pageDocBean.title)}',
			pageUICSSStyle:			'${fn:escapeXml(pageDocBean.UICSSStyle)}'
		},
		userInfo: {
			businessEmail:			'${fn:escapeXml(webCenterProfile[securityContext.userName].businessEmail)}',
			department:				'${fn:escapeXml(webCenterProfile[securityContext.userName].department)}',
			displayName:			'${fn:escapeXml(webCenterProfile[securityContext.userName].displayName)}',
			employeeNumber:			'${fn:escapeXml(webCenterProfile[securityContext.userName].employeeNumber)}',
			employeeType:			'${fn:escapeXml(webCenterProfile[securityContext.userName].employeeType)}',
			expertise:				'${fn:escapeXml(webCenterProfile[securityContext.userName].expertise)}',
			managerDisplayName:		'${fn:escapeXml(webCenterProfile[securityContext.userName].managerDisplayName)}',
			moderator:				'${fn:escapeXml(security.pageContextCommunityModerator)}',
			organization:			'${fn:escapeXml(webCenterProfile[securityContext.userName].organization)}',
			organizationalUnit:		'${fn:escapeXml(webCenterProfile[securityContext.userName].organizationalUnit)}',
			timeZone:				'${fn:escapeXml(webCenterProfile[securityContext.userName].timeZone)}',
			title:					'${fn:escapeXml(webCenterProfile[securityContext.userName].title)}'
		}
	};

Sometimes there are values from WebCenter that you wish to use, i.e. the space name or user name. The easiest way is to escape these values with JSTL into a JavaScript object within the page template. I’ve put a quick example above; you can strip it out if you don’t need any values, but it makes it easier to pull values into other JS libs by calling the key/value pair from the object, like this for the user display name -

var userDisplayName = FB.Global.userInfo.displayName;
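
The same globals can feed any downstream script that needs portal context. A quick sketch, assuming jQuery has already loaded (e.g. inside a chainPSA callback); the REST URL shape here is illustrative, not taken from the template -

//Fetch details for the current space using the escaped JSTL values
var spaceName = FB.Global.spaceInfo.name;
$.ajax({
	url: '/rest/api/spaces/' + encodeURIComponent(spaceName),
	dataType: 'json',
	success: function(data) {
		console.log('Space data loaded for ' + FB.Global.userInfo.displayName);
	}
});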

Setting RequireJS to load my dependencies via base bootstrap script
(Read this - For more info on module loading and using requirejs)  

<script src="/js/core/config.js"><jsp:text/></script>
<script src="/js/libs/requirejs/require.min.js" data-main="bootstrap"><jsp:text/></script>

Now, since you can use HTML in JSF templates and I don’t want my scripts in the head (which is what af:resource enables), I write out the <script> tag directly. A word of warning: you may have spotted <jsp:text/>. This prevents the script tags from being self-closed and breaking, which will happen if you edit the page templates directly at runtime from the browser. The same goes for any empty container, i.e. <div></div> would become the self-closing <div/>; this is fine with XML but not fine when the browser interprets the HTML mark-up in the DOM.

Also you may want to consider putting this into the login template to pre-load and cache the initial scripts before the portal page loads all of the ADF JS lib dependencies.

The final template
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" 
	xmlns:f="http://java.sun.com/jsf/core" 
	xmlns:h="http://java.sun.com/jsf/html" 
	xmlns:af="http://xmlns.oracle.com/adf/faces/rich" 
	xmlns:pe="http://xmlns.oracle.com/adf/pageeditor" 
	xmlns:wcdc="http://xmlns.oracle.com/webcenter/spaces/taglib" 
	xmlns:trh="http://myfaces.apache.org/trinidad/html" 
	xmlns:c="http://java.sun.com/jsp/jstl/core" 
	xmlns:fn="http://java.sun.com/jsp/jstl/functions" 
	xmlns:fmt="http://java.sun.com/jsp/jstl/fmt">
<af:pageTemplateDef var="attrs">
<af:xmlContent>
	<component xmlns="http://xmlns.oracle.com/adf/faces/rich/component">
		<display-name>
			Clean Portal Template
		</display-name>
		<facet>
			<facet-name>
				content
			</facet-name>
			<description>
				Facet for content Composer
			</description>
		</facet>
	</component>
</af:xmlContent>

<!-- Put resources into head -->
<af:resource type="css" source="//css/global.css"/>
<af:resource type="javascript" source="//js/plugins.js"/>
<af:resource type="javascript" source="//js/libs/modernizr-2.0.6.min.js"/>
<!-- xPut resources into head -->

<!-- Define static global FB namespace and vars -->
<script>
	//Create FB Obj;
	var FB = FB || {};

	//Global Namespace get properties.
	//http://docs.oracle.com/cd/E25054_01/webcenter.1111/e10149/wcsugappb.htm#
	FB.Global = {
		wcApp: {
			defaultSkin:			'${fn:escapeXml(WCAppContext.application.applicationConfig.skin)}',
			logo:					'${fn:escapeXml(WCAppContext.application.applicationConfig.logo)}',
			resourcePath:			'${fn:escapeXml(WCAppContext.spacesResourcesPath)}',
			requestedSkin:			'${fn:escapeXml(requestContext.skinFamily)}',
			title:					'${fn:escapeXml(WCAppContext.application.applicationConfig.title)}',
			URL:					'${fn:escapeXml(WCAppContext.applicationURL)}',
			webCenterURI:			'${fn:escapeXml(WCAppContext.currentWebCenterURI)}'
		},
		spaceInfo: {
			description:			'${fn:escapeXml(spaceContext.currentSpace.GSMetadata.description)}',
			displayName:			'${fn:escapeXml(spaceContext.currentSpace.GSMetadata.displayName)}',
			keywords:				'${fn:escapeXml(spaceContext.currentSpace.metadata.keywords)}',
			name:					'${fn:escapeXml(spaceContext.currentSpaceName)}'
		},
		//custom Fishbowl lib
		restInfo: {
			trustServiceToken:		'${fb_rtc_bean.trustServiceToken}'
		},
		pageInfo: {
			createDateString:		'${fn:escapeXml(pageDocBean.createDateString)}',
			createdBy:				'${fn:escapeXml(pageDocBean.createdBy)}',
			lastUpdateDateString:	'${fn:escapeXml(pageDocBean.lastUpdateDateString)}',
			lastUpdatedBy:			'${fn:escapeXml(pageDocBean.lastUpdatedBy)}',
			pageName:				'${fn:escapeXml(pageDocBean.name)}',
			pagePath:				'${fn:escapeXml(pageDocBean.pagePath)}',
			pageTitle:				'${fn:escapeXml(pageDocBean.title)}',
			pageUICSSStyle:			'${fn:escapeXml(pageDocBean.UICSSStyle)}'
		},
		userInfo: {
			businessEmail:			'${fn:escapeXml(webCenterProfile[securityContext.userName].businessEmail)}',
			department:				'${fn:escapeXml(webCenterProfile[securityContext.userName].department)}',
			displayName:			'${fn:escapeXml(webCenterProfile[securityContext.userName].displayName)}',
			employeeNumber:			'${fn:escapeXml(webCenterProfile[securityContext.userName].employeeNumber)}',
			employeeType:			'${fn:escapeXml(webCenterProfile[securityContext.userName].employeeType)}',
			expertise:				'${fn:escapeXml(webCenterProfile[securityContext.userName].expertise)}',
			managerDisplayName:		'${fn:escapeXml(webCenterProfile[securityContext.userName].managerDisplayName)}',
			moderator:				'${fn:escapeXml(security.pageContextCommunityModerator)}',
			organization:			'${fn:escapeXml(webCenterProfile[securityContext.userName].organization)}',
			organizationalUnit:		'${fn:escapeXml(webCenterProfile[securityContext.userName].organizationalUnit)}',
			timeZone:				'${fn:escapeXml(webCenterProfile[securityContext.userName].timeZone)}',
			title:					'${fn:escapeXml(webCenterProfile[securityContext.userName].title)}'
		}
	};
</script>
<!-- xDefine static global namespace and vars -->

<!-- Content Composer Container -->
<af:group>
	<!-- Add any custom HTML HERE -->
	<div id="FB-wrapper" class="wrapper">
		<af:facetRef facetName="content"/>
	</div>
	<!-- xAdd any custom HTML HERE -->
</af:group>
<!-- Content Container -->

<!-- Init RequireJS -->
<script src="/js/core/config.js"><jsp:text/></script>
<script src="/js/libs/requirejs/require.min.js" data-main="bootstrap"><jsp:text/></script>
<!-- xInit RequireJS -->

</af:pageTemplateDef>
</jsp:root>

** I haven’t added a navigation structure to this template, only the Content Composer facet.

Designing for mobile with Responsive Design.

When learning about responsive design, the first thing to do is add the following meta viewport tag into the head, defining the required content params for mobile, i.e. -

<meta name="viewport" content="width=device-width, initial-scale=1">

Now, unfortunately there is no ADF tag to add meta tags into the header, and although you could add this HTML to the <body></body>, it’s not really ideal.

What you should do is add this meta viewport to the Page Style, not the Page Template.
This will allow you to add the following Trinidad tag to generate the meta tag into the <head></head> of the generated template, as the metaContainer facet specifies this region to generate into.

<f:facet name="metaContainer">
	<af:group>
		<trh:meta name="viewport" content="width=device-width, initial-scale=1"/> 
	</af:group>
</f:facet>

The metaContainer facet should be held within the <af:document></af:document> tags.

You could also add other meta tags ie keywords and add a dynamic value like this -

<trh:meta name="keywords" content="#{bindings.SEO_KEYWORDS}"/>

A word of warning: you must make sure you add this to the Page Style before you create a page. Existing pages will ignore any updates to Page Styles that were used to create them, unlike Page Templates, which allow you to tweak and update on the fly.

In the next post I’ll be writing up how to use requireJS properly with WebCenter to Asynchronously load in your libraries, templates and request additional libraries or templates when required by the page.


WebCenter Portal, ADF, REST API & JS templating – another approach to faster portlets.

So there are a few ways to create portlets, widgets and gadgets for WebCenter -

I’m going to show a few tips and tricks and one of the methods I use to create faster, interactive portlets that leverage the services available via the WebCenter REST API and JavaScript.

As an example I will show you the basics of recreating the “OOTB WebCenter Portal forum taskflow” that hooks into JIVE, as a lightweight, async, JS-template-driven portlet that you can drag and drop from the resource catalogue and supply with configurable values.

This will enable marketing or IT teams with no knowledge of ADF to manage, customise, and enhance the look and feel of the portlet with just HTML and CSS skills. They also will not need to redeploy the portlet via WebLogic to apply enhancements, provided they have access to the files on the server, i.e. via FTP or SCP. You could also host these files on the content server, if it’s externally facing, to handle revisioning and version control, much like Site Studio.

Click here to see the viewlet

Here’s a quick video that shows the OOTB Spaces Forum on the bottom against a simple one on the top that I created using this approach. You’ll also notice I added an upload capability that allows the user to upload docs into WebCenter Content, associated with the JIVE forum, which is not part of the OOTB taskflow capability.

Read on to view the guides on how to recreate this approach and learn some useful tips along the way -

Over the next month I will be posting articles on how to achieve this (keep an eye out).

Step 1) ENHANCE WEBCENTER LOGIN TEMPLATE
How to pre-authenticate with the REST API and WebCenter Content if you don’t have SSO enabled.

Step 2) ENHANCE ADF PAGE TEMPLATE
Taking the ADF Page to the next level – improve load times, caching via asynchronous calls to additional assets and JavaScript libs. (With RequireJS)

Step 3) REQUIREJS LOADING DEPENDENCIES
How to use RequireJS with WebCenter Portal effectively

Step 4) MOUSTACHE JS TEMPLATING
Moustache Template driven portlets

Step 5) EVENT DRIVEN PORTLET INTERACTION WITH “HTML5 DATA ATTRIBUTES”
How to navigate and interact between templates/screens with event listening

Step 6) SETTING UP THE FORUM PARAMETRISED PORTLET
How to create your first parametrised portlet for WebCenter Portal and deploy it to the resource catalogue 

Step 7) REQUIREJS COMPRESSING/MERGING JS AND CSS ASSETS INTO A SINGLE FILE WITH NODEJS
Compress all libraries to Single script and CSS file that loads and caches all dependencies

Step 8) WebCenter Caching and Compression 
How to get the best performance from your portal!

Step 9) TIPS AND TRICKS 
Additional tips for integrating template driven portlets for WebCenter


The games of Watson

DBMS2 - Thu, 2014-01-09 14:57

IBM excels at game technology, most famously in Deep Blue (chess) and Watson (Jeopardy!). But except at the chip level — PowerPC — IBM hasn’t accomplished much at game/real world crossover. And so I suspect the Watson hype is far overblown.

I believe that for two main reasons. First, whenever IBM talks about big initiatives like Watson, it winds up bundling a bunch of dissimilar things together and claiming they’re a seamless whole. Second, some core Watson claims are eerily similar to artificial intelligence (AI) over-hype three or more decades past. For example, the leukemia treatment advisor now hopefully being built in Watson sounds a lot like MYCIN from the early 1970s, and the idea of collecting a lot of tidbits of information sounds a lot like the Cyc project. And by the way:

  • MYCIN led to E-MYCIN, which led to the company Teknowledge, which raised a lot of money* but now has almost faded from memory.
  • Cyc is connected to the computer science community’s standard unit of bogosity.

*Much of it, I’m ashamed to say, with my help, back in my stock analyst days.

AI is something of an umbrella category, often just meaning “Computerized stuff that we don’t know how to do yet”, or ” … only recently figured out how to do.” Automated decision-making is an aspect of AI, for example, but so also is natural language recognition. It used to be believed that most AI should be approached in the same way:

  • Come up with a clever way to represent knowledge.
  • Match the actual situation against the knowledge.
  • Produce a smart result.

But that template unfortunately proved disappointing time after time. The problem was typically that not enough knowledge could in practice be represented, and thus well-informed automated decisions could not be made. In particular, there was a “first step fallacy,” in which a demo system would solve a “toy problem”, but robust real-life systems never emerged.

Of course, there are exceptions to this general rule of disappointment; for example, Teknowledge and its fellow over-hyped expert system technology vendors of the 1980s (Intellicorp, Inference, et al.) did get a few solid production references. But the ones I remember best (e.g. American Express credit, United Airlines seat pricing, some equipment maintenance scheduling) were often for use cases that we’d now address in more straightforwardly mathematical ways.

Watson is generally promoted as helping with decision-making, but that message has to be scrutinized carefully. So far as I’ve been able to guess, the true core technology of IBM Watson is extracting knowledge from text — or primarily from text — and representing it in some way that is reasonably useful in answering natural language queries. The hope would then be to eventually achieve a rich enough knowledge base to support the Star Trek computer. But automated decision-making doesn’t just require knowledge; it also requires decision-making rules. And if Watson is significantly ahead of the 1980s decisioning state of the art (Rete, backward chaining, etc.), I’m not aware of how.

So if Watson is going to accomplish anything soon, it will probably be in areas where serious decision-making chops aren’t needed. Indeed, the application areas that I’ve seen mentioned for the past or near term are mainly:

  • Playing Jeopardy! That’s pretty simple from a decision-making standpoint.
  • Advising on treatments for a specific disease (not actually built yet). As noted above, that’s 1970s-level decisioning.
  • Knowledge extraction from medical research articles. That has very little to do with decisioning, and incidentally sounds a lot like what SPSS (before it was acquired by IBM) and Temis were already doing years ago.
  • Natural-language customer interaction. That may not involve any decisioning at all.

Returning to the point that Watson’s core technology is probably natural language, it seems fair to say that IBM these days is probably better at the text mining side than at speech understanding. Evidence I’m thinking of includes:

  • That seems to be what IBM itself is saying on its speech recognition page.
  • I also recall IBM’s natural language recognition projects being regarded as not going well in the late 1990s. (Project Penelope, I believe, although I can’t confirm that via googling.)
  • IBM’s LanguageWare sounded more oriented to text mining in 2008.
  • IBM bought SPSS, which had decent text mining technology.

And while this is too old to really count as evidence, IBM had a famously unsuccessful language recognition deal with Artificial Intelligence Corporation way back in 1983-4.*

*Yeah, I helped raise money for AICorp too, and also for Symbolics. As you might imagine, my investment banking trophies do not have pride of place on my desk.

One last observation — text mining has a very mixed track record. Watson will have to go far beyond predecessor text technologies to become nearly the big deal IBM is suggesting it will be.


11g AJAX Authentication for WebCenter Portal’s REST API and Content

WebCenter Portal’s REST API and WebCenter Content provide a great set of web services enabling you to create rich, interactive JavaScript components. You can see an example of this here - http://www.fishbowlsolutions.com/mobile - built via jQuery and UCM, as covered in the Client Side Ajax UCM Interaction blog post.

An issue you may have come across, if you don’t have SSO enabled, is authenticating against these services. This can be a problem if you are writing JavaScript widgets or hybrid mobile applications for WebCenter Portal that require authentication to access them.

You could present a popup requesting the user to re-authenticate; however, this isn’t ideal if the user has already authenticated with the portal to access your new JS components.

Read on to see the options available to you -

There are two options available if you don’t use SSO:

1) Enabling AJAX pre-authentication on the WebCenter Portal login page, which will store the authenticated session.
2) Setting up a trust service token and passing the authentication request with the token whenever you need to access the services once the user has authenticated.

 

1. Pre-authenticating against the REST API. 

1.1 Updating the login template for pre-auth.

On the login page, disable the submit event on the form that authenticates against WebCenter Portal.
Instead, when the user selects the login button -

1. Pass a base64 authentication request to the REST API via AJAX.
2. On a success response (store the REST API security token if needed)
3. Trigger the submit request to enable the form post to authenticate on WebCenter.

Here are some code samples for authenticating with either WebCenter Portal or Content via AJAX using jQuery -

WebCenter Portal AJAX REST API Authentication
(with OIT or Username & Password)

jquery.ajax({
	url: endpoint,
	dataType: 'json',
	beforeSend: function(xhr) {
		//if trust token not defined send base64 user/pass authentication
		if (FB.restInfo.trustServiceToken === undefined) {
			//pass user/password credentials
			if (username.length > 0) {
				//IE has no support for BTOA use crypto lib
				if (window.btoa === undefined) {
					xhr.setRequestHeader('Authorization', 'Basic ' + Crypto.util.bytesToBase64(Crypto.charenc.Binary.stringToBytes(username + ':' + password)));
				//all other browsers support BTOA
				} else {
					xhr.setRequestHeader('Authorization', 'Basic ' + window.btoa(username+':'+password));
				}
				return;
			}
		//else pass security token
		} else {
			//token auth
			xhr.setRequestHeader('Authorization', 'OIT ' + FB.restInfo.trustServiceToken); ///Obj path to Trust token value
		}

		//pass secure token
		xhr.withCredentials = true;
	},
	//Authentication Successful
	success: function(data) {
		//Process Resource Index Object in callback method
		//Store REST API Token
		callback(context);
	},
	//Authentication Failed
	error: function(request, status, error) {
		error_handler(callback)(request, status, error, endpoint);
	}
});

 

 WebCenter Content AJAX Authentication
(with OIT or Username & Password)

var params = {
	IdcService: 'PING_SERVER',
	IsJson: 1
};
//Authenticate with WebCenter Content 
jquery.ajax({ url: endpoint+ '/idcplg', data: params, dataType: 'json',
	//user already authenticated
	success: function(data) {
		callback(this);
	},
	//Error: connection or authorisation to WebCenter Content failed
	error: function(request, status, error) {
		//if trust token defined send OIT Auth Request
		if (FB.restInfo.trustServiceToken !== undefined) {
			//Authenticate Via OIT
			jquery.ajax({
				type: "GET", 
				url: '/adfAuthentication', //http://domain.com/adfAuthentication will auth OIT on WebCenter
				//setup request headers first
				beforeSend: function(xhr) {
					xhr.setRequestHeader('Authorization', 'OIT ' + FB.restInfo.trustServiceToken); //Obj path to Trust token value
					xhr.withCredentials = true;
				},
				//request successful
				success: function(data) {
					callback(this);
				},
				//issue with request
				error: function(request, status, error) {
					error_handler(callback)(request, status, error, endpoint+ '/idcplg');
				}
			});
		//else user/pass sent 
		} else {
			//Authenticate with user/pass
			jquery.ajax({
				type: "POST", 
				url: endpoint+'/login/j_security_check', 
				data: {
					j_username: 		username, 
					j_password: 		password, 
					j_character_encoding: 	'UTF-8'
				},
				//request successful
				success: function(data) {
					callback(this);
				},
				//issue with request
				error: function(request, status, error) {
					error_handler(callback)(request, status, error, endpoint+ '/idcplg');
				}
			});
		}
	}
});

The WebCenter Content Secure Token Auth requires authentication on http://domain.com/adfAuthentication.

You can also use this to authenticate against the Inbound Refinery (Conversion Server)

http://domain.com/ibr/adfAuthentication

And Universal Records Management

http://domain.com/urm/adfAuthentication

Whereas User/Pass Auth on the content server is requested via http://domain.com/cs/login/j_security_check.
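
If you talk to more than one of these servers, the OIT endpoints above can be wrapped in one small helper (a sketch reusing the request headers from the samples above; the function name is hypothetical) -

//Authenticate via OIT against the portal, IBR, or URM (sketch)
function oitAuthenticate(server, token, callback) {
	var paths = {
		portal: '/adfAuthentication',
		ibr:    '/ibr/adfAuthentication',
		urm:    '/urm/adfAuthentication'
	};
	jquery.ajax({
		url: paths[server],
		beforeSend: function(xhr) {
			xhr.setRequestHeader('Authorization', 'OIT ' + token);
			xhr.withCredentials = true;
		},
		success: callback
	});
}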

Here is a simple example of a WebCenter login page that makes an authentication request to the REST API first, before posting the form and logging into WebCenter Portal.

 

<!DOCTYPE html>
<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">

    <title>Login Auth Example</title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width">

</head>
<body>

<div id="FB-loginWrapper">
	<h2>Sign-in to your account</h2>

	<form name="PortalLoginForm" method="post" action="/webcenter/wcAuthentication">
		<input type="hidden" name="success_url" value="/webcenter/intranet_loginhandlerservlet"/>
		<input type="hidden" name="j_character_encoding" value="UTF-8">

		<div class="formField">
			<label>Username:</label>
			<input type="text" name="j_username" />
		</div>

		<div class="formField">
			<label>Password:</label>
			<input type="password" name="j_password" />
		</div>

		<input disabled="disabled" type="submit" />
	</form>
</div>

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
<script src="js/vendor/cryptojs.js"></script>

<script>
//Setup JS namespace
var FB = FB || {};
FB.Login = (function() {
	return {
		//on script init
		init: function() {
			//Setup page events
			this.events();

			//remove disabled attr on submit button.
			$('[name="PortalLoginForm"] [type="submit"]').prop('disabled',false); 
		},
		//setup page events
		events: function() {
			//on submit click initialise auth request
			$('[name="PortalLoginForm"]').submit(function() {
				//if data attribute true on form allow submit
				if ($(this).data('valid')) {
					return true;
				}

				//call REST API authentication method
				FB.Login.AuthRestAPI(
					'/rest/api/resourceIndex', 			//endpoint
					$('[name="j_username"]').val(), 	//User
					$('[name="j_password"]').val(), 	//Pass
					function() { //callback method
						$('[name="PortalLoginForm"]').data('valid', true).submit(); //enable form to submit
						$('[name="PortalLoginForm"] [type="submit"]').submit(); //submit form on AuthRestAPI success AJAX.
					}
				);

				//else stop form from submitting
				return false;
			});
		},
		//REST Auth AJAX method
		AuthRestAPI:function(endpoint, username, password, callback) {
			//make ajax req
			$.ajax({
				url: endpoint,
				dataType: 'json',
				beforeSend: function(xhr) {
					//older IE has no support for btoa(); use the CryptoJS lib instead
					if (window.btoa === undefined) {
						xhr.setRequestHeader('Authorization', 'Basic ' + Crypto.util.bytesToBase64(Crypto.charenc.Binary.stringToBytes(username + ':' + password)));
					//all other browsers support btoa()
					} else {
						xhr.setRequestHeader('Authorization', 'Basic ' + window.btoa(username + ':' + password));
					}

					//send cookies/credentials with the request
					xhr.withCredentials = true;
				},
				//Authentication Successful
				success: function(data) {
					//submit form via callback
					callback();
				},
				//Authentication Failed
				error: function(request, status, error) {
					alert('Authentication failed');
				}
			});
		}
	}
})();

//on DOM Loaded setup page
$(document).ready(function() {
	FB.Login.init();
});
</script>

</body>
</html>

 

2. Setting up the trust service security token
(Info to set up OIT).

I would recommend setting up the trust token; however, the base64 pre-login authentication above is easier and quicker to set up.

The trust token will be generated once the user has logged in.

2.1. Create keystore

a) cd /opt/oracle/jrmc-4.0.1-1.6.0/bin/
b) keytool -genkeypair -keyalg RSA -dname "cn=spaces,dc=domain,dc=com" -alias orakey -keypass myKeyPassword -keystore /opt/oracle/keystore/default-keystore.jks -storepass myKeyPassword -validity 1064
c) keytool -exportcert -v -alias orakey -keystore /opt/oracle/keystore/default-keystore.jks -storepass myKeyPassword -rfc -file /opt/oracle/keystore/orakey.cer
d) keytool -importcert -alias webcenter_spaces_ws -file /opt/oracle/keystore/orakey.cer -keystore /opt/oracle/keystore/default-keystore.jks -storepass myKeyPassword

2.2. Update jps-config.xml

a)

<serviceInstance name="keystore" provider="keystore.provider" location="/opt/oracle/keystore/default-keystore.jks">
	<description>Default JPS Keystore Service</description>
</serviceInstance>

b)

<propertySets>
	<propertySet name="trust.provider.embedded">
		... existing entries
		<property value="orakey" name="trust.aliasName"/>
		<property value="orakey" name="trust.issuerName"/>
	</propertySet>
</propertySets>

 

2.3. Update credential store

a) in WLST: /opt/oracle/middleware/Oracle_WC1/common/bin/wlst.sh
b) connect()
c) updateCred(map="oracle.wsm.security", key="keystore-csf-key", user="owsm", password="myKeyPassword", desc="Keystore key")
d) updateCred(map="oracle.wsm.security", key="enc-csf-key", user="orakey", password="myKeyPassword", desc="Encryption key")
e) updateCred(map="oracle.wsm.security", key="sign-csf-key", user="orakey", password="myKeyPassword", desc="Signing key")

2.4. Add TrustServiceIdentityAsserter.

a) Console -> Security Realms -> myrealm -> Providers -> New (create a provider of type TrustServiceIdentityAsserter)
b) Restart all

2.5. Configure Credential Store

a) in WLST: /opt/oracle/middleware/Oracle_WC1/common/bin/wlst.sh
b) connect()
c) createCred(map="o.webcenter.jf.csf.map", key="keygen.algorithm", user="keygen.algorithm", password="AES")
d) createCred(map="o.webcenter.jf.csf.map", key="cipher.transformation", user="cipher.transformation", password="AES/CBC/PKCS5Padding")

2.6. Test it against the REST API

a) http://www.domain.com/rest/api/resourceIndex

Once set up, create a bean to output the token into the page template as a JS object, i.e. ${fb_rtc_bean.trustServiceToken}, so that your JS AJAX requests can reuse it.

var FB = FB || {};
FB.restInfo = {
	username: 				'${securityContext.userName}',
	trustServiceToken:			'${fb_rtc_bean.trustServiceToken}',
	spaceName: 				'${spaceContext.currentSpaceName}',
	spaceGUID: 				'${spaceContext.currentSpace.metadata.guid}'
};

You can then use either of the AJAX authentication methods above with ${fb_rtc_bean.trustServiceToken}, reading the token from the JS object FB.restInfo.trustServiceToken.
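
For example, a follow-up REST call can reuse the token straight from FB.restInfo; a minimal sketch against the resourceIndex endpoint used earlier -

//Reuse the trust token from FB.restInfo on a follow-up REST call
$.ajax({
	url: '/rest/api/resourceIndex',
	dataType: 'json',
	beforeSend: function(xhr) {
		xhr.setRequestHeader('Authorization', 'OIT ' + FB.restInfo.trustServiceToken);
		xhr.withCredentials = true;
	},
	success: function(data) {
		console.log('Resource index for ' + FB.restInfo.username, data);
	}
});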

 

The post 11g AJAX Authentication for WebCenter Portals Rest API and Content appeared first on C4 Blog by Fishbowl Solutions.

Categories: Fusion Middleware, Other

Notes on memory-centric data management

DBMS2 - Fri, 2014-01-03 03:35

I first wrote about in-memory data management a decade ago. But I long declined to use that term — because there’s almost always a persistence story outside of RAM — and coined “memory-centric” as an alternative. Then I relented 1 1/2 years ago, and defined in-memory DBMS as

DBMS designed under the assumption that substantially all database operations will be performed in RAM (Random Access Memory)

By way of contrast:

Hybrid memory-centric DBMS is our term for a DBMS that has two modes:

  • In-memory.
  • Querying and updating (or loading into) persistent storage.

These definitions, while a bit rough, seem to fit most cases. One awkward exception is Aerospike, which assumes semiconductor memory, but is happy to persist onto flash (just not spinning disk). Another is Kognitio, which is definitely lying when it claims its product was in-memory all along, but may or may not have redesigned its technology over the decades to have become more purely in-memory. (But if they have, what happened to all the previous disk-based users??)

Two other sources of confusion are:

With all that said, here’s a little update on in-memory data management and related subjects.

  • I maintain my opinion that traditional databases will eventually wind up in RAM.
  • At conventional large enterprises — as opposed to for example pure internet companies — production deployments of HANA are probably comparable in number and investment to production deployments of Hadoop. (I’m sorry, but much of my supporting information for that is confidential.)
  • Cloudera is emphatically backing Spark. And a key aspect of Spark is that, unlike most of Hadoop, it’s memory-centric.
  • It has become common for disk-based DBMS to persist data through a “log-structured” architecture. That’s a whole lot like what you do for persistence in a fundamentally in-memory system.
  • I’m also sensing increasing comfort with the strategy of committing writes as soon as they’ve been acknowledged by two or more nodes in RAM.

And finally,

  • I’ve never heard a story about an in-memory DBMS actually losing data. It’s surely happened, but evidently not often.
Categories: Other

The How and Why of Integrating SharePoint with Oracle WebCenter in 13 Minutes

Integrating Microsoft SharePoint with Oracle WebCenter Content is more of a question of why than how. Integrations between the systems have existed for 6+ years now, and each of those has had its own set of integration points and technologies to make the integration work. However, companies need to first understand and agree on why they want to integrate the two systems. This starts with identifying the need or business problem that continues to persist without an integration.

Fishbowl Solutions has had an integration between the systems for three years. In that time, we have talked to hundreds of customers regarding their needs and business problems, including the disconnect between content created in SharePoint and getting that content into Oracle WebCenter. Here are the most common needs/business problems we have heard:

  • Lack of Governance over SharePoint use and what happens to orphaned sites and content
  • Difficulty surfacing high-value content created in SharePoint to Oracle-based websites, portals and business applications
  • Inability to selectively determine the SharePoint content items to store in WebCenter – based on version, site location, or business unit requirements

If your company has identified any of the problems above, then it has effectively answered the why question. However, companies should also take a look at their overall information governance strategy and how SharePoint and Oracle WebCenter fit into that strategy. For organizations that have answered the why and have also determined that Oracle WebCenter Content is THE repository for enterprise, mission-critical information, the how questions can be asked and answered as well.

This 13-minute overview presentation and demo addresses both questions and may be a good place to start in helping you and your organization define its information governance strategy:

For your convenience, here are the time slots for the use case demos of Fishbowl’s connector:

  • Content Publishing – 3:16 to 5:45
  • Project Lifecycle Governance – 5:46 to 7:58
  • Business Specific Storage Requirements – 7:59 to 10:45

Happy Holidays!

Jason Lamon is a product strategist and technology evangelist who writes about a range of topics regarding content management and enterprise portals. He writes to keep the communication going about such topics, uncover new opinions, and to get responses from people who are smarter than him. If you are one of those people, feel free to respond to his tweets and posts.

The post The How and Why of Integrating SharePoint with Oracle WebCenter in 13 Minutes appeared first on C4 Blog by Fishbowl Solutions.

Categories: Fusion Middleware, Other

Speaking at UKOUG Tech13 – UK Oracle User Group

I was fortunate to be accepted to present at this year’s UK Oracle User Group conference – “Developing and Integrating with Oracle Social Network”.
I showed how easy it is to integrate with OSN using its REST API, drawing on my previous experience at OSN’s Developer Challenge at Oracle OpenWorld 2012.

You can see more info and my original OOW12 entry – Mobile Integration with OSN here

I had a bit of fun with this year’s presentation, creating it for my iPad; this allowed me to move around whilst still having control of the presentation, making it more dynamic.
Above you can see me and my co-presenter David Rowe as I take a picture of the audience, which jumps across and into my presentation being projected.

For those interested, I created the iOS application using Cordova to build a hybrid app, with the reveal.js library for creating my slides in HTML5; to speed things up I used www.slid.es and exported the HTML package, which I imported into my Cordova project.

I then used an app called Reflector that enabled me to use AirPlay to mirror from the iPad to my laptop and onto the projector.
And then finally I used Connectify to quickly create a network from my laptop that my iPad could connect to, to enable AirPlay to work – this could have worked over the available wireless network, but I wasn’t taking any chances of things being blocked.

It worked out pretty well and raised a few smiles, although there were a few glitches – I’m blaming iOS 7, as I had recently upgraded and hadn’t fully tested on the new OS.

 

So, Tech13: as you can see below in the infographic, the event had just over a thousand attendees from over 28 countries, with 159 speakers and 200 exciting sessions across 4 days.
It was a great event and good fun networking and connecting with colleagues, other ACEs and the Oracle team. There were lots of interesting topics and sessions, from Fujitsu and their case studies with WebCenter to the Red Samurai team, who gave two great presentations on ADF – ADF Development Survival Kit – Essentials for ADF Developer & ADF Anti-Patterns: Dangerous Tutorials – Real Experience in ADF

I also got to play with Oracle’s Google Glass kit and got a glimpse at what has been happening behind the scenes – where they are taking new technology and products to enhance user experience and interaction.
With that said, Fishbowl Solutions also recently got a pair of Fishbowl-blue Google Glass kits for the innovation team to start looking into the future and experimenting, as we did with our mobile product line 5-6 years ago.

So be on the lookout for the first Oracle Partner Glass app ;)


The post Speaking at UKOUG Tech13 – UK Oracle User Group appeared first on C4 Blog by Fishbowl Solutions.

Categories: Fusion Middleware, Other

WebCenter Content: How to persist parameters across links ie &coreContentOnly=1

This is something new I came across thanks to a colleague, and I thought it would be good to share.

Recently I had an issue where I needed to hide the standard UCM header and footer, but allow users to navigate through the links available in the body.
(This process will also allow you to persist other params.)

If you add the parameter coreContentOnly=1 like this -

http://contentServer/cs/idcplg?IdcService=GET_DOC_PAGE&Action=GetTemplatePage&Page=HOME_PAGE&coreContentOnly=1

The header and footer are removed, leaving the body content available. However, on navigating – selecting a link or interacting with a form field – the coreContentOnly param is dropped, therefore displaying the header & footer on the next page.

In the past I’ve written components to handle this or done some magic on the web server; however, this is no longer needed!
There is a workaround to persist parameters -

By placing /_p/ after the ISAPI/CGI file name (usually idcplg), you can apply and persist variables.

http://contentServer/cs/idcplg/_p/?IdcService=…

This variable mapping can be found by opening the PersistentUrlKeys table.

Here are some of the mappings - 

min = coreContentOnly
cc = ClientControl

So to enable coreContentOnly=1 throughout you can use either approach -

http://contentServer/cs/idcplg/_p/min?IdcService=…

or

http://contentServer/cs/idcplg/_p/min-1?IdcService=…

On the first one I did not set -1, as 1 is the default assumed value; the dash is used as a separator for key/value.
If I wanted to add ClientControl or another variable, I can add the mapping in like this -

http://contentServer/cs/idcplg/_p/min/cc-queryselect?IdcService=…

Important - this mapping must exist in Content Server, in the SCRIPT_NAME environment variable, or it will not be persisted.
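
If you build these URLs from JavaScript, a small helper keeps the /_p/ segment consistent; a minimal sketch, assuming the min and cc mappings above (the helper itself is illustrative, not part of Content Server) -

//Hypothetical helper that builds a persistent /_p/ URL from a mapping object
//keys must exist in the PersistentUrlKeys table (e.g. min, cc)
function persistentUrl(server, mappings, query) {
	var segments = [];
	for (var key in mappings) {
		if (mappings.hasOwnProperty(key)) {
			//1 is the default assumed value, so 'min' alone means min-1
			segments.push(mappings[key] === 1 ? key : key + '-' + mappings[key]);
		}
	}
	return server + '/cs/idcplg/_p/' + segments.join('/') + '?' + query;
}

//e.g. persistentUrl('http://contentServer', { min: 1, cc: 'queryselect' },
//	'IdcService=GET_DOC_PAGE&Action=GetTemplatePage&Page=HOME_PAGE')
//returns 'http://contentServer/cs/idcplg/_p/min/cc-queryselect?IdcService=...'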

 

The post WebCenter Content: How to persist parameters across links ie &coreContentOnly=1 appeared first on C4 Blog by Fishbowl Solutions.

Categories: Fusion Middleware, Other

Pyramid of needs for Cloud management

William Vambenepe - Sun, 2013-04-21 14:21

[Image: cloud-needs – a pyramid of needs for Cloud management]

Categories: Other