
Feed aggregator

Spark: A Discussion

Greg Pavlik - Wed, 2014-07-23 08:36
A great presentation, worth watching in its entirety.

With apologies to my Hadoop friends but this is good for you too.

The Customer Experience

Steve Karam - Wed, 2014-07-23 07:00
[This entry is part 6 of 6 in the series Grow Your Career]

I’m going to kick this post off by taking sides in a long-standing feud.

Apple is amazing.

There. Edgy, right? Okay, so maybe you don’t agree with me, but you have to admit that a whole lot of people do. Why is that?

NOT part of the Customer Experience. Image from AppleFanSite.com

Sure, there’s the snarky few that believe Apple products are successful due to an army of hipsters with thousands in disposable income, growing thick beards and wearing skinny jeans with pipes in mouth and books by Jack Kerouac in hand, sipping lattes while furiously banging away on the chiclet keyboard of their Macbook Pro with the blunt corner of an iPad Air that sports a case made of iPhones. I have to admit, it does make for an amusing thought. And 15 minutes at a Starbucks in SoHo might make you feel like that’s absolutely the case. But it’s not.

If you browse message boards or other sites that compare PCs and Apple products, you’ll frequently see people wondering why someone would buy a $2,000 Macbook when you can have an amazing Windows 8.1 laptop with better specs for a little over half the price. Or why buy an iPad when you can buy a Samsung tablet running the latest Android which provides more freedom to tinker. Or why even mess with Apple products at all when they’re not compatible with Fragfest 5000 FPS of Duty, or whatever games those darn kids are playing these days.

Part of the Customer Experience. Image from cnet.com

The answer is, of course, customer experience. Apple has it. When you watch a visually stunning Apple commercial, complete with crying grandpas Facetiming with their newborn great-grandson and classrooms of kids typing on Macbook Airs, you know what to expect. When you make the decision to buy said Macbook Air, you know that you will head to the Apple Store, usually in the posh mall in your town, and that it will be packed to the gills with people buzzing around looking at cases and Beats headphones and 27″ iMacs. You know that whatever you buy will come in a sleek white box, and will be placed into a thick, durable bag with two drawstring cords that you can wear like a backpack.

When you get it home and open the box, it’s like looking at a Tesla Model S. Your new laptop, situated inside a silky plastic bed and covered in durable plastic with little tabs to peel it off. The sleek black cardboard wrapped around a cable wound so perfectly that there’s not a single millimeter of space between the coils, nor a plug out of place. The laptop itself will be unibody, no gaps for fans or jiggly CD-ROM trays or harsh textures.

All of which is to say, Apple provides an amazing customer experience. Are their products expensive, sometimes ridiculously so? Of course. But people aren’t just buying into the product, they’re buying into the “Apple life.” And why not? I’d rather pay for experiences than products any day. I may be able to get another laptop with better specs than my Macbook Pro Retina, but there will always be something missing. Not the same Customer Experience. Maybe the screen resolution isn’t quite so good, maybe the battery doesn’t last as long, or maybe it’s something as simple as the power cord coming wrapped in wire bag ties with a brick the size of my head stuffed unceremoniously into a plastic bag. The experience just isn’t there, and I feel like I’ve bought something that’s not as magnificent as the money I put into it, features and specs be damned.

Customer experience isn’t just a buzz phrase, and it doesn’t just apply to how you deal with angry customers or how you talk to them while making a sale. It also doesn’t mean giving your customer everything they want. Customer experience is the journey from start to finish. It’s providing a predictable, customer-centric, and enjoyable experience for a customer that is entrusting their hard-earned cash to your product. And it applies to every business, not just retail computer sellers and coffee shops. What’s more, it applies to anyone in a service-oriented job.

Customer Experience for IT Professionals

In a previous post I mentioned how important it is to know your client. Even if your position is Sub-DBA In Charge of Dropping Indexes That Start With The Letter Z, you still have a customer (Sub-DBA In Charge Of Dropping Indexes That Start With The Letters N-Z, of course). Not just your boss, but the business that is counting on you to do your job in order to make a profit. And you may provide an exceptional level of service. Perhaps you spend countless hours whittling away at explain plans until a five page Cognos query is as pure as the driven snow and runs in the millisecond range. But it’s not just what you do, but how you do it that is important.

I want you to try something. And if you already do this, good on you. Next time you get a phone call request from someone at your work, or have a phone meeting, or someone sends you a chat asking you to do something, I want you to send a brief email back (we call this an "ack" in technical terms) that acknowledges their request, re-lists what they need in your own words (and preferably with bullets), and lists any additional requirements or caveats. Also let them know how long it will take. Make sure you don’t underestimate; it’s better to quote too much time and get it to them early. Once you’ve finished the work, write a recap email. “As we discussed,” you might say, “I have created the five hundred gazillion tables you need and renamed the table PBRDNY13 to PBRDNY13X.” Adding, of course, “Please let me know if you have any other requests.”

If the task you did involves a new connection, provide them the details (maybe even in the form of a TNSNAMES). If there are unanswered questions, spell them out. If you have an idea that could make the whole process easier next time, run it by them. Provide that level of experience on at least one task you accomplish for your customer if you do not already, and let me know if you can tell it had any impact. Now do it consistently.

From what I’ve seen, this is what separates the “workers” from the “rockstars.” It’s not the ability to fix problems faster than a speeding bullet (though that helps, as a service that sells itself), but the ability to properly communicate the process and give people a good expectation that they can count on.

There’s a lot more to it than that, I know. And some of you may say that you lack the time to have this level of care for every request that comes your way. Perhaps you’re right, or perhaps you’re suffering from IT Stockholm Syndrome. Either way, just give it a shot. I bet it will make a difference, at least most of the time.

Conclusion

Recently, I became the Director of Customer Education and Experience at Delphix, a job that I am deeply honored to have. Delphix is absolutely a product that arouses within customers an eager want: it solves complex business problems, has an amazing delivery infrastructure in the Professional Services team, and provides top notch support thereafter. A solid recipe for Customer Experience if there ever was one. But it’s not just about the taste of the meal, it’s about presentation as well. And so it is my goal to continuously build an industrialized, scalable, repeatable, and enjoyable experience for those who decide to invest their dollars in what I believe to be an amazing product. Simply put, I want to impart on them the same enthusiasm and confidence in our product that I have.

I hope you have the chance to do the same for your product, whatever it may be.

The post The Customer Experience appeared first on Oracle Alchemist.

Javascript Driven ADF Taskflows for WebCenter Portal

This is a continuation of my previous post - Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay.

You can see a video of JIVE Forums integration with a JS taskflow vs. an ADF taskflow running in WebCenter Portal here -

Click here for hi-resolution

This post is aimed at web developers, designers, and marketing web teams who aren’t familiar with ADF and want to create reusable dynamic taskflows without the need to learn ADF or Java, providing interactive dynamic regions using Javascript, HTML and CSS with custom frameworks like jQuery designed not to conflict with the ADF JS environment.

Read on for a step-by-step run-through of creating JS-driven taskflows -

    1. You will need to download JDeveloper – I’m using JDev 11.1.1.7.0 for WebCenter Portal 11g where I will deploy my custom taskflow driven entirely with Javascript.
    2. Run through the following Oracle guide to set up your project to extend Portal (11.1.1.8.3) - Developing Components for WebCenter Portal Using JDeveloper
    3. Add new taskflow to library by right-clicking WebCenterSpacesExtensions and selecting “New…”
    4. Add ADF Task Flow (JSF)
    5. Name the xml file, leaving the Directory the JDev default
    6. Double click the new xml file and drag a View element into the diagram from the Component Palette
    7. Rename “view1” to “[taskflow name]View”.
    8. Double click the new view to create a page fragment.
      Update the directory and add \taskflows\[taskflow name]\view
      This will make it easier to sort through in the future when you develop more taskflows.
    9. Edit the JSFF and display code in source view.
    10. Replace with the following -
      <?xml version='1.0' encoding='UTF-8'?>
      <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
                xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
                xmlns:f="http://java.sun.com/jsf/core">
      <af:resource type="javascript">
      <![CDATA[
      /**
       * CREATE BASE JS CONTAINER OBJ
       * This is the base class to assist PSA javascript methods to init after the page has loaded.
       * You can add this script in the head of your template instead of the portlet.
       */
      var FB = window.FB || {},
      	Base = Base || (function() {
      		return {
      			//create multi-cast delegate.
      			onPortalInit: function(function1, function2) {
      				return function() {
      					if (function1) {
      						function1();
      					}
      					if (function2) {
      						function2();
      					}
      				}
      			},
      			//used for chaining methods
      			chainPSA: function() {}
      		}
      	})();
      
      //Use Base method if FB.Base hasn't been created
      FB.Base = FB.Base || Base;
      /************************/
      
      
      
      
      /**
       * CREATE CHAIN WRAPPER
       * Chain method will initialise from Base requirejs core script
       */
      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      	//set base mustache template name to load and inject
      	var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};
      	
      	//check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;
      
      });
      /************************/
      ]]>
      </af:resource>
      
      
      <!-- Sample template will be injected here -->
      <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>
      <!-- xSample template will be injected here -->
      
      
      </jsp:root>

      OVERVIEW:

      This is where the mustache template will be injected to provide the sample component functionality.

    11. <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>

      The oConstructor specifies the configuration of the component to inject.
      vTemplate points to a JS file that requireJS imports and configures the base multiUploader components from the params defined.

      oParams contains all configuration for the app; at the moment these are scoped params associated with the taskflow that you can allow the user to define and use within your sample component as a JS var.

      var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};

      A simple check to see if other components exist on the page, appending the new component to the JS array "portletUIDList" with a JS object holding the component params in "portlets".

      //check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;

      Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialise once RequireJS has loaded in all of its core base libraries, like jQuery.

      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      
      //code
      
      });

      Make sure that within your ADF template you have set up the RequireJS core and have the following to initialise FB.Base.chainPSA and loop through the custom taskflows to display on the page -

      //load JS Components
      		if (FB.Base.chainPSA) {
      			FB.Base.chainPSA();
      		}

      //loop and request all templates required
      			for (x;x<lPortletList;x++) {
      				var vPortletUID 	= aPortletList[x],
      					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
      				
      				//define temp object info to pass into script when init	
      				define('temp'+x, oPortlet);
      				
      				//request and initialise portlet template & pass params
      				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
      					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
      					tpl.init(oPortlet);
      				});
      			}

    12. To add taskflow parameters, open the xml file again.
    13. Select Overview tab bottom left of the screen.
      Select the Parameters side tab.
      Add the following four example params -
      You will see these when we add and edit the taskflow to a portal page in WebCenter Composer.
    14. Deploy the taskflow to WebCenter Portal following the last steps in the Oracle guide. Once the new taskflow / spaces extension project has been deployed, load WebCenter Portal.
      The following screenshots are from PS5; the UI has changed in PS7, but you should be able to work out the differences.
    1. Go into administration area of the portal and select the “Resources” Tab
    2. Select the “Resource Catalogs” from the items on the left under the “Structure” heading.
      A list of Resource Catalogs will be available. You can create a new one or use an existing one. Make sure the one you are updating is the one being used by the portal you want to add the taskflow into.
    3. Select the resource catalogue and choose Edit from the Edit menu drop-down.
    4. A window will appear where you can add folders and choose where you want your components to appear.
      I have created a Demo Taskflow folder.
    5. Select “Add From Library” from the Add dropdown menu.
    6. Drill into Taskflows and add your [Taskflow] – I am adding the sample taskflow I created earlier.
    7. Go into your portal, create a new page, and add the new taskflow.
      Here is an example of the Jive Forums that I recreated as a JS driven taskflow.
    8. And the final output of the taskflow on the page.

The post Javascript Driven ADF Taskflows for WebCenter Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay

So over the last couple of months I’ve been thinking and tinkering with code, wondering, “What’s the best approach for creating WebCenter Content (WCC) components that I can consume and reuse across multiple platforms and environments?”
Is it pagelet producer or maybe an iFrame? These solutions just weren’t good enough or didn’t allow the flexibility I really wanted.

I needed a WCC solution that could easily be consumed into mobile, either Cordova (hybrid app) or ADF Mobile (AMX views), and that worked on different devices/platforms as well as on any enterprise app, i.e. Sharepoint (.Net), Liferay, WebCenter Portal (ADF), or even consumed into the new WebCenter Content ADF WebUI. It also needed to provide the added advantage that there would not need to be multiple branches of code or redevelopment of the component for each platform and environment.

And in the famous words of Victor Frankenstein.. “It’s Alive!!”

After tinkering around and trying different approaches, this is the solution I created to support the above model.
I’m not saying this is the right approach or supported by the enterprise vendors, but an approach that is reusable and can work on all enterprise apps.

 


Here’s a quick video of a drag/drop MultiUploader component I created for WebCenter Content Classic that I can reuse on .Net and ADF WebCenter Portal/Content as well as mobile.

Read on to find out more on how this was achieved.

1) First, I’m going to dig into WebCenter Content and explain the underlying structure of the component.

To create a flexible base model, I created a light Javascript framework, very similar to AngularJS or ReactJS.

This would be the base component that would enable additional components on the page with the use of Mustache (JS templates) to drive and inject dynamic functional areas of content into a specified DOM node by ID or className.
Any changes of layout within the component are handled via an AJAX request to a cached mustache template which updates the DOM when needed (similar to ADF’s PPR). Any user interaction is handled through event-driven actions from the imported templates.

RequireJS is used to supply a flexible module-loading framework, so I do not need to worry about conflicts between JS libraries; it is also used to load in mustache templates and additional JS functionality when needed.

You’re probably thinking that there are going to be a lot of AJAX requests going back and forth and it’s going to be slow. Just check out the video – the answer is not really. The mustache templates are going to be smaller than average images you load on a page.

So as an example for the MultiUploader, I only have 1 mustache template that is 9kb. All interaction is handled by 2 JS files that are 39kb uncompressed.

2) As mentioned, a base model WCC component, “FishbowlModuleLoader”, will load in and initiate all other components on the page and will only load and cache required templates and JS files as and when needed. There is no point in loading all templates and JS functionality on a page if they are not needed; loading on demand improves the performance and interaction of the component.

3) Following is a quick overview of how the WCC component “FishbowlMultiUploader” works.

WebCenter Content Resource Asset

This is the base structure of the Content Component configuration, “fb_multi_upload_page_body”. It is consumed into a custom template, “MULTI_UPLOAD_PAGE”, which is requested via a custom service request, “?IdcService=GET_FB_MULTI_UPLOAD_PAGE”.

<!--
Name:           fb_multi_upload_page_body
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Page Body for Multi Checkin used in MULTI_UPLOAD_PAGE template
-->
<@dynamichtml fb_multi_upload_page_body@>
[[% FB fb_multi_upload_page_body Template body MULTI_UPLOAD_PAGE %]]

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

<script>
/**
 * CREATE CHAIN WRAPPER
 * Chain method will load from Base ModuleLoader requirejs core script
 */
FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//set base mustache template name to load and inject
	var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};
	
	//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUIDList.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;
});
/************************/
</script>


<@end@>

This is where the mustache template will be injected to provide the multiUpload component functionality.

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

The oConstructor specifies the configuration of the component to inject.
vTemplate points to a JS file that requireJS imports and configures the base multiUploader components from the params defined.

oParams contains all configuration for the app; at the moment, these are mostly hard coded, but could be defined as iDoc Variables when you install and enable the component within WCC.

var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};

This is a simple check to see if other components exist on the page, appending the new component to the JS array "portletUIDList" with a JS object holding the component params in "portlets".

//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUIDList.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;

Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialize once RequireJS has loaded in all of its core base libraries, like jQuery.

FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {

//code

});

 

4) So let’s take a look at how the base component “FishbowlModuleLoader” works.

Essentially, this defines the FB.Base.chainPSA chain wrapper method in the header – it does not need jQuery or any other library.

<!--
Name:           std_html_head_declarations
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Add required header resources
-->
<@dynamichtml std_html_head_declarations@>
[[% FB std_html_head_declaration Update head add JS libs for module loader %]]

<$include super.std_html_head_declarations$>

<script>
/**
 * CREATE BASE JS CONTAINER OBJ
 * DO NOT ADD JQUERY; this is the base class to assist PSA javascript methods to init after the page has loaded.
 */
var FB = window.FB || {},
	Base = Base || (function() {
		return {
			//create multi-cast delegate.
			onPortalInit: function(function1, function2) {
				return function() {
					if (function1) {
						function1();
					}
					if (function2) {
						function2();
					}
				}
			},
			//used for chaining methods
			chainPSA: function() {}
		}
	})();

//Use Base method if FB.Base hasn't been created
FB.Base = FB.Base || Base;
/************************/
</script>

<@end@>

You could cache this and put it in a script file; I’ve just put it inline to make it easier for you to read.

In the footer, we define RequireJS and the configuration that loads in the base libraries we need for all components, i.e. jQuery and maybe a few others.
We also set up fb.core.js as our base script to import and load in the core framework I built to handle routing and DOM event interaction as well as global vars.

<!--
Name:           std_page_end
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Component Module Loader RequireJS setup
-->
<@dynamichtml std_page_end@>
[[% FB std_page_end Add Module Loader RequireJS lib %]]

<$include super.std_page_end$>


<!-- Init FB Component Module Loader -->
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/core/config.js"></script>
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/libs/requirejs/require.min.js" data-main="fb.core"></script>
<!-- Init FB Component Module Loader -->
<@end@>

fb.core.js is where the magic begins:

// REQUIREJS Base configuration
require([
	//Dom ready req plugin
	'domReady',
	
	
	//core 
	'import/Layout',
	'import/Action',
	'import/Navigation',
	'import/Global',
	
	
	//Plugins
	'Moment',		//date plugin momentjs
	'ftlabsFastClick', 	//fix touch 300ms delay
	'fb'			//fb global methods
	

	
], function(domReady, Layout){
console.info('[ALL MODULES LOADED]');

	domReady(function() {
		console.info('[DOM READY]');
		
		//initialise layout DOM events ie click, touch etc.
		Layout.init();
		
		//load JS Components
		if (FB.Base.chainPSA) {
			FB.Base.chainPSA();
		}
		
		//check if any JS driven template containers exist
		if (typeof(FB.loadTemplate) !== 'undefined') {
			var aPortletList 	= FB.loadTemplate.portletUIDList,
				lPortletList 	= aPortletList.length,
				x 				= 0;
				
			//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}
		}
		
	});
	
});

Once the DOM has fully loaded, FB.Base.chainPSA() is initiated. This sets up and configures the FB.loadTemplate object that contains all information associated with the required components that will need to be loaded into the page.

Here we loop through and load in all templates, and pass across the component configuration to the templates to be initialized:

//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}

And that’s all there is to it.

5) Let’s dig into WebCenter Portal now. How can you reuse all that code you’ve written for WebCenter Content Classic within ADF?

Easy: create a JS-driven taskflow template that we can dump into the resource catalogue and drag, drop, and reuse throughout any page wherever it is needed.

I’ve created a new post for this part:
Read on here to find out how to create JS Driven Taskflow templates.

 

Some gotchas -

Some things to think about if you do decide to use this approach.

  1. You will need to make sure that all AJAX requests are made on the same domain,
    1. or enable CORS from UCM to accept requests cross-domain (mobile works cross-domain).
  2. WCC needs to be accessible by the user’s browser.
    1. You can set up a proxy service and only allow access to the custom services you require, to lock down other UCM environment access if needed.

And finally, one thing that comes to mind here: I am using static mustache templates, but there is nothing stopping you from creating a custom WCC service to generate mustache templates with embedded iDoc if you want.

The post Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Teradata bought Hadapt and Revelytix

Curt Monash - Wed, 2014-07-23 02:29

My client Teradata bought my (former) clients Revelytix and Hadapt.* Obviously, I’m in confidentiality up to my eyeballs. That said — Teradata truly doesn’t know what it’s going to do with those acquisitions yet. Indeed, the acquisitions are too new for Teradata to have fully reviewed the code and so on, let alone made strategic decisions informed by that review. So while this is just a guess, I conjecture Teradata won’t say anything concrete until at least September, although I do expect some kind of stated direction in time for its October user conference.

*I love my business, but it does have one distressing aspect, namely the combination of subscription pricing and customer churn. When your customers transform really quickly, or even go out of existence, so sometimes does their reliance on you.

I’ve written extensively about Hadapt, but to review:

  • The HadoopDB project was started by Dan Abadi and two grad students.
  • HadoopDB tied a bunch of PostgreSQL instances together with Hadoop MapReduce. Lab benchmarks suggested it was more performant than the coyly named DBx (where x=2), but not necessarily competitive with top analytic RDBMS.
  • Hadapt was formed to commercialize HadoopDB.
  • After some fits and starts, Hadapt was a Cambridge-based company. Former Vertica CEO Chris Lynch invested even before he was a VC, and became an active chairman. Not coincidentally, Hadapt had a bunch of Vertica folks.
  • Hadapt decided to stick with row-based PostgreSQL, Dan Abadi’s previous columnar enthusiasm notwithstanding. Not coincidentally, Hadapt’s performance never blew anyone away.
  • Especially after the announcement of Cloudera Impala, Hadapt’s SQL-on-Hadoop positioning didn’t work out. Indeed, Hadapt laid off most or all of its sales and marketing folks. Hadapt pivoted to emphasize its schema-on-need story.
  • Chris Lynch, who generally seems to think that IT vendors are created to be sold, shopped Hadapt aggressively.

As for what Teradata should do with Hadapt:

  • My initial thought about Hadapt was to just double down, pushing the technology forward, presumably including a columnar option such as the one Citus Data developed.
  • But upon reflection, if it made technical sense to merge the Aster and Hadapt products, that would be better yet.

I herewith apologize to Aster co-founder and Hadapt skeptic Tasso Argyros (who by the way has moved on from Teradata) for even suggesting such heresy. :)

Complicating the story further:

  • Impala lets you treat data in HDFS (Hadoop Distributed File System) as if it were in a SQL DBMS. So does Teradata SQL-H. But Hadapt makes you decide whether the data is in HDFS or the SQL DBMS, and it can’t be in both at once. Edit: Actually, see Dan Abadi’s comments below.
  • Impala and Oracle’s new SQL-H competitor have daemons running on every data node. So does one option in Hadapt. But I don’t think SQL-H does that yet.

I was less involved with Revelytix than with Hadapt (although I’m told I served as the “catalyst” for the original Teradata/Revelytix partnership). That said, Teradata — like Oracle — is always building out a data integration suite to cover a limited universe of data stores. And Revelytix’ dataset management technology is a nice piece toward an integrated data catalog.


EID Holidays and things to do

Syed Jaffar - Wed, 2014-07-23 02:07
Looking forward to a much anticipated 9-day EID holiday break to complete the to-do list which I have been carrying for a while now. I am determined to complete some of the writing assignments that I have kept pending for a long time. At the same time, I will have to explore the possibilities of the new features of v12.1.0.2 and Exadata, as we might be going for that combination in the coming weeks for a Data Warehouse project.

Will surely blog about my test scenarios and will share the inputs on Oracle 12c new features.

I wish everyone a very happy and prosperous EID in advance.

12c Threaded Execution Test

Bobby Durrett's DBA Blog - Tue, 2014-07-22 17:39

I did a quick check of some facts I’m studying about Oracle 12c and its new threaded execution mode.  I set this parameter:

alter system set THREADED_EXECUTION=true scope=spfile;

I had to connect SYS as SYSDBA with a password to get the system to bounce.

Then it had these processes only:

oracle    1854     1  0 09:17 ?        00:00:00 ora_pmon_orcl
oracle    1856     1  0 09:17 ?        00:00:00 ora_psp0_orcl
oracle    1858     1  2 09:17 ?        00:00:00 ora_vktm_orcl
oracle    1862     1  3 09:17 ?        00:00:00 ora_u004_orcl
oracle    1868     1 99 09:17 ?        00:00:17 ora_u005_orcl
oracle    1874     1  0 09:17 ?        00:00:00 ora_dbw0_orcl

This differs from some of my 12c OCP study material but agrees with the manuals.  Only pmon, psp, vktm, and dbw have dedicated processes.

Also, I found that I needed this value in the listener.ora:

dedicated_through_broker_listener=on

I needed that value to connect using a thread.  Before I put that in it spawned a dedicated server process when I connected over the network.
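
A quick way to double-check this is the EXECUTION_TYPE column that 12c added to V$PROCESS.  The queries below are just a sketch of that kind of check:

-- count processes by execution type (PROCESS or THREAD)
select execution_type, count(*)
from v$process
group by execution_type;

-- and for your own session
select p.spid, p.stid, p.execution_type
from v$process p, v$session s
where p.addr = s.paddr
and s.sid = sys_context('userenv','sid');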

Lastly, contrary to what I had read I didn’t need to set the local_listener parameter to get the new connections to use a thread:

SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------
local_listener                       string

- Bobby

Categories: DBA Blogs

REGEXP_LIKE: strange unspecified value in parameter “modifier”

XTended Oracle SQL - Tue, 2014-07-22 15:05

Today I noticed a strange thing in the predicate section of an execution plan for a simple query with regexp_like, where the 3rd parameter "MODIFIER" was not specified:

SQL> select * from dual where regexp_like(dummy,'.');

D
-
X

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID  97xuqf9cmjsta, child number 0
-------------------------------------
select * from dual where regexp_like(dummy,'.')

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( REGEXP_LIKE ("DUMMY",'.',HEXTORAW('F07FD85CFF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              0010000000000000001880D85CFF07000002000000000000000000000081000000') ))


20 rows selected.

It is particularly interesting that the values in HEXTORAW() are always different for different first parameters:

SQL> select * from dual where regexp_like(dummy,'x');
...
   1 - filter( REGEXP_LIKE ("DUMMY",'x',HEXTORAW('3895D330FF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              0011000000000000006895D330FF07000002000000000000000000000081000000') ))
SQL> select * from dual where regexp_like(dummy,'y');
...
   1 - filter( REGEXP_LIKE ("DUMMY",'y',HEXTORAW('00DA3C3FFF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              00110000000000000030DA3C3FFF07000002000000000000000000000081000000') ))
SQL> select * from dual where regexp_like(dummy||'','x')
...
   1 - filter( REGEXP_LIKE ("DUMMY"||'','x',HEXTORAW('70964F2FFF0700006A
              111645010000000000000000000000FC1216450100000000000000000000000000000000
              0000001100000000000000A0964F2FFF07000002000000000000000000000081000000')
               ))

I don’t know, what does it mean, but it looks like garbage from memory.
When I noticed this, I decided to check how regexp_like will work in function-based indexes:

SQL> create table xtest as
  2    select dummy||level as str
  3    from dual
  4    connect by level<=30;

Table created.

SQL> select * from xtest where case when regexp_like(str,'1') then 1 end = 1;
...
12 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
SQL_ID  7ztp0k8c1zn2h, child number 0
-------------------------------------
select * from xtest where case when regexp_like(str,'1') then 1 end = 1

Plan hash value: 4207139086

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| XTEST |    12 |   264 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(CASE  WHEN  REGEXP_LIKE
              ("STR",'1',HEXTORAW('68F9CB32FF0700006A111645010000000000000000000000FC1
              216450100000000000000000000000000000000000000110000000000000098F9CB32FF0
              7000002000000000000000000000081000000') ) THEN 1 END =1)

SQL> create index xtest_fbi on xtest(case when regexp_like(str,'1') then 1 end);

Index created.

SQL> select * from xtest where case when regexp_like(str,'1') then 1 end = 1;
...
12 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
SQL_ID  7ztp0k8c1zn2h, child number 0
-------------------------------------
select * from xtest where case when regexp_like(str,'1') then 1 end = 1

Plan hash value: 1479471124

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| XTEST     |    12 |   300 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | XTEST_FBI |    12 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("XTEST"."SYS_NC00002$"=1)

SQL> select column_expression from user_ind_expressions e where e.index_name='XTEST_FBI';

COLUMN_EXPRESSION
-----------------------------------------------------------------------------------------
CASE  WHEN  REGEXP_LIKE ("STR",'1') THEN 1 END

As you can see, it works fine, although the predicate from the first execution plan differs from the FBI expression.
Then I dumped a 10053 trace and noticed that the HEXTORAW(…) function appeared in the "Explain Plan Dump" section only, so it looks like just a plan output bug.

Categories: Development

Oracle 12c Security - SQL Translation and Last Logins

Pete Finnigan - Tue, 2014-07-22 14:20

There have been some big new security items added to 12cR1, such as SHA2 in DBMS_CRYPTO, code-based security in PL/SQL, Data Redaction, unified audit or even privilege analysis, but also, as I hinted in some previous blogs, there are....[Read More]

Posted by Pete On 31/07/13 At 11:11 AM

Categories: Security Blogs

Announcement: Oracle Database 12.1.0.2 available

Jean-Philippe Pinte - Tue, 2014-07-22 13:53
Oracle Database 12.1.0.2 is available for download!

More information:

Oracle Database 12.1.0.2.0 – New Features

Marco Gralike - Tue, 2014-07-22 11:21
The world around us is changing and new stuff in the Oracle database arena is nowadays released on patch level. Although not many new features...

Read More

Macros and parameter passing macros with #goldengate

DBASolved - Tue, 2014-07-22 10:04

Replicating data can be a time-consuming process to set up.  Fortunately, Oracle GoldenGate provides a few tools to help ease the complexity of setup.  One of these tools is the "macro".  Macros are used to simplify and automate the work associated with setting up and replicating data.  So what exactly is a macro?  Oracle defines a macro as:

A macro is a built-in automation tool that enables you to call a stored set of processing steps from within the Oracle GoldenGate parameter file.

In a nutshell, a macro is a stored set of commands that are used on a frequent basis, consisting of parameters for simple to complex series of substitutions, calculations or conversions.  Macros may be written inline in the parameter file or stored in a macro library.

What this post will show you is how to add a macro to a library, accept parameters, and then pass the parameters within the parameter file during replication.

The first thing that needs to be done is to set up a standard directory to contain all the macro files.  A directory called "dirmac" needs to be created in the OGG_HOME.

Note: The “dirmac” directory is something that you will need to create with the following command “mkdir -p $OGG_HOME/dirmac”.  A macro directory can be created anywhere you like, I personally try to keep all OGG items together.

In the macro directory, create a file to use as the macro library.  Ideally, you should have a file for each type of process running in that $OGG_HOME.  Once the file is created, then edit the library file and add the macros desired.


> mkdir -p $OGG_HOME/dirmac
> cd $OGG_HOME/dirmac
> touch <library_name>.mac 
> vi <library_name>.mac 

When the macro library file is open for editing, add the macros that are desired.  Remember, a macro library can house more than one macro.  Below you will see two examples.  The first example sets tokens that can be called.  The second example sets up a macro to map tokens and header information to a specific table.


—Example 1— 

MACRO #src_csn_ts
BEGIN
SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP')
END;

MACRO #src_icnt
BEGIN
ICNT = @GETENV('STATS', 'INSERT')
END;

MACRO #src_ucnt
BEGIN
UCNT = @GETENV('STATS', 'UPDATE')
END;

MACRO #src_dcnt
BEGIN
DCNT = @GETENV('STATS', 'DELETE')
END;

MACRO #src_dmlcnt
BEGIN
DMLCNT = @GETENV('STATS', 'DML')
END;

—Example 2— 

MACRO #hb_mappings
PARAMS (#src_schema)
BEGIN
MAP #src_schema.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);
END; 

Notice in example 2 the PARAMS statement.  When using macros this can get a bit confusing since defining a macro uses the hash mark (#) and parameters in the PARAMS statement use the hash mark (#) as well.  Also notice that the parameter #src_schema is used in the MAP statement in the macro.  This is how the value for #src_schema is passed into the macro from the parameter files.
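
Note that the macro assumes the target table SCOTT.GG_REP_OP_STATUS already exists on the target database.  A rough sketch of what its DDL could look like, with the column names taken from the COLMAP above and datatypes that are only my assumptions:

create table scott.gg_rep_op_status (
  src_db_schema       varchar2(30),
  gg_replicat_name    varchar2(30),
  tgt_last_update_dt  timestamp(6),
  src_csn_ts          varchar2(40),  -- commit timestamp string from the GGHEADER token
  icnt                number,
  ucnt                number,
  dcnt                number,
  dmlcnt              number
);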

Now, let's take a look at a parameter file.

In my test environment, I have the following processes running:


GGSCI (oel.acme.com) 1> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
JAGENT STOPPED
EXTRACT RUNNING EXT 00:00:09 00:00:09
EXTRACT RUNNING PMP 00:00:00 00:00:07
REPLICAT RUNNING REP 00:00:00 00:00:04

Taking a look at the extract parameter file (capture side): in order to use the macros in example 1, add to the parameter file an INCLUDE statement that references the macro library.  Then, in the TABLE statement, the macros for the tokens can be referenced in the TOKENS clause.


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT EXT

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle Login
USERID ggate, PASSWORD ggate

--Warns for a long running transaction
WARNLONGTRANS 1h, CHECKINTERVAL 30m

--Trace process info
--TRACE ./dirrpt/trace_ext.trc

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Ignore transactions for golden gate user
TRANLOGOPTIONS EXCLUDEUSER GGATE

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

INCLUDE ./dirmac/ops_info.mac

--Table Mappings
TABLE SCOTT.RANDOM_VALUES, TOKENS(#src_csn_ts(), #src_icnt(),#src_ucnt(),#src_dcnt(),#src_dmlcnt());

Once the extract parameter file is updated, then the extract needs to be restarted. Upon restart of the extract, keep an eye out and make sure the process doesn’t abend.  The VIEW REPORT command can be used to check the report file during startup.  Once the extract starts, you can see how the macro is used and expanded in the parameter file.  From the extract side, the needed info will be captured and placed in the associated trail files.

In the middle, the pump really doesn’t need any changes or restarting.

Now on the replicat (apply) side, in order to use the macro defined in example 2 (above), the replicat parameter file needs to be edited to include the macro library and statements to call the macros.  The next code block shows the contents of my replicat parameter file.


--Specifies the name of the replicat load process.
REPLICAT REP

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle login.
USERID ggate, PASSWORD ggate

--suppress triggers - enable for 11.2.0.2 or later
--DBOPTIONS SUPPRESSTRIGGERS

ALLOWDUPTARGETMAP

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Tracing info
--TRACE ./dirrpt/trace_rep.trc

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 200

--Resolves the TARGETs to be replicated during process start-up
--WILDCARDRESOLVE IMMEDIATE

--Specify error handling rules:
REPERROR(default, discard)
REPERROR(default2, discard)

--Table Mappings
INCLUDE ./dirmac/ops_info.mac

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);

You will notice that I have included the INCLUDE statement to call the macro library.  Then the macro (example 2) that does the table mapping for the desired information can be accessed using #hb_mappings() (the last line of the parameter file example).  Passing the schema name is as simple as placing it between the parentheses.  As the example above shows, I'm passing SCOTT as the schema I want to use.

Upon restart of the replicat, by looking at the report (VIEW REPORT), I can see where the macro library is read and how the macro is translated into a map statement for the replicat to use.


— Report Output (summarized) — 

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);
MAP SCOTT.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);

After the replicat has restarted, the table in the mapping statement can be checked to see if any data was inserted (SCOTT.GG_REP_OP_STATUS).  Image 1 below shows the output of the data I requested to be replicated using the macro.

Image 1:
image1_macro_params.png

This should have shown you a way to use macros within your replication environment.

Enjoy!!!

http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

Beyond In-Memory, what's new in 12.1.0.2 ?

Yann Neuhaus - Tue, 2014-07-22 09:53

It's just a patchset. The delivery that is there to stabilize a release with all the bug fixes. But it comes with a lot of new features as well. And not only the one that has been advertised as the future of the database. It's a huge release.

Let's have a look at what's new.

First, it seems that it will be the only patchset for 12.1.

Then, there is that In-Memory option awaited for a while. There have been some demos done by Larry Ellison on Exadata or even on the Oracle SPARC M6. Of course, if you have 32 TB of memory, we can understand the need for In-Memory optimized storage. For a more real-life usage of that option, stay tuned on our blog. We investigate the features in the context of our customer concerns, to fit their needs. For example, In-Memory addresses cases where some customers use Active Data Guard to offload reporting/real-time analytics to another server. But unfortunately In-Memory is not populated on a physical standby. We probably have to wait for 12.2 for that.

In-Memory is an option, so available only in Enterprise Edition.
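
Just to give an idea of what enabling it looks like, here is a minimal sketch (the pool size and table name are only placeholders):

alter system set inmemory_size=4G scope=spfile;  -- static pool in the SGA, needs an instance restart
alter table sales inmemory priority high;        -- mark the table for population in the column store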

There are other new features related to large memory. There is a part of the buffer cache dedicated to big tables (you just set the percentage) to be cached for In-Memory Parallel Query. And there is also a mode where the whole database is kept in the buffer cache. About performance and Parallel Query, a new transformation has been introduced to optimize the group by operation when joining a fact table to dimensions.
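
As a quick sketch of both (the percentage is arbitrary):

-- dedicate 40% of the buffer cache to big table scans (In-Memory Parallel Query)
alter system set db_big_table_cache_percent_target = 40;

-- or keep the whole database in the buffer cache
shutdown immediate
startup mount
alter database force full database caching;
alter database open;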

The second new feature is the range-partitioned hash cluster. Oracle CLUSTER segments are a very old feature but not widely used. A hash cluster is the fastest way to access a row because the key can be directly transformed into a rowid. Unfortunately, maintenance is not easy, especially when the volume increases. And we have partitioning, which is the way to ease maintenance of growing tables, but until today we couldn't partition a hash cluster. I mean, not in a supported way, because Oracle uses it on SPARC for the TPC benchmarks - applying a specific patch (10374168) for it.

Well, the good news is that we can finally partition hash clusters with the simple syntax:

create cluster democ1 (sample_time timestamp,sample_id number)
hashkeys 3600 hash is sample_id size 8192
partition by range (sample_time) (
partition P12 values less than( timestamp'2014-04-26 12:00:00' )
... 

Another nice feature is Attribute Clustering. Lots of other RDBMSs have the ability to arrange rows, but Oracle puts an insert anywhere in a heap table, depending only on where some free space is left. The alternative is an IOT, of course. But it can be good to cluster rows on one or several columns: it's better for index access, better for cache efficiency, better for storage indexes (or in-memory min/max), for ILM compression, etc. We can finally do it, and I'll blog soon about that.

Attribute Clustering is not an option, but available only in Enterprise Edition.
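As a teaser, a minimal sketch of the syntax (table and column names are made up):

-- keep rows for the same customer physically close together
create table sales_ac (
  sale_id   number,
  cust_id   number,
  sale_date date
)
clustering by linear order (cust_id, sale_date);
-- note: the clustering is applied on direct-path loads and on alter table ... move,
-- not on conventional single-row inserts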

 

I think those two features are my favorite ones, because the best optimization we can do without refactoring the application design is to place data in the way it will be retrieved.

 

The trend today is to store unstructured data as JSON. XML was nice, but it's verbose. JSON is easier to read, and even PostgreSQL can store JSON in its latest version. So Oracle has it in 12.1.0.2: you can store and index it. Once again, stay tuned on this blog to see how it works.
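As a minimal sketch of what that looks like (table, column and data are made up):

create table orders (
  id   number primary key,
  doc  varchar2(4000) check (doc is json)
);

insert into orders values (1, '{"customer":"SCOTT","amount":42}');

-- simple dot-notation access, possible thanks to the IS JSON check constraint
select o.doc.customer from orders o;
-- or with the SQL/JSON functions
select json_value(doc, '$.amount') from orders;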

Something important was missing in Oracle SQL: how do you set up a read-only user? You grant only the SELECT privilege? But that's too much, because with the SELECT privilege a user can lock a table (with LOCK TABLE or SELECT FOR UPDATE). So we now have a READ privilege to prevent that. That's my favorite new feature for developers.
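A minimal sketch (the grantee is made up):

grant read on scott.emp to app_reader;
-- app_reader can query scott.emp but, unlike with grant select,
-- cannot issue lock table or select ... for update on it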

Then there are a few improvements to multitenant, such as the possibility to save the state of a pluggable database so that it is automatically opened when the CDB starts up. We already addressed that in 12.1.0.1 with our Database Management Kit. An undocumented parameter, _multiple_char_set_cdb, lets us imagine that we will be able to have different character sets for the PDBs, probably in the future. Currently it is set to false.
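The save state feature is as simple as this (the PDB name is made up):

alter pluggable database pdb1 open;
alter pluggable database pdb1 save state;  -- pdb1 now opens automatically at the next CDB startup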

And once again, as beta testing partners, we pushed for a fix for what we consider a serious availability bug. The behaviour in the 12.1.0.1 beta was even worse regarding CDB availability, and I finally had a bug opened (Bug 19001390 - PDB SYSTEM TABLESPACE MEDIA FAILURE CAUSES THE WHOLE CDB TO CRASH) that should be fixed in 12.1.

Speaking of fixes, some restrictions are now gone: we can finally use ILM with multitenant, and we can have supplemental logging during an online partition move. And you can have Flashback Data Archive in multitenant as well.

All of that is good news, but remember: even if it's only the fourth digit that increases in the version number, it's a brand new release with a lot of new features. So, when do you plan to upgrade? 11g is supported until January 2015, and extended support is free until January 2016 provided you are on the terminal patchset (11.2.0.4). So either you don't want to be on the latest release, in which case you will have to upgrade to 11.2.0.4 before the end of the year and wait for 12.2, maybe in 2016; or you want those new features and will probably go to 12.1.0.2 in 2015.

Talking about upgrades, there's some bad news. We thought that multitenancy could accelerate upgrades: because the data dictionary is shared, you just plug a PDB into a CDB of a newer version and it's upgraded. We show that in our 12c new features workshop by applying a PSU. But we have tested the upgrade to 12.1.0.2 in the same way, and it's not that simple. Plugging is quick when you have only new patches that did not change the dictionary. It's still true for a PSU when the dictionary changes are limited to the root container. But when you upgrade to 12.1.0.2, you have to synchronize all the PDB dictionaries (all that magic behind object links and metadata links), and that takes time: the same time as upgrading a non-CDB. Conclusion: you don't save time by doing it with plug/unplug.
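For reference, a minimal sketch of the plug/unplug path (names and paths are made up; the exact upgrade step depends on your environment):

-- on the 12.1.0.1 source CDB
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';
-- on the 12.1.0.2 target CDB
create pluggable database pdb1 using '/tmp/pdb1.xml' nocopy tempfile reuse;
-- the PDB dictionary still has to be upgraded (catctl.pl / catupgrd run against the PDB),
-- and that synchronization is what takes about as long as a non-CDB upgrade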

But I have good news as well: I've tested a one-minute-downtime migration from 12.1.0.1 to 12.1.0.2. Dbvisit replicate, the affordable replication solution, supports multitenant in its latest version, both as source and target. If your application is compatible (which is easy to check with the 30-day trial), then it's a good way to migrate without stress and with minimal downtime. It's available for Standard Edition as well, but currently the 12.1.0.2 download can install only an Enterprise Edition.

Still here

Michael Armstrong-Smith - Tue, 2014-07-22 07:50
Hi everyone
I am still here. Just wanted to let you know that I am still in the business of working with Discoverer, even though Oracle recently announced that it will be de-supported. If you need help, just get in touch.

Also, you may not be aware but we have updated our Discoverer Handbook with the latest 11g version. You can find it on Amazon

Oracle Big Data SQL -Your All-Access Pass to Big Data

What if a single Big Data breakthrough could enable you to simply reuse the skills you already have to access all your data in Hadoop, allowing you to leverage existing applications to query Hadoop,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast - Oracle Database 12c Recovery Manager (RMAN)

Oracle is offering a comprehensive set of Database High Availability capabilities that seamlessly work together to help reduce both planned and unplanned downtime. Oracle Maximum Availability...

We share our skills to maximize your revenue!
Categories: DBA Blogs

University of California’s $220 million payroll project reboot

Michael Feldstein - Mon, 2014-07-21 16:11

Chris Newfield has an excellent post at Remaking the University about the University of California’s budget situation and how it relates to the recent Moody’s negative outlook on higher education finances. The whole article is worth reading, but one section jumped off the page for me [emphasis added].

The sadder example of ongoing debt is the request for “external financing for the UCPath project.” UC Path was UCOP’s flagship solution to UC inefficiencies that were allegedly wasting taxpayers’ money–in other words, new enterprise software for the systemwide consolidation of payroll and human resources functions. This is boring, important back office stuff, hardly good material for a political campaign to show the state “UC means business,” but that’s what it became. Rather than funding each campus’s decades-old effort to upgrade its systems on its own, UCOP sought centralization, which predictably introduced new levels of cost, complexity, and inefficiency, since centralization is often not actually efficient.

I had heard nothing good about UC Path from people trying to implement it on campuses, and have tried to ignore it, but this week it has resurfaced as a problem at the Regental level. The project timeline has grown from 48 to 72 months, and its costs are said to be $220 million (it had spent $131 million by May 2014). Worse, the repayment schedule has mushroomed from seven to twenty years. Annual payments are to be something like $25 million. Campuses are to be taxed to pay for 2015-era systems until 2035, which is like taking out a twenty year mortgage to pay for your refrigerator, except that your fridge will be working better in 2035 than next year’s PeopleSoft product. Since the concurrent budget document notes efficiency savings of $30 million per year (top of page 4), UCOP may be spending $220 million to save a net $5 million per year over a couple of decades–and going into debt to do it. In the end, an efficiency measure has turned into a literal liability.

What the hell – a $220 million project to save money? How did this project get into this much trouble?

The UCPath project concept originated in 2009 with the project announcement coming in late 2011. The goal is to replace the Payroll Personnel System (PPS) that runs separately for each of the 11 UC locations with Oracle’s PeopleSoft payroll and HR systems. PPS is over 30 years old, and there are major risk issues with such an old system as well as a host of inefficient processes. The original project plans were based on a $170 million budget[1] with the first wave of go-live for the Office of the President and 3 campuses scheduled for early 2013. All campuses would be live on the new system by late 2014.[2]

In a presentation to the Board of Regents in January 2012:

Over the same period, cost reductions are expected to be approximately $750 million from technology efficiency gains, process standardization and consolidation of transactional activities into a UC-wide shared services center. Overall, the project has a net present value of approximately $230 million (at a nine percent discount rate) with breakeven in year 5.

Subsequent promises were made in March of 2012:

We think this project is likely to pay for itself within five years, and UC could be accruing over $100 million in annual savings by the eighth year,” said Peter Taylor, UC’s chief financial officer. “We also expect to deliver HR and payroll services with increased efficiency, accuracy and quality.”

At the Board of Regents’ meeting last week, the project team gave the first update to the regents since January 2012 (itself a troubling sign). See this Sharestream video from 2:56:10 – 3:22:40.

By Fall 2013 the project was in trouble, and UC brought in new project leadership: Mark Cianca as Deputy CIO and Sabu Varghese as Program Director. Their first act was to do a health check on the project, and the results were not pretty (as described in last week’s Board of Regents’ meeting).

  • The project team and implementation partner (Oracle) had treated the project as a software replacement rather than a fundamental business transformation initiative.
  • The individual campuses had not been consulted on changes in business processes, and in fact they had not even been asked to sign off on future state business processes that each campus would have to run to stay in operation.
  • The new project team had to go through more than 100 future state processes with campuses and get agreement on how to proceed.

The result, as described by UC President Janet Napolitano at last week’s meeting, was the team having to “reboot the entire project”.

Based on the reboot, the current plan is $220 million with first wave complete by February 2016 and all campuses live by mid 2017. That’s $50 million over budget and 24 months over schedule.

Deployment Schedule Jul 2014

But the planning is not complete. They are working up their “final” replan of budget and timeline, which they will present in January 2015.

Topics for Jan 2015

How solid is the current estimate? The implementation schedule is listed as the highest risk, even with the delays.

Major Risks Jul 2014

The project financing has changed so much that UC is now facing the need to use external financing over a much longer term, as described in the material for last week’s board meeting.

Therefore, this item seeks approval to refinance the UCPath loan out of CapEquip and into external financing to achieve the financing customization required. As indicated above, the original repayment plan based on the $220.5 million budget was expected to have been repaid with annual debt service of $25 million. This would have resulted in a 12-year loan term once principal was to be repaid. In January 2015, UCPath project leadership plans to present a revised project timeline, a revised project budget and a revised estimated loan repayment schedule. Project leadership will work with the campus budget officers (and campus budget department staff) to develop: (1) an appropriate campus cost allocation strategy; (2) an estimated repayment schedule that will reflect commencement of principal repayments in conjunction with the final campus deployment (estimated to be early 2017); and (3) an estimated 15-20 year loan repayment period.

Notes
  • The new project team seems quite credible, and for the most part they addressed the right points during the briefing. Kudos to UC for making this change in leadership.
  • This is a major project turnaround (or reboot, in Napolitano’s words), but I’m not sure that UC had communicated the significance of the project changes to system campuses (and certainly not to the media).
  • I would view the current plan of $220 million and Q1 2017 full deployment as a best-case scenario: the team told the regents that they were going to update the plan, and ERP projects almost never come in earlier than planned.
  • The actual amount is much higher than $220 million, based on this footnote: “The $10 million in tenant improvements approved for the UCPath Center Riverside site as well as the $17.4 million purchase of the facility (UCPath is currently projected to use no more than 50 percent of the building) are not included in the figures above.”
  • How do you go 2.5 years between updates from what is now a quarter billion dollar project?
  • What about the current estimate of benefits – is it $30 million per year as Chris described or closer to $100 million per year? One big concern I have is that the information on project benefits was not updated, presented to the regents, or asked about by the regents. While I question the $25 million financing and $30 million benefits numbers, I think Chris got it exactly right by noting how UC administration is failing to ask hard questions:

Moving forward, I’m afraid that officials are going to have to get much better at admitting mistakes like UCPath, and then actually undoing them. I couldn’t listen to the recording of the UCPath conversation, but Cloudminder made it sound like a lot of restrained finger-pointing with no solution in sight. Did anyone say, “well, this seemed like a good idea at the time, but it’s not. Let’s just cancel it, figure out where we went wrong, and come up with something better”?

It is possible that continuing with the rebooted project is the right answer, but UC is not even asking the question. Failing to ask whether 15-20 year financing of a new ERP makes sense seems like a major oversight. Won’t this lock UC into an Oracle system that is already antiquated for another two decades or more? It seems stunning to me that UC is planning to commit to $220 million of external financing without asking some basic questions.

  1. One regent last week stated the original request was actually $156 million.
  2. All public projects should fear the Wayback Machine for checking old web pages.

The post University of California’s $220 million payroll project reboot appeared first on e-Literate.

Oracle Data as a Service for Business - Launch Webcast

Running a data-driven enterprise is key to gaining competitive advantage, but many businesses still struggle with a myriad of point solutions that are siloed, varied in quality, and complex to...

We share our skills to maximize your revenue!
Categories: DBA Blogs