Feed aggregator

Four Options For Oracle DBA Tuning Training

Oracle DBAs are constantly solving problems... mysteries. That requires a constant knowledge increase. I received more personal emails from my Oracle DBA Training Options Are Changing posting than ever before. Many of these were from frustrated, angry, and "stuck" DBAs. But in some way, almost all asked the question, "What should I do?"

In response to the "What should I do?" question, I came up with four types of Oracle DBA performance tuning training that are available today. Here they are:

Instructor Led Training (ILT) 
Instructor Led Training (ILT) is the best because you have a personal connection with the teacher. I can't speak for other companies, but I strive to connect with every student and every student knows they can personally email or call me...even years after the training. In fact, I practically beg them to do what we do in class on their production systems and send me the results so I can continue helping them. To me being a great teacher is more than being a great communicator. It's about connection. ILT makes connecting with students easy.

Content Aggregators
Content Aggregators are the folks who pull together free content from various sources, then organize and display it. Oh yeah... and they profit from it. Sometimes the content value is high, sometimes not. I tend to think of content aggregators like patent trolls, yet many times they can be a great resource. The problem is you're not dealing with the creator of the content - and it's the creator who actually knows the subject matter. You can sometimes contact them...as I encourage my students and readers to do.

Content Creators
Content Creators are the folks who create content based on their experiences. We receive that content through their blogs, videos, conference presentations and sometimes through their training. I am a content creator, but with an original, almost child-like curiosity, performance research twist. Content creators rarely profit directly from their posted content, but somehow try to transform it into a revenue stream. I can personally attest, it can be a risky financial strategy...but it's personally very rewarding. Since I love doing research, it's easy and enjoyable to post my findings so others may benefit.

Online Training (OLT)
Online Training (OLT) is something I have put off for years. The online Oracle training I have seen is mostly complete and total crap. The content is usually technically low and mechanical. The production quality is something a six year old can do on their PC. The teaching quality is ridiculous and the experience puts you to sleep. I do not ever want to be associated with that kind of crowd.

I was determined to do something different. It had to be the highest quality. I have invested thousands of dollars in time, labor, and equipment to make online video training work. Based on the encouraging feedback I receive, it's working!

Craig teaching in an OraPub Online Institute Seminar

This totally caught me by surprise. I have discovered that I can do things through special effects and a highly organized delivery that are impossible to do in a classroom. (Just watch my seminar introductions on YouTube and you'll quickly see what I mean.) This makes the content rich and highly compressed. One hour of OraPub Online Institute training is easily equivalent to two to four hours of classroom training. Easily. I have also striven to keep the price super low, the production at a professional level, and to ensure the video can be streamed anywhere in the world and on any device. Online training is an option, but you have to search for it.

Summary
So there you have it. Because of economics and the devaluation of DBAs as human beings coupled with new technologies, the Oracle DBA still has at least four main sources of training and knowledge expansion. Don't give up learning!

Some of you reading may be surprised that I'm writing about this topic because it will hurt my traditional instructor led training (public or on-site) classes. I don't think so. If people can attend my classes in person, they will. Otherwise, I hope they will register for an OraPub Online Institute seminar. Or, at least subscribe to my blog (see upper left of page).

All the best in your quest to do great work,

Craig.
Categories: DBA Blogs

Silence

Greg Pavlik - Sat, 2014-07-26 11:26
Silence. Sometimes sought after, but in reality almost certainly feared - the absence of not just sound but voice. Silence is often associated with divine encounter - the neptic tradition of the Philokalia comes to mind - but also and perhaps more accurately with abandonment, divine or otherwise. I recently read Shusaku Endo's Silence, a remarkable work, dwelling on the theme of abandonment in the context of the extirpation of Kakure Kirishitan communities in Tokugawa Japan. Many resilient families survived and eventually came out of hiding in the liberalization of the mid-19th century, but the persecutions were terrible. Their story is deeply moving (sufficiently so that over time I find myself drawn to devotion to the image of Maria-Kannon). Endo's novel was not without controversy but remains one of the great literary accomplishments of the 20th century.

In fact, the reason for this post is a kind of double entendre on silence: the relative silence in literate western circles with respect to Japanese literature of the past century. Over the last month, I realized that virtually no one I had spoken with had read a single Japanese novel. Yet, like Russia of the 19th century, Japan produced a concentration of great writers and great novelists in the last century that is set apart: the forces of profound national change (and defeat) created the crucible of great art. That art carries the distinctive aesthetic sense of Japan - a kind of openness of form - but is necessarily the carrier of universal, humanistic themes.

Endo is a writer of the post-war period - the so-called third generation - and in my view the last of the wave of great Japanese literature. Read him. But don't stop - perhaps don't start - there. The early 20th century works of Natsume Soseki are products of the Meiji period. In my view, Soseki is not only a father of Japanese literature but one of the greatest figures of world literature taken as a whole - I Am a Cat remains one of my very favorite novels. Two troubling post-war novels by Yukio Mishima merit attention - Confessions of a Mask and The Sailor Who Fell From Grace with the Sea - both of which I would characterize broadly as existential masterpieces. The topic of identity in the face of westernization is also a moving theme in Osamu Dazai's No Longer Human. I hardly mean this as a complete survey - something in any case I am not qualified to provide - just a pointer toward something broader and important.

My encounter with contemporary Japanese literature - albeit limited - has been less impactful (I want to like Haruki Murakami in the same way I want to like Victor Pelevin, but both make me think of the distorted echo of something far better). And again like Russia, it is difficult to know what to make of Japan today - where its future will lead, whether it will see a cultural resurgence or decline. It is certain that its roots are deep and I hope she finds a way to draw on them and to flourish.


SYNC 2014 !

Bas Klaassen - Thu, 2014-07-24 13:15
From Proact we are organising the knowledge platform SYNC 2014 on 17 September at the Rotterdam Cruise Terminal. All of today's IT infrastructure developments in one day: • An interactive programme led by day chairman Lars Sørensen, known among other things from BNR • A keynote by Marco Gianotten of Giarte, the Dutch "Gartner" in the field of Outsourcing/Managed Services • Huisman Equipment on the ...
Categories: APPS Blogs

Unlimited Session Timeout

Jim Marion - Thu, 2014-07-24 11:21

There are a lot of security admins out there that are going to hate me for this post. There are a lot of system administrators, developers, and users, however, that will LOVE me for this post. The code I'm about to share with you will keep the logged in PeopleSoft user's session active as long as the user has a browser window open that points to a PeopleSoft instance. Why would you do this? I can think of two reasons:

  • Your users have several PeopleSoft browser windows open. If one of them times out because of inactivity at the browser window level, then it will kill the session for ALL open windows. That just seems wrong.
  • Your users have long running tasks, such as completing performance reviews, that may require more time to complete than is available at a single sitting. For example, imagine you are preparing a performance review and you have to leave for a meeting. You don't have enough information in the transaction to save, but you can't be late for the meeting either. You know if you leave, your session will time out while you are gone and you will lose your work. This also seems wrong.

Before I show you how to keep the logged in user's session active, let's talk about security... Session timeouts exist for two reasons (at least two):

  • Security: no one is home, so lock the door
  • Server side resource cleanup: PeopleSoft components require web server state. Each logged in user session (and browser window) consumes resources on the web server. If the user is dormant for a specific period of time, reclaim those resources by killing the user's session.

We can "lock the door" without timing out the server side session with strong policies on the workstation: password protected screen savers, etc.

So here is how it works. Add the following JavaScript to the end of the HTML definition PT_COMMON (or PT_COPYURL if using an older version of PeopleTools; or, even better, if you are on PeopleTools 8.54+, use component and/or role-based branding to activate this script). Next, turn down your web profile's timeout warning and timeout to something like 3 and 5 minutes or 5 and 10 minutes. On the timeout warning interval, the user's browser will place an Ajax request to keep the session active. When the user closes all browser windows, the reset won't happen, so the user's server-side session state will terminate.

What values should you use for the warning and timeout? As low as possible, but not so low you create too much network chatter. If the browser makes an ajax request on the warning interval and a user has 10 windows open, then that means the user will trigger up to 10 Ajax requests within the warning interval window. Now multiply that by the number of logged in users at any given moment. See how this could add up?

Here is the JavaScript:

(function (root) {
    // xhr adapted from http://toddmotto.com/writing-a-standalone-ajax-xhr-javascript-micro-library/
    var xhr = function (type, url, data) {
        var methods = {
            success: function () {
            },
            error: function () {
            }
        };

        var parse = function (req) {
            var result;
            try {
                result = JSON.parse(req.responseText);
            } catch (e) {
                result = req.responseText;
            }
            return [result, req];
        };

        var XHR = root.XMLHttpRequest || ActiveXObject;
        var request = new XHR('MSXML2.XMLHTTP.3.0');
        request.open(type, url, true);
        request.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
        request.onreadystatechange = function () {
            if (request.readyState === 4) {
                if (request.status === 200) {
                    methods.success.apply(methods, parse(request));
                } else {
                    methods.error.apply(methods, parse(request));
                }
            }
        };

        request.send(data);
        return {
            success: function (callback) {
                methods.success = callback;
                return methods;
            },
            error: function (callback) {
                methods.error = callback;
                return methods;
            }
        };
    }; // END xhr

    var timeoutIntervalId;
    var resetUrl;

    /* replace warning message timeout with Ajax call
     *
     * clear old timeout after 30 seconds
     * macs don't set timeout until 1000 ms
     */
    root.setTimeout(function () {
        /* some pages don't have timeouts defined */
        if (typeof (timeOutURL) !== "undefined") {
            if (timeOutURL.length > 0) {
                resetUrl = timeOutURL.replace(/expire$/, "resettimeout");
                if (totalTimeoutMilliseconds !== null) {
                    root.clearTimeout(timeoutWarningID);
                    root.clearTimeout(timeoutID);

                    timeoutIntervalId =
                        root.setInterval(resetTimeout /* defined below */,
                            root.warningTimeoutMilliseconds);
                }
            }
        }
    }, 30000);

    var resetTimeout = function () {
        xhr("GET", resetUrl)
            .success(function (msg) {
                /* do nothing */
            })
            .error(function (xhr, errMsg, exception) {
                alert("failed to reset timeout");
                /* error; fallback to delivered method */
                (root.setupTimeout || root.setTimeout2)();
            });
    };
}(window));

A special "shout out" to Todd Motto for his Standalone Ajax/XHR JavaScript micro-library which is embedded (albeit modified) in the JavaScript above.

AWR Warehouse

Asif Momen - Wed, 2014-07-23 21:10
AWR Warehouse is a central repository configured for long-term AWR data retention. It stores AWR snapshots from multiple database sources. Increasing AWR retention in the production systems would typically increase the overhead and cost of mission-critical databases. Hence, offloading the AWR snapshots to a central repository is a better idea. Unlike the default AWR retention period of 8 days, the AWR Warehouse default retention period is "forever". However, it is configurable for weeks, months, or years.
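
For contrast with the warehouse's "forever" default, here is a minimal sketch of how local AWR retention is typically inspected and adjusted with DBMS_WORKLOAD_REPOSITORY; the 35-day / 30-minute values below are illustrative only, not a recommendation.

-- Current local settings (snapshot interval and retention are stored as intervals)
SELECT snap_interval, retention FROM dba_hist_wr_control;

-- Illustrative only: keep 35 days of snapshots, taken every 30 minutes
-- (both arguments are expressed in minutes)
BEGIN
   DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
      retention => 35 * 24 * 60,
      interval  => 30);
END;
/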

For more information on AWR Warehouse click on the following link for a video tutorial. 

http://www.youtube.com/watch?v=StydMitHtuI&feature=youtu.be

My Oracle Support Community Enhancement Brings New Features

Joshua Solomin - Wed, 2014-07-23 18:33

Be sure to visit our My Oracle Support Community Information Center to see what is new. Choose from the tabs to watch the How to Video Series. You can also enroll for a live webcast on Wednesday, August 6 at 9am PST.

One change: you can now read blogs in My Oracle Support Community. The new Support Blogs space provides access to Support-related blogs. The My Oracle Support Blog provides posts on the portal and tools that span all product areas.

Support Blogs also allow you to stay in touch with the latest product-specific news, tools, and troubleshooting tips in a growing list of product blogs maintained by Support engineers. Check back frequently to read new posts and discover new blogs.

Spark: A Discussion

Greg Pavlik - Wed, 2014-07-23 09:36
A great presentation, worth watching in its entirety.

With apologies to my Hadoop friends but this is good for you too.

The Customer Experience

Steve Karam - Wed, 2014-07-23 08:00
The Apple Experience

I’m going to kick this post off by taking sides in a long-standing feud.

Apple is amazing.

There. Edgy, right? Okay, so maybe you don’t agree with me, but you have to admit that a whole lot of people do. Why is that?

NOT part of the Customer Experience. Image from AppleFanSite.com

Sure, there's the snarky few who believe Apple products are successful due to an army of hipsters with thousands in disposable income, growing thick beards and wearing skinny jeans with pipes in mouth and books by Jack Kerouac in hand, sipping lattes while furiously banging away on the chiclet keyboard of their Macbook Pro with the blunt corner of an iPad Air that sports a case made of iPhones. I have to admit, it does make for an amusing thought. And 15 minutes at a Starbucks in SoHo might make you feel like that's absolutely the case. But it's not.

If you browse message boards or other sites that compare PCs and Apple products, you’ll frequently see people wondering why someone would buy a $2,000 Macbook when you can have an amazing Windows 8.1 laptop with better specs for a little over half the price. Or why buy an iPad when you can buy a Samsung tablet running the latest Android which provides more freedom to tinker. Or why even mess with Apple products at all when they’re not compatible with Fragfest 5000 FPS of Duty, or whatever games those darn kids are playing these days.

Part of the Customer Experience. Image provided by cnet.com

The answer is, of course, customer experience. Apple has it. When you watch a visually stunning Apple commercial, complete with crying grandpas Facetiming with their newborn great-grandson and classrooms of kids typing on Macbook Airs, you know what to expect. When you make the decision to buy said Macbook Air, you know that you will head to the Apple Store, usually in the posh mall in your town, and that it will be packed to the gills with people buzzing around looking at cases and Beats headphones and 27″ iMacs. You know that whatever you buy will come in a sleek white box, and will be placed into a thick, durable bag with two drawstring cords that you can wear like a backpack.

When you get it home and open the box, it’s like looking at a Tesla Model S. Your new laptop, situated inside a silky plastic bed and covered in durable plastic with little tabs to peel it off. The sleek black cardboard wrapped around a cable wound so perfectly that there’s not a single millimeter of space between the coils, nor a plug out of place. The laptop itself will be unibody, no gaps for fans or jiggly CD-ROM trays or harsh textures.

All of which is to say, Apple provides an amazing customer experience. Are their products expensive, sometimes ridiculously so? Of course. But people aren't just buying into the product, they're buying into the "Apple life." And why not? I'd rather pay for experiences than products any day. I may be able to get another laptop with better specs than my Macbook Pro Retina, but there will always be something missing. Not the same Customer Experience. Maybe the screen resolution isn't quite so good, maybe the battery doesn't last as long, or maybe it's something as simple as the power cord coming wrapped in wire bag ties with a brick the size of my head stuffed unceremoniously into a plastic bag. The experience just isn't there, and I feel like I've bought something that's not as magnificent as the money I put into it, features and specs be damned.

Customer experience isn’t just a buzz phrase, and it doesn’t just apply to how you deal with angry customers or how you talk to them while making a sale. It also doesn’t mean giving your customer everything they want. Customer experience is the journey from start to finish. It’s providing a predictable, customer-centric, and enjoyable experience for a customer that is entrusting their hard-earned cash in your product. And it applies to every business, not just retail computer sellers and coffee shops. What’s more, it applies to anyone in a service-oriented job.

Customer Experience for IT Professionals

In a previous post I mentioned how important it is to know your client. Even if your position is Sub-DBA In Charge of Dropping Indexes That Start With The Letter Z, you still have a customer (Sub-DBA In Charge Of Dropping Indexes That Start With The Letters N-Z, of course). Not just your boss, but the business that is counting on you to do your job in order to make a profit. And you may provide an exceptional level of service. Perhaps you spend countless hours whittling away at explain plans until a five page Cognos query is as pure as the driven snow and runs in the millisecond range. But it’s not just what you do, but how you do it that is important.

I want you to try something. And if you already do this, good on you. Next time you get a phone call request from someone at your work, or have a phone meeting, or someone sends you a chat asking you to do something, I want you to send a brief email back (we call this an “ack” in technical terms) that acknowledges their request, re-lists what they need in your own words (and preferably with bullets), and lists any additional requirements or caveats. Also let them know how long it will take. Make sure you don’t underestimate, it’s better to quote too much time and get it to them early. Once you’ve finished the work, write a recap email. “As we discussed,” you might say, “I have created the five hundred gazillion tables you need and renamed the table PBRDNY13 to PBRDNY13X.” Adding, of course, “Please let me know if you have any other requests.”

If the task you did involves a new connection, provide them the details (maybe even in the form of a TNSNAMES). If there are unanswered questions, spell them out. If you have an idea that could make the whole process easier next time, run it by them. Provide that level of experience on at least one task you accomplish for your customer if you do not already, and let me know if it had any impact that you can tell. Now do it consistently.

From what I've seen, this is what separates the "workers" from the "rockstars." It's not the ability to fix problems faster than a speeding bullet (though that helps, as a service that sells itself), but the ability to properly communicate the process and give people a good expectation that they can count on.

There’s a lot more to it than that, I know. And some of you may say that you lack the time to have this level of care for every request that comes your way. Perhaps you’re right, or perhaps you’re suffering from IT Stockholm Syndrome. Either way, just give it a shot. I bet it will make a difference, at least most of the time.

Conclusion

Recently, I became the Director of Customer Education and Experience at Delphix, a job that I am deeply honored to have. Delphix is absolutely a product that arouses within customers an eager want: it solves complex business problems, has an amazing delivery infrastructure in the Professional Services team, and provides top-notch support thereafter. A solid recipe for Customer Experience if there ever was one. But it's not just about the taste of the meal, it's about presentation as well. And so it is my goal to continuously build an industrialized, scalable, repeatable, and enjoyable experience for those who decide to invest their dollar in what I believe to be an amazing product. Simply put, I want to impart on them the same enthusiasm and confidence in our product that I have.

I hope you have the chance to do the same for your product, whatever it may be.

The post The Customer Experience appeared first on Oracle Alchemist.

Teradata bought Hadapt and Revelytix

Curt Monash - Wed, 2014-07-23 03:29

My client Teradata bought my (former) clients Revelytix and Hadapt.* Obviously, I’m in confidentiality up to my eyeballs. That said — Teradata truly doesn’t know what it’s going to do with those acquisitions yet. Indeed, the acquisitions are too new for Teradata to have fully reviewed the code and so on, let alone made strategic decisions informed by that review. So while this is just a guess, I conjecture Teradata won’t say anything concrete until at least September, although I do expect some kind of stated direction in time for its October user conference.

*I love my business, but it does have one distressing aspect, namely the combination of subscription pricing and customer churn. When your customers transform really quickly, or even go out of existence, so sometimes does their reliance on you.

I’ve written extensively about Hadapt, but to review:

  • The HadoopDB project was started by Dan Abadi and two grad students.
  • HadoopDB tied a bunch of PostgreSQL instances together with Hadoop MapReduce. Lab benchmarks suggested it was more performant than the coyly named DBx (where x=2), but not necessarily competitive with top analytic RDBMS.
  • Hadapt was formed to commercialize HadoopDB.
  • After some fits and starts, Hadapt was a Cambridge-based company. Former Vertica CEO Chris Lynch invested even before he was a VC, and became an active chairman. Not coincidentally, Hadapt had a bunch of Vertica folks.
  • Hadapt decided to stick with row-based PostgreSQL, Dan Abadi’s previous columnar enthusiasm notwithstanding. Not coincidentally, Hadapt’s performance never blew anyone away.
  • Especially after the announcement of Cloudera Impala, Hadapt’s SQL-on-Hadoop positioning didn’t work out. Indeed, Hadapt laid off most or all of its sales and marketing folks. Hadapt pivoted to emphasize its schema-on-need story.
  • Chris Lynch, who generally seems to think that IT vendors are created to be sold, shopped Hadapt aggressively.

As for what Teradata should do with Hadapt:

  • My initial thought for Hadapt was to just double down, pushing the technology forward, presumably including a columnar option such as the one Citus Data developed.
  • But upon reflection, if it made technical sense to merge the Aster and Hadapt products, that would be better yet.

I herewith apologize to Aster co-founder and Hadapt skeptic Tasso Argyros (who by the way has moved on from Teradata) for even suggesting such heresy. :)

Complicating the story further:

  • Impala lets you treat data in HDFS (Hadoop Distributed File System) as if it were in a SQL DBMS. So does Teradata SQL-H. But Hadapt makes you decide whether the data is in HDFS or the SQL DBMS, and it can’t be in both at once. Edit: Actually, see Dan Abadi’s comments below.
  • Impala and Oracle’s new SQL-H competitor have daemons running on every data node. So does one option in Hadapt. But I don’t think SQL-H does that yet.

I was less involved with Revelytix than with Hadapt (although I'm told I served as the "catalyst" for the original Teradata/Revelytix partnership). That said, Teradata - like Oracle - is always building out a data integration suite to cover a limited universe of data stores. And Revelytix' dataset management technology is a nice piece toward an integrated data catalog.


EID Holidays and things to do

Syed Jaffar - Wed, 2014-07-23 03:07
Looking forward to a much anticipated 9-day EID holiday break to complete the to-do list I have been carrying for a while now. I am determined to complete some of the writing assignments that I have kept pending for a long time. At the same time, I will have to explore the new features of 12.1.0.2 and Exadata, as we might be going with that combination in the coming weeks for a Data Warehouse project.

Will surely blog about my test scenarios and will share the inputs on Oracle 12c new features.

I wish everyone a very happy and prosperous EID in advance.

Oracle Database 12c Release 1 [12.1.0.2] Patch Released

VitalSoftTech - Tue, 2014-07-22 11:46
Oracle 12.1.0.2 Database Patch 17694377 for Linux, Solaris SPARC and x86 has been released on Metalink and edelivery. This software is a complete set and does not require a previous install. A number of key features in this release include the much talked about “Oracle Database In-Memory” database option. This feature will increase the DSS […]
Categories: DBA Blogs

Still here

Michael Armstrong-Smith - Tue, 2014-07-22 08:50
Hi everyone
I am still here. Just wanted to let you know that I am still in the business of working with Discoverer even though Oracle recently announced that it would be de-supported. If you need help just get in touch.

Also, you may not be aware but we have updated our Discoverer Handbook with the latest 11g version. You can find it on Amazon

Kscope14

Galo Balda's Blog - Mon, 2014-07-21 12:11


Photo by Nate Whitehill

It’s been a few weeks since I returned from another awesome Kscope conference and I just realized that I never wrote about it.

For me, it was the first time visiting Seattle and I really liked it, even though I only managed to walk around the downtown area. I had some concerns about how the weather was going to be, but everything worked out very well with clear skies, temperatures in the mid 70's and no rain!

The view from my hotel room.

The Sunday symposiums, the conference sessions and the hands-on labs provided really good content. I particularly enjoyed all the presentations delivered by Jonathan Lewis and Richard Foote.

My friend Amy Caldwell won the contest to have a dinner with ODTUG’s President Monty Latiolais and she was very kind to invite me as her guest. We had a good time talking about the past, present and future of ODTUG and it was enlightening and inspirational to say the least.

My presentation on row pattern matching went well, but the attendance wasn't the best, mostly because I had to present in the last time slot, when people were in party mode and ready to head to the EMP Museum for the big event. Nevertheless, I had attendees like Dominic Delmolino, Kim Berg Hansen, Alex Zaballa, Leighton Nelson, Joel Kallman and Patrick Wolf who had good questions about my topic.

Some comments on Social Media

As I said before, the big event took place at the EMP Museum and I believe everyone had a good time visiting the music and sci-fi exhibits and enjoying the food, drinks and music.

The EMP Museum

Next year, Kscope will take place in Hollywood, Florida. If you're a Developer, DBA or an Architect working with Oracle products, that's where you want to be from June 21 – 25. I suggest you register and book your hotel room right away because it's going to sell out really fast.

Hope to see you there!


Filed under: Kscope Tagged: Kscope
Categories: DBA Blogs

Exactly Wrong

Greg Pavlik - Mon, 2014-07-21 09:58
I normally avoid anything that smacks of a competitive discussion on what I consider to be a space for personal reflection. So while I want to disclose the fact that I am not disinterested in the points I am making from a professional standpoint, my main interest is to frame some architecture points that I think are extremely important for the maturation and success of the Hadoop ecosystem.

A few weeks back, Mike Olson of Cloudera spoke at Spark Summit on how Spark relates to the future of Hadoop. The presentation can be found here:

http://youtu.be/8kcdwnbHnJo

In particular I want to draw attention to the statement made at 1:45 in the presentation that describes Spark as the "natural successor to MapReduce" - it becomes clear very quickly that what Olson is talking about is batch processing. This is fascinating as everyone I've talked to immediately points out one obvious thing: Spark isn't a general purpose batch processing framework - that is not its design center. The whole point of Spark is to enable fast data access and interactivity.
 
The guys who clearly "get" Spark - unsurprisingly - are Databricks. In talking with Ion and company, it's clear they understand the use cases where Spark shines - data scientist driven data exploration and algorithmic development, machine learning, etc. - things that take advantage of the memory mapping capabilities and speed of the framework. And they have offered an online service that allows users to rapidly extract value from cloud friendly datasets, which is smart.

Cloudera's idea of pushing SQL, Pig and other frameworks onto Spark is actually a step backwards - it is a proposal to recreate all the problems of MapReduce 1: it fails to understand the power of refactoring resource management away from the compute model. Spark would have to reinvent and mature models for multi-tenancy, resource management, scheduling, security, scaleout, etc. that are frankly already there today for Hadoop 2 with YARN.

The announcement of an intent to lead an implementation of Hive on Spark got some attention. This was something that I looked at carefully with my colleagues almost 2 years ago, so I'd like to make a few observations on why we didn't take this path then.

The first was maturity, in terms of the Spark implementation, of Hive itself, and Shark. Candidly, we knew Hive itself worked at scale but needed significant enhancement and refactoring for both new features on the SQL front and to work at interactive speeds. And we wanted to do all this in a way that did not compromise Hive's ability to work at scale - for real big data problems. So we focused on the mainstream of Hive and the development of a Dryad like runtime for optimal execution of operators in physical plans for SQL in a way that meshed deeply with YARN. That model took the learnings of the database community and scale out big data solutions and built on them "from the inside out", so to speak.

Anyone who has been tracking Hadoop for, oh, the last 2-3 years will understand intuitively that the right architectural approach needs to be based on YARN. What I mean is that the query execution must - at the query task level - be composed of tasks that are administered directly by YARN. This is absolutely critical for multi-workload systems (this is one reason why a bolt-on MPP solution is a mistake for Hadoop - it is at best a tactical model while the system evolves). This is why we are working with the community on Tez, a low level framework for enabling YARN native domain specific execution engines. For Hive-on-Tez, Hive is the engine and Tez provides the YARN level integration for resource negotiation and coordination for DAG execution: a DAG of native operators analogous to the execution model found in the MPP world (when people compare Tez and Spark, they are fundamentally confused - Spark could be run on Tez, for example, for a much deeper integration with Hadoop 2). This model allows the full range of use cases from interactive to massive batch to be administered in a deeply integrated, YARN native way.

Spark will undoubtedly mature into a great tool for what it is designed for: in memory, interactive scenarios - generally script driven - and likely grow to subsume new use cases we aren't anticipating today. It is, however, exactly the wrong choice for scale out big data batch processing in anything like the near term; worse still, returning to a monolithic general purpose compute framework for all Hadoop models would be a huge regression and is a disastrously bad idea.

Dependent Rational Animals

Greg Pavlik - Sun, 2014-07-20 17:32
I wanted to briefly comment on Alasdair MacIntyre's lectures collected as "Dependent Rational Animals", but let me precede that with a couple of comments for context: first, as I alluded in my last post referencing Levinas, it is my view that ethics demands a certain primacy in any healthy conception of life and society; second, in the area of ethics, MacIntyre's After Virtue is the book that has had perhaps the biggest impact on my own thinking.

One of the criticisms of MacIntyre is that his critique of rational ethics is, on the one hand, devastating; on the other hand, his positive case for working out a defense of his own position - a revivification of social ethics in the Aristotelian-Thomist tradition(s) - was somewhat pro forma. I think this is legitimate in so far as it relates to After Virtue itself (I believe I have read the latest edition - 3 - most recently), though I am not enough of a MacIntyre expert to offer a defensible critique of his work overall.

I do, however, want to draw attention to Dependent Rational Animals specifically in this light. Here MacIntyre begins with the position of the human as animal - a kind of naturalist starting point for developing another pass at the importance of the tradition of the virtues. What is most remarkable is that in the process of exploring the implications of our "animality" MacIntyre manages to subvert yet another trajectory of twentieth century philosophy, this time as it relates to the primacy of linguistics. The net effect is to restore philosophical discourse back toward the reality of the human condition in the context of the broader evolutionary context of life on earth without - and this I must say is the most amazing part of this book - resorting to fables-masked-as-science (evolutionary psychology).

Putting my DB / Apex install through the wringer

Gary Myers - Sun, 2014-07-20 04:53
I was mucking around trying to get APEX on one of my PCs to be visible on the internet.

This was just a proof-of-concept, not something I intend to actually leave running.

EPG on Port 8080

I do other testing on the home network too, so I already had my router configured to forward port 80 to another environment. That meant the router's web admin had been shifted to port 8080, and it wouldn't let me use that. Yes, I should find an open source firmware, but OpenWRT says it is unsupported and will "brick the router", and I can't see anything for Tomato.

So I figured I'd just use any incoming router port and forward it to the PC's 8080. I chose 6000. This was not a good choice. Looks like Chrome comes with a list of ports which it thinks shouldn't be talking http. 6000 is one of them, since it is supposed to be used for X11 traffic so Chrome told me it was unsafe and refused to co-operate.

Since it is a black-list of ports to avoid, I just happened to be unlucky (or stupid) in picking a bad one. Once I selected another, I got past that issue.

My task list was:

Server
  1. Install Oracle XE 11gR2 (Windows 64-bit)
  2. Configure the EPG for Apex. I ran apex_epg_config.sql, as I had switched straight from the pre-installed Apex 4.0 to 4.2.5 rather than upgrading a version I had actively used.
  3. Unlocked the ANONYMOUS database account
  4. Checked DBMS_XDB.GETHTTPPORT returned 8080 
(At this point, you can test that you have connectivity to Apex on the machine on which XE / Apex is installed, through 127.0.0.1 and localhost. The SQL sketch below outlines steps 2-4.)
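
As an aside, here is a rough SQL*Plus sketch of what steps 2-4 amount to when run as SYS; the unzip directory passed to apex_epg_config.sql is a placeholder and will differ per install:

-- Step 2: wire the EPG to the unzipped APEX 4.2.5 files (path is a placeholder)
@apex_epg_config.sql C:\oraclexe\apex_unzipped

-- Step 3: unlock the account the EPG uses for anonymous access
ALTER USER ANONYMOUS ACCOUNT UNLOCK;

-- Step 4: confirm the XDB HTTP port (DBMS_XDB.SETHTTPPORT(8080) sets it if required)
SELECT DBMS_XDB.GETHTTPPORT FROM dual;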

Local Network
  1. Enabled external access by setting DBMS_XDB.SETLISTENERLOCALACCESS(false); 
(Now you can test connectivity from another machine on the same local network through whatever hostname and/or IP address is assigned to that machine, such as 10.x.x.x or 192.168.x.x)

Remote Network
  • I got a handy Dynamic DNS via NoIP because my home IP can potentially change (though it is very rare). [Yes, there was a whole mess about Microsoft temporarily hijackinging some noip domains, but I'm not using this for anything important.] This was an option in my router setup.
  • The machine that runs XE / Apex should be assigned a specific 192.168.1.nnn IP address by the router (based on it's MAC address). This configuration is specific to the router hardware, so I won't go into my details here. But it is essential for the next step.
  • Configure the port forwarding on the router to push incoming traffic on the router's port 8088 off to port 8080 for the IP address of the machine running XE / Apex. This is also router specific. 
When everything is switched on, I can get to my Apex install from outside the local network based on the hostname set up with noip, and the port configured in the router. I used my phone's 3G internet connection to test this. 

Apex Listener

My next step was to use the Apex Listener rather than the EPG. Oracle have actually retagged the Apex Listener as RDS (Restful Data Services) so that search engines can confuse it with Amazon RDS (Relational Database Service).

This one is relatively easy to set up, especially since I stuck with "standalone" mode for this test. 

A colleague had pointed me to this OBE walkthrough on Apex PDF reports via RDS, so I took a spin through that and it all worked seamlessly.

My next step would be a regular web server/container for RDS rather than standalone. I'm tempted to give Jetty a try as the web server and container for the listener rather than Tomcat etc, but the Jetty documentation seems pretty sketchy. I'm used to the thoroughness of the documentation for Apache (as well as Oracle).


Oracle VirtualBox 4.3.14 Released

VitalSoftTech - Wed, 2014-07-16 15:08
Oracle has just released the Oracle VirtualBox 4.3.14. You can download the binaries here and view the change log here. VirtualBox Articles
Categories: DBA Blogs

PeopleCode Coding Discipline

Jim Marion - Wed, 2014-07-16 12:23

Java, JavaScript, C++, C Sharp, Objective C, Groovy... what do these languages have in common? Yes, curly braces, but besides that... actually, there are a lot of similarities between these languages. Throw Visual Basic, Perl, Python, or any other well-known language into the mix and the number of similarities drops significantly. Setting semantics and syntax aside, a common attribute of all well-known languages is standards and best practices. Some of those best practices (such as coding style) differ by language. For example, bash scripts can either look like, uh... bash scripts or they can look like c-style logic statements. Obviously, bash best practices prefer you make bash code look like bash code. Other standards are personal: do you prefer real tabs or spaces? How many spaces does your tab consume? Do you put curly braces on a new line?

How does all of this fit into PeopleCode? Forget about code formatting preferences. Application Designer has its own code formatting ideas. But there are other best practices that can help you write better code with fewer defects (fewer defects = better code). By following best practices your code will be easier to read, you will be more productive, and your users will be happier because you deliver better solutions faster.

Even though best practices usually result in code that is more efficient to process, that isn't really the point. Computers can interpret just about anything. Compilers and interpreters are really good at eliminating useless words and resolving seemingly incomprehensible logic. I love Martin Fowler's quote, "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." Best practices are really about writing code that humans can easily comprehend. For example, avoid complex logic (including double negatives, or any negative logic, for that matter), keep your method and function code short, etc. If you write some code, leave it for a night, and then come back the next day and either need to read lots of comments to figure it out or spend a few minutes "remembering" what that code does, then the code is probably too complex. The problem with complex code is that it is easily misinterpreted by humans. Another problem with complex code is we actually ignore it when trying to resolve problems. We know it takes time to digest complex code, so we avoid it, preferring to test simple code first. Why waste time trying to understand complex code if it might be functioning properly?

Today's Quest Newsletter contained a link to 10 Bad Coding Practices That Wreck Software Development Projects. These are language agnostic practices that we can easily apply to PeopleSoft development.

If I were to summarize Coding best practices, I think I would do it like this: two.sentenc.es. Now, arguably, short does not equal comprehensible. There are programmers that err on the terse side because it is clever. This is true, often short code is clever. It is also hard to read. Most of us, however, err the other way. E. F. Schumacher said, "Any fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction." Schumacher died in 1977, so this problem is not new.

Computer programming is about communication. As programmers we have two audiences:

  • Computers (which can interpret anything -- even complex stuff)
  • Humans (who have a limited attention span, distractions, and a preference for simplicity)

Here is why I think discipline and best practices are critical for good PeopleCode:

We use PeopleCode to create business rules, but PeopleCode is NOT a business rules language. PeopleCode is a Metadata manipulation language. (Note: this is purely my opinion)

Here is why I believe PeopleCode is for metadata, not business rules: PeopleCode only has Metadata objects: records, fields, SQL, components, menus, etc. These are all metadata. These are the low level API's we use to write business logic. Consider the following PeopleCode:

Local Record &rec = CreateRecord(Record.PSOPRDEFN);
Local Field &descr;

/* set the key field before calling SelectByKey */
&rec.OPRID.Value = "jimsoprid";
&rec.SelectByKey();
&descr = &rec.GetField(Field.OPRDEFNDESC);

&descr.Value = "Jim Marion";

&rec.Update();

This code implements business logic, but does so by manipulating metadata objects. PeopleCode metadata objects are building blocks for business logic. If we were to rewrite this using a business logic language, it would probably look something like this:

Local User &u = GetUser("jimsoprid");

&u.descr = "Jim Marion";
&u.Update();

And this is why discipline and best practices are SO important for PeopleCode developers: We are trying to speak business logic with a metadata vocabulary. We start with a communication deficit. It is like trying to teach advanced weaving using an automobile mechanics vocabulary. The two subjects have different vocabularies. But if you combine the words correctly, you can communicate the same meaning.

Oracle Critical Patch Update Advisory – July 2014

VitalSoftTech - Tue, 2014-07-15 20:45
The July Oracle Critical Patch has been released. This includes patches for the Database Product Suite, Fusion Middleware Product Suite, Exalogic, and Enterprise Manager Suite Critical Patch Updates and Patch Set Updates. It includes 113 new security fixes, 5 for Oracle Database, and a host of other bug fixes. More about the July 2014 Critical Patch Update […]
Categories: DBA Blogs
