Feed aggregator

Harmonizing Learning and Education

Michael Feldstein - Thu, 2015-01-01 16:20

I’m the Whether Man, not the Weather Man, for after all it’s more important to know whether there will be weather than what the weather will be.

The Phantom Tollbooth

Dave Cormier has written a couple of great posts on our failure to take learner motivation seriously and on the difference between improving learning and improving education. In the latter post—a response to Stephen Downes’ comment on the former—Dave writes about the tension between improving an individual’s learning and improving our system of education, essentially positing that we as a society often fail to take learner engagement sufficiently seriously because we become preoccupied with making the educational system accountable, a goal that we would be irresponsible not to take on but that we are also essentially doomed to fail at. (I may be putting words in his mouth on that last bit.) Dave writes,

There’s definitely something wrong if people are leaving their first degree and are not engaged in learning. We certainly need to address it. We totally want to be in the business of helping people do what they want to do. Try it. No really. Just try it. Sit down with a child and help them do what they want to do. And i don’t mean “hey this child has shown up with a random project they are totally passionate about and are asking me a question” I mean “stop them at a random time, say 8:25am, and just start helping them.” You will get blank stares. You’ll get resistance. You’ll get students who will say anything you want if it means you will go away/give them a grade. You will not enjoy this process. They will also not enjoy it.

There is something wrong. The problem is that we have built an education system with checks and balances, trying to make it accountable and progressive (in some cases), but we are building it without knowing why. We have not built an education system that encourages people to be engaged. The system is not designed to do it. It’s designed to get people to a ‘standard of knowing.’ Knowing a thing, in the sense of being able to repeat it back or demonstrate it, has no direct relationship to ‘engagement’. There are certainly some teachers that create spaces where engagement occurs, but they are swimming upstream, constantly battling the dreaded assessment and the need to cover the curriculum. The need to guarantee knowing.

He suggests that we need to redesign our education system around the goal of getting students to start caring and keep caring about learning. And his argument is interesting:

Give me a kid who’s forgotten 95% of the content they were measured in during K-12 and I will match that with almost every adult i know. Give me a kid who cares about learning… well… then i can help them do just about anything.

This is partly a workplace argument. It’s an economic value argument. It’s a public good argument. If Dave is right, then people who care about learning are going to be better at just about any job you throw at them than people who don’t. This is a critical argument in favor of public funding of a liberal arts education, personalized in the old-fashioned sense of having-to-do-with-individual-persons, that much of academia has ceded for no good reason I can think of. The sticky wicket, though, is accountability, which, as Dave points out, is the main reason we have a schism between learning and education in the first place. Too bad we can’t demonstrate, statistically, that people who are passionate about learning are better workers. It’s a shame that we don’t have good data linking being excited about learning, being a high performer in your job, and being a happy, fulfilled and economically well-off person. If we had that, we could largely resolve the tension between improving learning and improving education. We could give a compelling argument that it is in the taxpayers’ interest to build an education system whose purpose, as Dave suggests, is to increase the chances that students will start to care and continue to care about learning. It’s a tragedy that we don’t have proof of that link.

Oh, wait.

The Intuition Behind the Argument

Before I get into the numbers, I think it’s important to articulate the argument in a way that makes intuitive sense even to skeptics. As Dave points out, everybody agrees with the proposition that students should love learning if that proposition is presented to them as a platitude. Where people start to waffle is when we present the proposition to them as a priority, as in, “It is more important for students to learn to develop and nurture a passion for learning than it is for them to learn any particular thing.” And in order to resolve the tension between learning and education, we need to make an even stronger proposition: “A student who develops a passion for learning about subjects that are unrelated to her eventual career will, on balance, be a better employee and more successful professional than the same student who has studied content directly related to her eventual career with relative indifference.” Do you believe this proposition? Here’s a test:

Imagine that you could go back in time and choose an undergraduate major that was exactly tailored to the job that you do today. Would you be better or worse at your job than you are now? Would you be more or less happy?

Obviously, this test won’t work for people whose undergraduate major was the perfect pre-professional major for what they are doing now, which will include most faculty. But it should work for a majority of people, including lots of folks in business and government. In my case, I was a philosophy major, which prepared me well for a career in anything except philosophy. If I could have precognitively created a major for myself in educational technology back in the late 1980s, would I be more successful today? Would I be happier? The answer to both of those questions is almost certainly “no.” In fact, there is a good chance that I would have been less successful and less happy. Why? For one thing, I didn’t care about educational technology back then. I cared about philosophy. I pursued it with a passion. This gave me three things that I still have today. First, I have the intellectual tools of a philosopher. I don’t think I would have held onto the tools of another discipline if I didn’t care about them when I was learning about them. Second, I know what it feels like to pursue work that I am passionate about. I am addicted to that feeling. I am driven to find it in every job, and I am not satisfied until I do. This makes me more selective about the jobs I look at and much, much better at the ones that I take. And finally, though it was a long and winding road, my interest in philosophy led me to my interest in instructional technology in many ways. We tend to have a rather stunted notion of what it means for a subject we study to be “related” to our work. In my philosophy classes, I spent a lot of time thinking about what it means to “know” something, what it means to “learn” something, and what it means for something to be “good.” I got to see how these words are tangled up in logic, language, and culture, and how our notions of them change over time. I learned how to write and how to think, while I was simultaneously studying the first principles of language and cognition. All of these experiences, all of this knowledge, all of these skills have been directly valuable to me in my career as a professional non-philosopher (or a standup philosopher, as Mel Brooks might call me). I wouldn’t have them if I had majored in educational technology. I would have other things, but honestly, there are no deep skills in my work that I wish I had acquired through earlier specialization. Everything that I have needed to learn, I have been able to learn on the job. As Dave wrote, “Give me a kid who cares about learning… well… then i can help them do just about anything.”

If you are one of those people who majored in exactly what you ended up doing as a career, then try reversing the thought experiment. Suppose you could go back in time and major in anything you wanted. Something that you were passionate about, but something different from what you ended up majoring in. Would it have made a difference? Would you have been more or less successful in your current career? Would you have been more or less happy than you are now? For some folks, that pre-professional major was exactly what they needed to be doing. But I bet that, for a lot of folks, it wasn’t.

Survey says…?!

If any of this resonates with you at all, then you really must read the 2014 Gallup Purdue Index Report. You’ll have to register to get it, but trust me, this one is worth it. Gallup is most widely known for their political polling, but more broadly, their business is in collecting data that links people’s attitudes and beliefs to observable behaviors and objective outcomes. How likely is a person who thinks the “country is on the wrong track” to vote for the incumbent? Or to vote at all? Does believing that your manager is incompetent correlate with an increased chance of a serious heart problem? And conversely, does “having fun” at your job correlate with a higher chance of living into your 90s? Does having a “manager that cares about me as a person” mean that I am more likely to be judged a “top performer” at work and reduce the likelihood that I will be out sick? Does having a teacher who “makes me feel excited about learning” correlate with better workplace engagement when I graduate?

Ah. There it is.

To get the full impact of Gallup’s research, you have to follow it back to its roots. The company does significant business in employee satisfaction surveys. As with schooling, managers know that employee engagement matters but often fail to take it seriously. But according to research cited in Gallup’s book Wellbeing: The Five Essential Elements (which I also recommend), employees who could answer “yes” to the question about whether their manager cares about them as a person are “more likely to be top performers, produce higher quality work, are less likely to be sick, less likely to change jobs, and less likely to get injured on the job.” Also, people who love their jobs are more likely both to keep working longer and to live longer. In a study George Gallup conducted in the 1950s,

…men who lived to see 95 did not retire until they were 80 years old on average. Even more remarkable, 93% of these men reported getting a great deal of satisfaction out of the work they did, and 86% reported having fun doing their job.

Conversely, in a 2008 study the company found a link between employee disengagement and depression:

We measured their engagement levels and asked them if they had ever been diagnosed with depression. We excluded those who reported that they had been diagnosed with depression from our analysis. When we contacted the remaining panel members in 2009, we again asked them if they had been diagnosed with depression in the last year. It turned out that 5% of our panel members (who had no diagnosis of depression in 2008) had been newly diagnosed with depression. Further, those who were actively disengaged in their careers in 2008 were nearly twice as likely to be diagnosed with depression over the next year. While there are many factors that contribute to depression, being disengaged at work appears to be a leading indicator of a subsequent clinical diagnosis of depression.

Which is obviously bad for employer and employee alike.

In some cases, Gallup went all in with physiological studies. For example, they “recruited 168 employees and studied their engagement, heart rate, stress levels, and various emotions throughout the day,” using heart rate monitors, saliva samples, and handheld devices that surveyed employees on their activities and feelings of the moment at various points in the day.

After reviewing all of these data, it was clear that when people who are engaged in their jobs show up for work, they are having an entirely different experience than those who are disengaged. [Emphasis in original.] For those who were engaged, happiness and interest throughout the day were significantly higher. Conversely, stress levels were substantially higher for those who were disengaged. Perhaps most strikingly, disengaged workers’ stress levels decreased and their happiness increased toward the end of the workday….[P]eople with low engagement…are simply waiting for the workday to end.

From here, the authors go on to talk about depression and heart attacks and all that bad stuff that happens to you when you hate that job. But there was one other striking passage at the beginning of this section:

Think back to when you were in school sitting through a class in which you had very little interest. Perhaps your eyes were fixed on the clock or you were staring blankly into space. You probably remember the anticipation of waiting for the bell to ring so you could get up from your desk and move on to whatever was next. More than two-thirds of workers around the world experience a similar feeling by the end of a typical workday.

And here’s what Dave said in his first post:

Students separate into two categories… those that care and those that don’t care.

Our job, as educators, is to convince students who don’t care to start caring, and to encourage those who currently care, to continue caring.

All kinds of pedagogy happens after this… but it doesn’t happen until this happens.

So. In this case, we’re trying to make students move from the ‘not care’ category to the ‘care’ category by threatening to not allow them to stay with their friends. Grades serve a number of ‘not care to care’ purposes in our system. Your parents may get mad, so you should care. You’ll be embarrassed in front of your friends so you should care. In none of these cases are you caring about ‘learning’ but rather caring about things you, apparently, already care about. We take the ‘caring about learning’ part as a lost cause.

The problem with threatening people is that in order for it to continue to work, you have to continue to threaten them (well… there are other problems, but this is the relevant one for this discussion). And, as has happened, when students no longer care about grades, or their parents believe their low grades are the fault of the teacher, then the whole system falls apart. You can only threaten people with things they care about.

I’m not suggesting that we shouldn’t hold kids accountable, but if we’re trying to encourage people to care about their work, about their world, is it practical to have it only work when someone is threatening them? Even if you are the most cynical personal imaginable, wouldn’t you like people to be able to do things when you aren’t actually threatening them? Are we promoting a ‘creative/knowledge economy’ by doing this? Are we building democracy? Unless you are a fascist (and i really mean that, unless you want a world where a couple of people tell everyone exactly what to do) you can’t really want the world to be this way.

It turns out that Dave actually overstates the case for Fascism. Fascist bosses get bad results from employees (in addition to, you know, killing them). If you want high-performing workers, you need engaged workers. And you can’t force people to engage.

Wellbeing isn’t just about work. It looks at five different types of personal “wellbeing”—career, social, financial, physical, and community—and shows how they are related to each other, to overall wellbeing, and to performance at work and in the world. (By the way, there’s a lot of good stuff in the sections on social and community wellbeing for the connectivists and constructionists in the crowd.)

We Don’t Need No Education

The Gallup Purdue Index Report picks up where Wellbeing leaves off. Having established some metrics that correlate both with overall personal happiness and success as well as workplace success, Gallup backs up and asks the question, “What kind of education is more likely to promote wellbeing?” They surveyed a number of college graduates in various age groups and with various measured levels of wellbeing, asking them to reflect back on their college experiences. What they didn’t find is in some ways as important as what they did find. They found no correlation between whether you went to a public or private, selective or non-selective school and whether you achieved high levels of overall wellbeing. It doesn’t matter, on average, whether you go to Harvard University or Podunk College. It doesn’t matter whether your school scored well in the U.S. News and World Report rankings. Student debt levels, on the other hand, do matter, so maybe that Harvard vs. Podunk choice matters after all. And, in a finding that will cheer my philosophy professors, it turns out that “[s]lightly more employed graduates who majored in the arts and humanities (41%) and social sciences (41%) are engaged at work than either science (38%) or business (37%) majors.”

What factors did matter? What moved the needle? Odds of thriving in all five areas of Gallup’s wellbeing index were

  • 1.7 times higher if “I had a mentor who encouraged me to pursue my goals and dreams”
  • 1.5 times higher if “I had at least one professor at [College] who made me excited about learning”
  • 1.7 times higher if “My professors at [College] cared about me as a person”
  • 1.5 times higher if “I had an internship or job that allowed me to apply what I was learning in the classroom”
  • 1.1 times higher if “I worked on a project that took a semester or more to complete”
  • 1.4 times higher if “I was extremely active in extracurricular activities and organizations while attending [College]”

Again, the institution type didn’t matter (except for students who went to for-profit private colleges, only 4% of whom were found to be thriving on all five measures of wellbeing). It really comes down to feeling connected to your school work and your teachers, which does not correlate well with the various traditional criteria people use for evaluating the quality of an educational institution. If you buy Gallup’s chain of argument and evidence, this, in turn, suggests that being a hippy-dippy earthy-crunchy touchy-feely constructivy-connectivy commie pinko guide on the side will produce more productive workers and a more robust economy (not to mention healthier, happier human beings who get sick less and therefore keep healthcare costs lower) than being a hard-bitten Taylorite-Skinnerite practical this-is-the-real-world-kid type career coach. It turns out that pursuing your dreams is a more economically productive strategy, for you and your country, than pursuing your career. It turns out that learning a passion to learn is more important for your practical success than learning any particular facts or skills. It turns out that it is more important to know whether there will be weather than what the weather will be.

So…what do we do with all this ed tech junk we just bought?

This doesn’t mean that ed tech is useless by any means, but it does mean that we have to think about what we use it for and what it can realistically accomplish. Obviously, anything that helps teachers and advisers connect with students, students connect with each other, or students connect with their passions is good. There’s also nothing inherently wrong with video lectures or adaptive learning programs as long as they are used as informational supplements once students start caring about what they learn or as tools to keep them caring about what they learn rather than substitutes for real engagement that shovel content in the name of “competency.” I’m interested in “flipping,” fad or no fad, because it emphasizes using the technology to clear the way for more direct human-to-human interactions with the students. Competencies themselves should be used more as markers of progress down a road that the student has chosen to travel rather than a set of hoops that the student must jump through (like a trained dog). Another thing that technologies can do is help students with what may be the only prerequisite to having passion to learn, which is believing that you can learn. In the places where I’ve seen adaptive learning software employed to most impressive effect, it has been in concert with outreach and support designed to help students who never learned to believe in themselves discover that they can, in fact, make progress in their education. Well-designed adaptive software lets them get help without feeling embarrassed and, perhaps more importantly, enables them to arrive at a confidence-building feeling of success and accomplishment quickly.

The core problem with our education system isn’t the technology or even the companies. It’s how we deform teaching and learning in the name of accountability in education. Corporate interests amplify this problem greatly because they sell to it, thus reinforcing it. But they are not where the problem begins. It begins when we say, “Yes, of course we want the students to love to learn, but we need to cover the material.” Or when we say, “It’s great that kids want to go to school every day, but really, how do we know that they’re learning anything?” It’s daunting to think about trying to change this deep cultural attitude. Nor does embracing Gallup’s train of evidence fully get us out of the genuine moral obligation to find some sort of real (but probably inherently deforming) measure of accountability for schools. But the most interesting and hopeful result from the Gallup research is this:

You don’t have to have every teacher make you feel excited about learning in order to have a better chance at a better life. You just need one.

Just one.


Four Secrets of Success

FeuerThoughts - Thu, 2015-01-01 09:56
More than a few people think that I am pretty good at what I do, that I am successful. I respect their judgement, so I thought about what has contributed to my success and came up with four things that form a foundation for (my) success. Since it is possible that others will find them helpful, I have decided to share my Four Secrets of Success (book in the works, film rights sold to Branjolina Films).
Follow these four recommendations, and you will be more successful in anything and everything you seek to accomplish.
1. Drink lots of water.
If you are dehydrated, nothing about you is operating optimally. By the time you realize you are thirsty, you are depleted. You are tired and listless. You think about getting another cup of coffee but your stomach complains at the thought.
No problem. Just get yourself a big glass of water, room temperature, no ice, and drink it down. You will feel the very substance of life trickle into your body and bring you back to life. Then drink another glass.
Couldn’t hurt to try, right?
2. Work your abs.
What they say about a strong core? It’s all true. Strengthen your abdominal muscles and you will be amazed at the change in your life. I vouch for it from my own experience. 
I’m not talking about buying an Ab-Roller or going nuts with crazy crunches. Just do something every day, and see if you can do a little more every day. 
Couldn’t hurt to try, right?
3. Go outside. 
Preferably amongst trees, in a forest. 
We did not evolve to sit in front of a screen, typing. Our bodies do not like what we force them to do. Go outside and you will make your body happy. And seeing how your brain is inside your body, it will make you happy, too. Then when you get back to the screen, you will be energized, creative and ready to solve problems.
Couldn’t hurt to try, right?
How do I know these three things will make a difference? Because whenever I stop doing any of them for very long, I start to feel bad, ineffective, unfocused. 
Oh, wait a minute. I said “Four Secrets of Success”. So there’s one more. This one’s different from the others. The above three are things I suggest you do. Number Four is, in contrast, something I suggest you stop doing:
4. Turn off your TV.
By which I mean: stop looking at screens for sources of information about the world. Rely on direct experience as much as possible.
Not only is television bad for humans physically, but you essentially turn off your brain when you watch it. If, instead, you turn off the TV, you will find that you have more time (objectively and subjectively) to think about things (and go outside, and work your abs, and...).
Couldn’t hurt to try, right?
Well, actually, you might find it kind of painful to turn off your TV. It depends on how comfortable you are living inside your own mind. 
And if you are not comfortable, well, how does that make you feel?
Wishing you the best in 2015,
Steven Feuerstein
Categories: Development

Oracle Advanced Procurement

OracleApps Epicenter - Thu, 2015-01-01 06:55
Oracle Advanced Procurement is an integrated suite of software that dramatically cuts all supply management costs. It adapts to your purchasing processes, supporting any combination of procurement models. It leverages Oracle’s extensive applications capabilities, robust development and operating platform, and award-winning global support. Thousands of companies in diverse industries—including professional services, government, asset-intensive sectors, and […]
Categories: APPS Blogs

Notes on machine-generated data, year-end 2014

DBMS2 - Wed, 2014-12-31 21:49

Most IT innovation these days is focused on machine-generated data (sometimes just called “machine data”), rather than human-generated. So as I find myself in the mood for another survey post, I can’t think of any better idea for a unifying theme.

1. There are many kinds of machine-generated data. Important categories include:

  • Web, network and other IT logs.
  • Game and mobile app event data.
  • CDRs (telecom Call Detail Records).
  • “Phone-home” data from large numbers of identical electronic products (for example set-top boxes).
  • Sensor network output (for example from a pipeline or other utility network).
  • Vehicle telemetry.
  • Health care data, in hospitals.
  • Digital health data from consumer devices.
  • Images from public-safety camera networks.
  • Stock tickers (if you regard them as being machine-generated, which I do).

That’s far from a complete list, but if you think about those categories you’ll probably capture most of the issues surrounding other kinds of machine-generated data as well.

2. Technology for better information and analysis is also technology for privacy intrusion. Public awareness of privacy issues is focused in a few areas, mainly:

  • Government snooping on the contents of communications.
  • Communication traffic analysis.
  • Photos and videos (airport scanners, public cameras, etc.)
  • Commercial ad targeting.
  • Traditional medical records.

Other areas, however, continue to be overlooked, with the two biggies in my opinion being:

  • The potential to apply marketing-like psychographic analysis in other areas, such as hiring decisions or criminal justice.
  • The ability to track people’s movements in great detail, which will be increased greatly yet again as the market matures — and some think this will happen soon — for consumer digital health.

My core arguments about privacy and surveillance seem as valid as ever.

3. The natural database structures for machine-generated data vary wildly. Weblog data structure is often remarkably complex. Log data from complex organizations (e.g. IT shops or hospitals) might comprise many streams, each with a different (even if individually simple) organization. But in the majority of my example categories, record structure is very simple and repeatable. Thus, there are many kinds of machine-generated data that can, at least in principle, be handled well by a relational DBMS …

4. … at least to some extent. In a further complication, much machine-generated data arrives as a kind of time series. Many (but not all) time series call for a strong commitment to event-series styles of analytics. Event series analytics are a challenge for relational DBMS, but Vertica and others have tried to step up with various kinds of temporal predicates or datatypes. Event series are also a challenge for business intelligence vendors, and a potentially significant driver for competitive rebalancing in the BI market.
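To give a flavour of what event-series analytics can look like when a relational DBMS does step up, here is a minimal sketch using Oracle 12c's MATCH_RECOGNIZE clause; the ticker table and its columns are hypothetical, and Vertica's event-series extensions address similar needs with different syntax.

-- Find V-shaped price movements per symbol: a fall followed by a rise.
-- Hypothetical table: ticker(symbol, tstamp, price).
SELECT *
FROM ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES FIRST(tstamp) AS decline_start,
           LAST(tstamp)  AS recovery_end
  ONE ROW PER MATCH
  PATTERN (strt down+ up+)
  DEFINE
    down AS price < PREV(price),  -- this event's price fell vs. the prior event
    up   AS price > PREV(price)   -- this event's price rose vs. the prior event
);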

5. Event series even aside, I wish I understood more about business intelligence for non-tabular data. I plan to fix that.

6. Streaming and memory-centric processing are closely related subjects. What I wrote recently about them for Hadoop still applies: Spark, Kafka, etc. is still the base streaming case going forward; Storm is still around as an alternative; Tachyon or something like it will change the game somewhat. But not all streaming machine-generated data needs to land in Hadoop at all. As noted above, relational data stores (especially memory-centric ones) can suffice. So can NoSQL. So can Splunk.

Not all these considerations are important in all use cases. For one thing, latency requirements vary greatly. For example:

  • High-frequency trading is an extreme race; microseconds matter.
  • Internet interaction applications increasingly require data freshness to the last click or other user action. Computational latency requirements can go down to the single-digit milliseconds. Real-time ad auctions have a race aspect that may drive latency lower yet.
  • Minute-plus response can be fine for individual remote systems. Sometimes they ping home more rarely than that.

There’s also still plenty of true batch mode, but — and I say this as part of a conversation that’s been underway for over 40 years — interactive computing is preferable whenever feasible.

7. My views about predictive analytics are still somewhat confused. For starters:

  • The math and technology of predictive modeling both still seem pretty simple …
  • … but sometimes achieve mind-blowing results even so.
  • There’s a lot of recent innovation in predictive modeling, but adoption of the innovative stuff is still fairly tepid.
  • Adoption of the simple stuff is strong in certain market sectors, especially ones connected to customer understanding, such as marketing or anti-fraud.

So I’ll mainly just link to some of my past posts on the subject, and otherwise leave discussion of predictive analytics to another day.

Finally, back in 2011 I tried to broadly categorize analytics use cases. Based on that, and also on some points I just raised above, I’d say that a ripe area for breakthroughs is problem and anomaly detection and diagnosis, specifically for machines and physical installations, rather than in the marketing/fraud/credit score areas that are already going strong. That’s an old discipline; the concept of statistical process control dates back to before World War II. Perhaps such breakthroughs are already underway; the Conviva retraining example is certainly imaginative. But I’d like to see a lot more in the area.

Even more important, of course, could be some kind of revolution in predictive modeling for medicine.

Categories: Other

Learning about the Oracle Cloud and the new Alta skin.

Eric Rajkovic - Wed, 2014-12-31 19:36
I had a conversation with +David Haimes on twitter the other day about blogging, and realized I have not been blogging for a long long time.

While I can find information about almost everything with Google or Stack Overflow, I realized that it's sometimes hard to find the one piece of information which is relevant for you, in your current context.

It usually ends up being a compilation of multiple blog posts or answers; it's the curated content which brings most of the $$$ value.

As I go through the process to discover how to use the Oracle public cloud, I am going to blog about the learning process, here.

The first piece of advice I'll share is this one:

For any sample you are using, start with Git
Here is why:

  1. Once you have a working version, commit and push to the remote.

    It gives you the following benefits: it is easy to share with your peers; you have a backup for 'free'; and in case the next set of changes breaks something in a bad way, rolling back is a breeze.
  2. Once you have a broken version, rollback is cheap - see above ;)
  3. Once you are happy with your work, it's ready for others to view and use.

    No extra step is required, and others may even find it valuable before you consider it done.
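A minimal sketch of that workflow at the command line (the repository name is made up for illustration):

# one-time setup for a new sample project
git init
git remote add origin https://github.com/erajkovic/my-sample.git

# once you have a working version: commit and push to the remote
git add .
git commit -m "first working version of the sample"
git push -u origin master

# once you have a broken version: rolling back is cheap
git reset --hard HEAD~1   # step the branch back to the last good commit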
Here is my new GitHub account where I will start to follow the same rule in the coming days: https://github.com/erajkovic
The specific blog post I wanted to reference today is this one: http://markchensblog.blogspot.com/2014/05/develop-and-deploy-jersey-restful.html
It's a great post, and I do not need to replicate its content here.
The only issue I had was that jersey-bundle-1.9.war was not present in the distribution I used to deploy to my cloud instance - I was using JDeveloper 11.1.1.7.1 and I could only find jersey-bundle-1.1.5.1.war in my local wlserver_10.3\common\deployable-libraries folder.
The fix is also trivial - get a newer version and install it as documented in the blog post referenced above.

Today, the sample I'd love to see in a Git repository - so I can scan the code to learn how it was built before I commit to it and run the sample on my local instance - is the Oracle Alta demo.

There is a new code sample available on OTN for Alta skin - source: http://t.co/kZ0794V4R1 (or http://www.oracle.com/technetwork/developer-tools/jdev/index-098948.html#alta) It would have been nice to get it from https://github.com/oracle/Oracle-Cloud (or some other repo).

As I am proofreading my post, I realize that this is the second piece of advice I would have loved to get from someone more knowledgeable - my mentor - about the Oracle public cloud development model.

It gives me an easy follow-up post for next year.

Fun : 2015 !

Jean-Philippe Pinte - Wed, 2014-12-31 17:01
Wishing you an excellent 2015!

Oracle Fusion Middleware EMEA Partner Community Forum 2015

Take this opportunity and register now for the Oracle Fusion Middleware Partner Community Forum XX in Budapest on March 3rd & 4th 2015, with hands-on training delivered on March...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Dynamically add components to an Oracle MAF AMX page & show and hide components

Shay Shmeltzer - Wed, 2014-12-31 14:18

A question I saw a couple of times about Oracle MAF AMX pages is "how can I add a component to the page at runtime?".

In this blog entry I'm going to show you a little trick that will allow you to dynamically "add" components to an AMX page at runtime, even though right now there is no API that allows you to add a component to an AMX page by coding.

Let's suppose you want to add a bunch of buttons to a page at runtime. All you'll need is an array that contains an entry for every button you want to add to the page.

We are going to use the amx:iterator component, which is bound to the above array and simply goes over the records and renders a component for each one.

Going one step beyond that, I'm going to show how to control which components from that array actually show up, based on another value in the array.

So this is another thing you get to see in this example: how to dynamically show or hide a component in an AMX page with conditional EL. Usually you'll use this EL in the rendered property of a component, but in the iterator situation we need another approach, using the inlineStyle that you change dynamically.

You can further refine this approach to control which type of component you render - see for example this demo I did for regular ADF Faces apps and apply a similar approach. 

By the way - this demo is done with Eclipse using OEPE - but if you are using JDeveloper it should be just as easy :-) 


Here is the relevant code from the AMX page:

<amx:iterator value="#{bindings.emps1.collectionModel}" var="row" id="i2">
  <amx:commandButton id="c1" text="#{row.name}"
                     inlineStyle="#{row.salary > 4000 ? 'display: none;' : 'display: inline;'}">
    <amx:setPropertyListener id="s1" from="#{row.name}" to="#{viewScope.title}"/>
  </amx:commandButton>
</amx:iterator>

Categories: Development

Happy New Year 2015

Senthil Rajendran - Wed, 2014-12-31 12:38

Wish every one a Happy New Year 2015.

UKOUG Annual Conference (Tech 2014 Edition)

Andrew Clarke - Wed, 2014-12-31 12:37
The conference

This year the UKOUG's tour of Britain's post-industrial heritage brought the conference to Liverpool. The Arena & Convention Centre is based in Liverpool docklands, formerly the source of the city's wealth and now a touristic playground of museums, souvenir shops and bars. Still at least the Pumphouse functions as a decent pub, which is one more decent pub than London Docklands can boast. The weather was not so much cool in the 'Pool as flipping freezing, with the wind coming off the Mersey like a chainsaw that had been kept in a meat locker for a month. Plus rain. And hail. Which is great: nothing we Brits like more than moaning about the weather.

After last year's experiment with discrete conferences, Apps 2014 was co-located with Tech 2014; each was still a separate conference with its own exclusive agenda (and tickets) but with shared interests (Exhibition Hall, social events). Essentially DDD's Bounded Context pattern. I'll be interested to know how many delegates purchased the Rover ticket which allowed them to cross the borders. The conferences were colour-coded, with the Apps team in Blue and the Tech team in Red; I thought this was an, er, interesting decision in a footballing city like Liverpool. Fortunately the enforced separation of each team's supporters kept violent confrontation to a minimum.

The sessions

This is not all of the sessions I attended, just the ones I want to comment on.

There's no place like ORACLE_HOME

I started my conference by chairing Niall Litchfield's session on Monday morning. Niall experienced every presenter's nightmare: switch on the laptop, nada, nothing, completely dead. Fortunately it turned out to be the fuse in the charger's plug, and a marvellous tech support chap was able to find a spare kettle cable. Niall coped well with the stress and delivered a wide-ranging and interesting introduction to some of the database features available to developers. It's always nice to hear a DBA say how difficult the task of developers is these days. I'd like to hear more acknowledge it, and more importantly be helpful rather than becoming part of the developer's burden :)

The least an Oracle DBA needs to know about Linux

Turns out "the least" is still an awful lot. Martin Nash started with installing a distro and creating a file system, and moved on from there. As a developer I find I'm rarely allowed OS access to the database server these days; I suspect many enterprise DBAs also spend most of their time in OEM rather than at a shell prompt. But Linux falls into that category of things which, when you need to know them, you need to know them in the worst possible way. So Martin has given me a long list of commands with which to familiarize myself.

Why solid SQL still delivers the best performance

Robyn Sands began her session with the shocking statement that the best database performance requires good application design. Hardware improvements won't save us from the consequences of our shonky code. From her experience in Oracle's Real World Performance team, the top three causes of database slowness are:
  • People not using the database the way it was designed to be used
  • Sub-optimal architecture or code
  • Sub-optimal algorithm (my new favourite synonym for "bug")

The bulk of her session was devoted to some demos, racing different approaches to DML:
  • Row-by-row processing
  • Array (bulk) processing
  • Manual parallelism i.e. concurrency
  • Set-based processing i.e. pure SQL (see the sketch below)
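To make the race concrete, here is a minimal sketch of the first and last contenders, copying rows from one table to another; the table names are hypothetical.

-- Row-by-row processing: a round trip per row, slow at scale.
BEGIN
  FOR r IN (SELECT * FROM source_table) LOOP
    INSERT INTO target_table VALUES r;
  END LOOP;
  COMMIT;
END;
/

-- Set-based processing: one SQL statement does all the work.
INSERT INTO target_table
SELECT * FROM source_table;
COMMIT;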
There was a series of races, starting with a simple copying of data from one table to another and culminating in a complex transformation exercise. If you have attended any Oracle performance session in the last twenty years you'll probably know the outcome already, but it was interesting to see how much faster pure SQL was compared to the other approaches. In fact the gap between the set-based approach and the row-based approach widened with each increase in complexity of the task. What probably surprised many people (including me) was how badly manual parallelism fared: concurrent threads have a high impact on system resource usage, because of things like index contention.

Enterprise Data Warehouse Architecture for Big Data

Dai Clegg was at Oracle for a long time and has since worked for a couple of startups which used some of the new-fangled Big Data/NoSQL products. This mix of experience has given him a breadth of insight which is not common in the Big Data discussion.

His first message is one of simple economics: these new technologies solve the problem of linear scale-out at a price-point below that of Oracle. Massively parallel programs using cheap or free open source software on commodity hardware. Commodity hardware is more failure prone than enterprise tin (and having lots of the blighters actually reduces the MTTF) but these distributed frameworks are designed to handle node failures; besides, commodity hardware has gotten a lot more reliable over the years. So, it's not that we couldn't implement most Big Data applications using relational databases, it's just cheaper not to.

Dai's other main point addressed the panoply of products in the Big Data ecosystem. Even in just the official Hadoop stack there are lots of products with similar or overlapping capabilities: do we need Kafka or Flume or both? There is no one Big Data technology which is cheaper and better for all use cases. Therefore it is crucial to understand the requirements of the application before starting on the architecture. Different applications will demand different permutations from the available options. Properly defined use cases (which don't need to be heavyweight - Dai hymned the praises of the Agile-style "user story") will indicate which kinds of products are required. Organizations are going to have to cope with heterogeneous environments. Let's hope they save enough on the licensing fees to pay for the application wranglers.

How to write better PL/SQL

After last year's fiasco with shonky screen rendering and failed demos I went extremely low tech: I carried my presentation as a PDF on a thumb-drive. Fortunately that wasn't necessary. My session was part of the Beginners' Track: I'm not sure how many people in the audience were actual beginners; I hope the grizzled veterans got something out of it.

One member of the audience turned out to be a university lecturer; he was distressed by my advice to use pure SQL rather than PL/SQL whenever possible. Apparently his students keep doing this and he has to tell them to use PL/SQL features instead. I'm quite heartened to hear that college students are familiar with the importance of set-based programming. I'm even chuffed to have my prejudice confirmed that it is university lecturers who teach people to write what is bad code in the real world. I bet he tells them to use triggers as well :)

Oracle Database 12c New Indexing Features

I really enjoy Richard Foote's presenting style: it is breezily Aussie in tone, chatty and with the occasional mild cuss word. If anybody can make indexes entertaining it is Richard (and he did).

His key point is that indexes are not going away. Advances in caching and fast storage will not remove the need for indexed reads, and the proof is Oracle's commitment to adding further capabilities. In fact, there are so many new indexing features that Richard's presentation was (for me) largely a list of things I need to go away and read about. Some of these features are quite arcane: an invisible index? on an invisible column? Hmmmm. I'm not sure I understand when I might want to implement partial indexing on a partitioned table. What I'm certain about is that most DBAs these days are responsible for so many databases that they don't have the time to acquire the requisite understanding of individual applications and their data; so it seems to me unlikely that they will be able to decide which partitions need indexing. This is an optimization for the consultants.

Make your data models sing

It was one of the questions in the Q&A section of Susan Duncan's talk which struck me. The questioner talked about their "legacy" data warehouse. How old did that make me feel? I can remember when Data Warehouses were new and shiny and going to solve every enterprise's data problems.

The question itself dealt with foreign keys: as is a common practice, the data warehouse had no defined foreign keys. Over the years it had sprawled across several hundred tables, without the documentation keeping up. Is it possible, the petitioner asked, to reverse engineer the data model with foreign keys in the database? Of course the short answer is No. While it might be possible to infer relationships from common column names, there isn't any tool we were aware of which could do this. Another reminder that disabled foreign keys are better than no keys at all.

Getting started with JSON in the Database

Marco Gralike has a new title: he is no longer Mr XMLDB, he is now Mr Unstructured Data in the DB. Or at least his bailiwick has been extended to cover JSON. JSON (JavaScript Object Notation) is a lightweight data transfer mechanism: basically it's XML without the tags. All the cool kids like JSON because it's the basis of RESTful web interfaces. Now we can store JSON in the database (which probably means all the cool kids will wander off to find something else now that fusty old Oracle can do it).
The biggest surprise for me is that Oracle haven't introduced a JSON data type (apparently there were so many issues around the XMLType nobody had the appetite for another round). So that means we store JSON in VARCHAR2, CLOB, BLOB or RAW. But like XML there are operators which allow us to include JSON documents in our SQL. The JSON dot notation works pretty much like XPath, and we can use it to build function-based indexes on the stored documents. However, we can't (yet) update just part of a JSON doc: it is wholesale replacement only.
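To make that concrete, here is a minimal sketch of the 12c approach; the table and document shape are made up for illustration.

-- JSON lives in an ordinary column, flagged with an IS JSON check constraint.
CREATE TABLE orders (
  id   NUMBER PRIMARY KEY,
  doc  VARCHAR2(4000) CONSTRAINT orders_doc_json CHECK (doc IS JSON)
);

INSERT INTO orders VALUES (1, '{"customer":"Smith","total":42.5}');

-- The dot notation works pretty much like XPath.
SELECT o.doc.customer FROM orders o;

-- A function-based index on one attribute of the stored documents.
CREATE INDEX orders_customer_ix ON orders (JSON_VALUE(doc, '$.customer'));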

Error handling is cute: by default, invalid JSON syntax in a query produces null in the result set rather than an exception. Apparently that's how the cool kids like it. For those of us who prefer our exceptions hurled rather than swallowed, there is an option to override this behaviour.

SQL is the best development language for Big Data

This was Tom Kyte giving the obverse presentation to Dai Clegg: Oracle can do all this Big Data stuff, and has been doing it for some time. He started with two historical observations:
  • XML data stores were going to kill off relational databases. Which didn't happen.
  • Before relational databases and SQL there was NoSQL, literally no SQL. Instead there were things like PL/1, which was a key-value data store.
Tom had a list of features in Oracle which support Big Data applications. They were:
  • Analytic functions which have enabled ordered array semantics in SQL since the last century.
  • SQL Developer's support for Oracle Data Mining.
  • The MODEL clause (for those brave enough to use it).
  • Advanced pattern matching with the MATCH RECOGNIZE clause in 12c
  • External tables with their support for extracting data from flat files, including from HDFS (with the right connectors)
  • Support for JSON documents (see above).
He could also have discussed document storage with XMLType and Oracle Text, Enterprise R, In-Memory columnar storage, and so on. We can even do Map/Reduce in PL/SQL if we feel so inclined. All of these are valid assertions; the problem is (pace Dai Clegg) simply one of licensing. Too many of the Big Data features are chargeable extras on top of Enterprise Edition licenses. Big Data technology is suited to a massively parallel world where all processors are multi-core, and Oracle's licensing policy isn't.

Five hints for efficient SQL

This was an almost philosophical talk from Jonathan Lewis, in which he explained how he uses certain hints to fix poorly performing queries. The optimizer takes a left-deep approach, which can lead to a bad choice of transformation, bad estimates (but check your stats as well!) and bad join orders. His strategic solution is to shape the query with hints so that Oracle's execution plan meets our understanding of the data.

So his top five hints are:
  • (NO_)MERGE
  • (NO_)PUSH_PRED
  • (NO_)UNNEST
  • (NO_)PUSH_SUBQ
  • DRIVING_SITE

Jonathan calls these strategic hints, because they advise the optimizer how to join tables or how to transform a sub-query. They don't hard-code paths in the way that, say, the INDEX hint does.
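For anyone who hasn't met these, here is a minimal sketch of one of them in action; the tables are hypothetical. NO_MERGE tells the optimizer not to merge the inline view into the outer query, so the aggregate is computed first and then joined.

SELECT /*+ NO_MERGE(v) */
       d.dept_name, v.avg_sal
FROM   departments d
JOIN   (SELECT dept_id, AVG(salary) AS avg_sal
        FROM   employees
        GROUP  BY dept_id) v
ON     v.dept_id = d.dept_id;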

Halfway through the presentation Jonathan's laptop slid off the lectern and slammed onto the stage floor. End of presentation? Luckily not. Apparently his laptop is made of the same stuff they use for black box flight recorders, because after a few anxious minutes it rebooted successfully and he was able to continue with his talk. I was struck by how unflustered he was by the situation (even though he didn't have a backup due to last minute tweaking of the slides). A lovely demonstration of grace under pressure.

Oracleinaction.com in 2014 : A review

Oracle in Action - Wed, 2014-12-31 10:17


The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 320,000 times in 2014 with an average of 879 page views per day. If it were an exhibit at the Louvre Museum, it would take about 14 days for that many people to see it.

The busiest day of the year was December 1st with 1,656 views. The most popular post that day was ORACLE CHECKPOINTS.

These are the posts that got the most views on ORACLE IN ACTION in 2014.

The blog was visited by readers from 194 countries in all!
Most visitors came from India. The United States & U.K. were not far behind.

Thanks to all the visitors.

Keep visiting and giving your valuable feedback.

Wish you all a Very Happy New Year 2015  !!!!!




Categories: DBA Blogs

uhesse.com in 2014 – a Review

The Oracle Instructor - Wed, 2014-12-31 09:56
Better than ever

2014 brought a new high water mark with 282,000 hits!

Meanwhile, uhesse.com gets over 1,000 hits per day on average.

Who are the referrers?

The vast majority of visitors came to uhesse.com via google searches: 163,000

On a distant second place is Twitter: 2,500 visitors came from there

Facebook and LinkedIn each lead 800 visitors to uhesse.com

400 visitors came from jonathanlewis.wordpress.com – thank you, Jonathan!

Where did uhesse.com refer to?

As a good employee, I sent 1,500 visitors to education.oracle.com and 1,300 to oracle.com :-)

About 600 each went to jonathanlewis.wordpress.com and blog.tanelpoder.com, while richardfoote.wordpress.com got 350 visitors

Nationality of the visitors

They came from almost everywhere, the full list contains 200 countries in total – isn’t the internet a great thing?

Visitors of uhesse.com by nationality

Thank you all for visiting – I hope you come back in 2015 again :-)


Categories: DBA Blogs

Top 10 Rittman Mead Blog Posts from 2014

Rittman Mead Consulting - Wed, 2014-12-31 09:27

It’s the afternoon of New Year’s Eve over in the UK, so to round the year off here’s the top 10 blog posts from 2014 from the Rittman Mead blog, based on Google Analytics stats (page views for 2014 in brackets, only includes articles posted in 2014)

  1. Using Sqoop for Loading Oracle Data into Hadoop on the BigDataLite VM – Mark Rittman, March 22, 2014 (8466)
  2. OBIEE Dashboard prompt: at least one mandatory – Gianni Ceresa, March 17th 2014 (7683)
  3. Thoughts on Using Amazon Redshift as a Replacement for an Oracle Data Warehouse – Peter Scott, February 20th 2014 (6993)
  4. The Secret Life of Conditional Formatting in OBIEE – Gianni Ceresa, March 26th 2014 (5606)
  5. Trickle-Feeding Log Files to HDFS using Apache Flume – Mark Rittman, May 18th 2014 (5494)
  6. The State of the OBIEE11g World as of May 2014 – Mark Rittman, May 12th 2014 (4932)
  7. Date formatting in OBIEE 11g – setting the default Locale for users  – Robin Moffatt, February 12th 2014 (4840)
  8. Automated Regression Testing for OBIEE – Robin Moffatt, Jan 23rd 2014 (4040)
  9. OBIEE 11.1.1.7, Cloudera Hadoop & Hive/Impala Part 2 : Load Data into Hive Tables, Analyze using Hive & Impala – Mark Rittman, Jan 18th 2014 (3439)
  10. Introduction to Oracle BI Cloud Service : Product Overview – Mark Rittman, Sep 22nd 2014 (3190)

In all, the blog in one form or another has been going for 10 years now, and our most popular post of all time over the same period is Robin Moffatt’s “Upgrading OBIEE to 11.1.1.7” – well done Robin. To everyone else, have a Happy New Year and a prosperous 2015, and see you next year when it all starts again!

Categories: BI & Warehousing

Is your disaster recovery plan a disaster?

Chris Foot - Wed, 2014-12-31 08:14

Transcript

Hi, welcome to RDX. You may think your disaster recovery strategy is rock solid, but is it as comprehensive as you would like it to be? Are you leaving any factors out of the equation?

Dimension Research recently conducted a survey of 453 IT and security pros based in the U.S. and Canada. The group discovered 79 percent of respondents experienced a major IT blackout within the past two years. Of those participants, only 7 percent felt confident in their ability to deploy recovery strategies within two hours of an incident.

To ensure information is transferred to functional facilities in the event of a disaster, enterprises would benefit from collaborating with remote DBAs. These professionals can help detail every aspect of the DR initiative and outline how continuity can be maintained.

Thanks for watching!


Taking it to the hackers: Going on the offensive?

Chris Foot - Wed, 2014-12-31 08:04

Transcript

Hi, welcome to RDX! Firewalls, intrusion detection systems and database access security are all necessary for protecting information. However, some professionals are saying businesses could be doing more to deter hackers.

For example, why not make it difficult for them to infiltrate systems? Amit Yoran, a former incident response expert at the U.S. Department of Defense, believes data analysis programs must be leveraged to not only identify threats, but map out sequences of events.

Once complex infiltration strategies are understood, embedded database engines can deploy counter-attacks that exploit hackers' vulnerabilities. This allows organizations to effectively dismantle complex infiltration endeavors while enabling them to reinforce existing defenses.

Thanks for watching! For more advice on database security, be sure to check in!


Happy New Year 27, 104, 2015 and 2558

Yann Neuhaus - Wed, 2014-12-31 06:25

calendar          today  tomorrow  message
----------------- ------ --------- --------------
Arabic Hijrah     1436   1436
English Hijrah    1436   1436
Gregorian         2014   2015      Happy New Year
Japanese Imperial 0026   0027      Happy New Year
Persian           1393   1393
ROC Official      0103   0104      Happy New Year
Thai Buddha       2557   2558      Happy New Year

I have an idea for an app, what's next?

Bradley Brown - Tue, 2014-12-30 18:44
This is a question that I get asked quite often.  My first thoughts are:

1. An app isn't necessarily a business.
2. Can you generate enough revenue to pay for the development?
3. There is usually more to an app than just the app.
4. Which platforms?

I thought I'd take this opportunity to address each of these points in more detail.  Before I do this, I think it's important to say that I don't consider myself to be the world's leading authority on apps, so I should explain why I get asked this question.

In 2008, when I saw my first Android phone, I was very intrigued by the ability to write an app for a phone.  I had this thought - what if I could develop an app that I would sell and that paid for my lunch every day?  How cool would that be?  I was a Java developer (not a great one, but I could write Java code) and the Android devices used a Java development stack as their base.  So the learning curve wasn't huge for me.  More importantly, I only had to invest time, not money, to write an app.

I was very much into Web Services at the time and Yahoo had (still has actually) some really cool Web Services that are available for public use.  These are based on what they call YQL (Yahoo Query Language) and since I'm a SQL (Structured Query Language) guy at heart, YQL and Web Services were right up my alley.

One of the uses of YQL was providing a location and getting all of the local events within a specified radius of that location.  So I thought I should create an app that would allow anyone to find the local events they were interested in.  I created my first "Local Events" app and put it in the market.  Not many people downloaded the app (it wasn't paying for lunch), so I started thinking about how people searched for apps.  I figured they would search for the kinds of events they were interested in - singles, beer, crafting, technical, etc.  So I created "Local Beer Events," "Local Singles Events" and many other apps.

Another YQL search that Yahoo provides is for local businesses - again, based on a specific location.  So my "second" app was centered around local businesses.  Once again, I thought about how people searched for apps, and I created a local Starbucks app, a local Panera app, a local Noodles app, etc.  The downside of this approach was that Starbucks and many others didn't like me using their names in my app names, due to trademark infringement.
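
For context, here is roughly what those YQL lookups looked like.  This is a hedged sketch (local.search was Yahoo's documented table for business searches, but the exact field names here are from memory and may not match Yahoo's schema):

SELECT * FROM local.search
  WHERE query = 'starbucks'
  AND location = 'Denver, CO';

The query was sent to Yahoo's YQL endpoint over HTTP, and the matching businesses came back as XML or JSON for the app to render.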

Back to my story of paying for lunch - I quickly paid for lunch each day, and my goal became to generate $100 a day, then $1,000 a day.  I did generate over $1,000 on many days.  I experimented with pricing and learned a lot.

In the end, all of those apps came off the market...or rather, Google took them off the market for me.  Likely due to my app names, or because I had spammed the market with over 500 apps, or who knows why.

I wrote a book for Apress on the business of writing Android apps, and I spoke at numerous conferences on the topic.

It was at that point that I decided to rethink my app strategy.  What could I build that would actually be a business?  Could I charge for the app or did I need to offer an entire service?

So back to my questions above:
1. An app isn't necessarily a business

If you're a developer and you can develop 100% of the app with no costs, this may not apply to you.  Most people have to pay for developers and servers to deploy an app.  A business is typically defined as an entity that makes a profit, and income minus expenses is profit.  What will your income be from your app?  Do people actually pay for apps today?  I believe they do, but not often...i.e. there must be a LOT of value in an app for enough people to pay for it.  Let's say you price your app at $2.  How many copies do you have to sell just to pay for the development?  What about the ongoing support costs?  If you paid $20k to develop the app, you would have to sell 10,000 copies just to "break even."  But you'll also have to support the app, keep it running, upgrade it, etc.  Most apps (like books) never sell 10,000 copies.  So...just creating an app isn't necessarily a business.
2. Can you generate enough revenue to pay for the development?

Like I said above, generating revenue from an app is tough, and paying for the development of the app is tough.  Maybe you can generate revenue in other ways?  Think about this a LOT before you decide to proceed with developing your app.
3. There is usually more to an app than just the app

Most apps aren't standalone apps.  Sure, my "Local Starbucks" app was "standalone" in some regards, but it wasn't in others.  It relied on the Yahoo Web Service to deliver current Starbucks locations.  I had someone approach me about a "Need a Loo" (find a local bathroom in London) app.  They had the data for all of the bathrooms...but the data changed frequently.  Could I have built the app with the data included in it?  Yes, but when the Loo locations changed, I would have had to update the app, which isn't an ideal solution.  So I had to build a database and a Web application that allowed them to maintain Loo locations.  Then I had to build web services that looked up the current Loo locations from the database.  In other words, most apps involve databases, web services and back-end systems to maintain the data.  All of these imply additional costs...which imply additional revenue that must be generated to sustain your business.

I wrote 5 books for Oracle Press on the topics of web applications, web services and the like.  I know how to build the back end of apps - that was the easy part for me!
4. Which platforms?

When you think about an app, you might be thinking of an iPhone app if you have an iPhone or an iPad.  You might be thinking of an Android app if you have an Android phone or tablet.  There are SO many development platforms today: iOS, Android, Mac, Windows, Apple TV, Kindle Fire TV, and literally about 100 more.  There are cross-platform development tools, but they tend to be what I call "least common denominator" solutions.  In other words, they will alienate someone.  If it looks like an iOS (iPhone/iPad) app, it's going to alienate the Android users...or vice versa.  For this reason, native apps are in vogue now.

Every platform is about $30k or more in our world.  Again, all of these are expenses...that must be recouped.
Why InteliVideo?

I thought long and hard about the next generation of apps that I wanted to create.  That's when I determined that I needed to create a business that had apps, not an app that was a business.  The video business was a natural progression for me.  I wanted the ability to sell my educational material (Oracle training) and deliver it in an app.  We have a LOT more than an app.  We have an entire business - that has apps.  So when you think about developing an app, think about the business, not the app.

Oracle Priority Support Infogram End of Year Edition

Oracle Infogram - Tue, 2014-12-30 18:12

Data Integration Tips: ODI – One Data Server with several Physical Schemas

Rittman Mead Consulting - Tue, 2014-12-30 16:25

Yes, I’m hijacking the “Data Integration Tips” series of my colleague Michael Rainey (@mRainey) and I have no shame!

DISCLAIMER
This tip is intended for newcomers to the ODI world and is valid with all versions of ODI. It's nothing new; it has been posted by other authors on different blogs. But I see so many people struggling with it on the ODI Space on OTN that I wanted to explain it in full detail, with all the context and in my own words. So next time I can just post a link to this instead of explaining it from scratch.

The Problem

I’m loading data from a schema to another schema on the same Oracle database but it’s slower than when I write a SQL insert statement manually. The bottle neck of the execution is in the steps from the LKM SQL to SQL. What should I do?

Why does it happen?

Loading Knowledge Modules (LKMs) are used to load data from one Data Server to another. An LKM usually connects to both the source and the target Data Server and executes some steps on each of them. This is required when working with different technologies or different database instances, for example. So if we define two Data Servers to connect to our two database schemas, we will need an LKM.

In this example, I will load a star schema model into the HR_DW schema, using the HR schema from the same database as a source. Let's start with the approach using two Data Servers. Note that here we use the database schemas themselves to connect to our Data Servers.

Two Data Servers connecting to the same database instance, using directly the database schema to connect.

And here are the definitions of the physical schemas :

Physical Schemas

Let’s build a simple mapping using LOCATIONS, COUNTRIES and REGIONS as source to denormalize it and load it into a single flattened DIM_LOCATIONS table. We will use Left Outer joins to be sure we don’t miss any location even if there is no country or region associated. We will populate LOCATION_SK with a sequence and use an SCD2 IKM.

Mapping - Logical tab

If we check the Physical tab, we can see two different Execution Groups. This means the Datastores are in two different Data Servers and therefore an LKM is required. Here I used LKM SQL to SQL (Built-In), which is a quite generic one, not particularly designed for Oracle databases. Performance might be better with a technology-specific KM, like LKM Oracle to Oracle Pull (DB Link). By choosing the right KM we can leverage technology-specific concepts (here, Oracle database links), which often improve performance. But still, we shouldn't need any database link, as everything lies in the same database instance.

Mapping - Physical tab


Another issue is that the temporary objects needed by the LKM and the IKM are created in the HR_DW schema. These objects are the C$_DIM_LOCATIONS table, created by the LKM to bring the data into the target Data Server, and the I$_DIM_LOCATIONS table, created by the IKM to detect whether a new row is needed or an existing row must be updated according to the SCD2 rules. Even though these objects are deleted in the clean-up steps at the end of the mapping execution, it would be better to use another schema for these temporary objects instead of the target schema, which we want to keep clean.

The Solution

If the source and target Physical Schemas are located on the same Data Server (and the technology can execute code), there is no need for an LKM. So it's a good idea to reuse the same Data Server as much as possible for data coming from the same place. In fact, the Oracle documentation about setting up the topology recommends creating an ODI_TEMP user/schema on any RDBMS and using it to connect.

This time, let’s create only one Data Server with two Physical schemas under it and let’s map it to the existing Logical schemas. Here I will use ODI_STAGING name instead of ODI_TEMP because I’m using the excellent ODI Getting Started virtual machine and it’s already in there.

One Data Server with two Physical Schemas under it

As you can see in the Physical Schema definitions, no password is provided to connect as HR or HR_DW directly. At run time, our agent will use only one connection, to ODI_STAGING, and execute all the code through it, even when it needs to populate HR_DW tables. This means we need to be sure that ODI_STAGING has all the required privileges to do so.

Physical schemas

Here are the privileges I had to grant to ODI_STAGING:

GRANT SELECT on HR.LOCATIONS TO ODI_STAGING;
GRANT SELECT on HR.COUNTRIES TO ODI_STAGING;
GRANT SELECT on HR.REGIONS TO ODI_STAGING;

GRANT SELECT, INSERT, UPDATE, DELETE on HR_DW.DIM_LOCATIONS to ODI_STAGING;
GRANT SELECT on HR_DW.DIM_LOCATIONS_SEQ to ODI_STAGING;

Let’s now open our mapping again and go on the physical tab. We now have only one Execution Group and there is no LKM involved. The code generated is a simple INSERT AS SELECT (IAS) statement, selecting directly from the HR schema and loading into the HR_DW schema without any database link. Data is loaded faster and our first problem is addressed.

Mapping - Physical tab without LKM
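
To make that concrete, here is a hedged sketch of the kind of single-statement load that becomes possible (the DIM_LOCATIONS column names are illustrative, and the real IKM-generated code also manages the I$ flow table and the SCD2 housekeeping columns):

INSERT INTO HR_DW.DIM_LOCATIONS (LOCATION_SK, CITY, COUNTRY_NAME, REGION_NAME)
SELECT HR_DW.DIM_LOCATIONS_SEQ.NEXTVAL,
       LOC.CITY,
       CTY.COUNTRY_NAME,
       REG.REGION_NAME
FROM HR.LOCATIONS LOC
LEFT OUTER JOIN HR.COUNTRIES CTY ON LOC.COUNTRY_ID = CTY.COUNTRY_ID
LEFT OUTER JOIN HR.REGIONS REG ON CTY.REGION_ID = REG.REGION_ID;

Everything runs as a single statement inside the database, which is exactly why the manual SQL felt faster than the LKM route.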

Now let’s tackle the second issue we had with temporary objects being created in HR_DW schema. If you scroll upwards to the Physical Schema definitions (or click this link, if you are lazy…) you can see that I used ODI_STAGING as Work Schema in all my Physical Schemas for that Data Server. This way, all the temporary objects are created in ODI_STAGING instead of the source or target schema. Also we are sure that we won’t have any issue with missing privileges, because our agent uses directly ODI_STAGING to connect.

So you can see there are a lot of advantages to using a single Data Server when the sources come from the same place. We get rid of the LKM, and the schema used to connect can also serve as the Work Schema, so we keep the other schemas clean, without any temporary objects.

The only thing you need to remember is to give ODI_STAGING (or ODI_TEMP) the right privileges on all the objects it needs to handle. If your IKM has a step that gathers statistics, you might also want to grant ANALYZE ANY. If you need to truncate a table before loading it, you have two approaches. You can grant DROP ANY TABLE to ODI_STAGING, but this might be a dangerous privilege to hand out in production. A safer way is to create a stored procedure named ODI_TRUNCATE in each target database schema. This procedure takes a table name as a parameter and truncates that table using an EXECUTE IMMEDIATE statement. Then you can grant execute on that procedure to ODI_STAGING and edit your IKM step to call that procedure instead of using the truncate syntax, as sketched below.
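
A minimal sketch of such a procedure, assuming the target schema is HR_DW (the DBMS_ASSERT call is my own addition to guard the dynamic SQL; adapt it to your standards):

CREATE OR REPLACE PROCEDURE HR_DW.ODI_TRUNCATE (p_table_name IN VARCHAR2)
AS
BEGIN
  -- Definer's rights: the procedure runs as HR_DW, which can truncate its own tables.
  -- DBMS_ASSERT.SIMPLE_SQL_NAME rejects values that are not valid simple SQL names.
  EXECUTE IMMEDIATE 'TRUNCATE TABLE HR_DW.'
                    || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name);
END ODI_TRUNCATE;
/

GRANT EXECUTE ON HR_DW.ODI_TRUNCATE TO ODI_STAGING;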


That’s it for today, I hope this article can help some people to understand the reason of that Oracle recommendation and how to implement it. Stay tuned on this blog and on Twitter (@rittmanmead, @mRainey, @markrittman, @JeromeFr, …) for more tips about Data Integration!

Categories: BI & Warehousing

Year-end Updates on e-Literate News Posts

Michael Feldstein - Tue, 2014-12-30 16:11

For my final 2014 post, I thought it would be interesting to provide year-end updates to some news posts on e-Literate over the past year. You'll notice a certain emphasis on negative stories or implications. For most positive stories, companies and institutions are all too happy to send out press releases, with the associated media paraphrasing, so there is little need for us to cover them as news here. The following non-exhaustive list is in date order.

D2L Growth Claims

In December 2013 I described layoffs at Desire2Learn (now officially named D2L). The significance of this story is that it calls into question D2L's growth claims and its trumpeting of a massive new investment of $85 million. Some updates:

IPEDS Data on Online Learning


In early January we covered the new federal data on online learning, eventually breaking out into graphical analysis of Top 20 schools, state-by-state, and sector-by-sector categories. Some updates:

  • Russ Poulin from WCET and I (with some excellent help from WCET researchers) did some further analysis showing “significant confusion over basic definitions of terms, manual gathering of data outside of the computer systems designed to collect data, and, due to confusion over which students to include in IPEDS data, the systematic non-reporting of large numbers of degree-seeking students”.
  • Based on this analysis, NCES essentially responded by saying to ‘follow the damn rules, we’re not changing our approach’ (paraphrase).
  • The new data for the Fall 2013 term should be available in the next week or two, but for reasons listed above, I would be very cautious about forming conclusions for year-over-year changes.
2U’s IPO

In February and March we ran several stories about 2U’s IPO, as it clarified the business side of Online Service Providers and was one of the rare ed tech IPOs. Some updates:

Coursera New CEO and Direction

In March we described Coursera’s hiring of a new CEO – Richard Levin from Yale (and formerly creator of AllLearn and Open Yale Courses). Some updates:

Unizin

In what was probably our biggest “news” story of the year, Michael and I covered the creation and release of the Unizin consortium. Some updates:

  • After we broke the story on May 16th, the Unizin consortium was officially announced on June 11th.
  • As of the end of the year, Unizin has signed up 10 institutions: Colorado State University, the University of Florida, Indiana University, the University of Michigan, Ohio State University, Pennsylvania State, the University of Iowa, the University of Minnesota, the University of Wisconsin-Madison, and Oregon State University. Several of these institutions were listed as potential partners in the original May story. Purdue University, the University of Maryland, the University of Texas, and the University of Utah were also listed in May but have not (yet) joined Unizin.
  • Other than the different list of schools that have joined, substantially all of the original details have been confirmed by later events.
Cal State Online Demise

In July we covered the demise of Cal State Online less than three years after its high-profile kickoff. Some updates:

Problems with University of California UCPath System

In July we covered the delayed $220+ million program to implement a systemwide payroll system that had promised to pay for itself within five years. Some updates:

Kuali 2.0

In August we described the big changes to Kuali – moving development to a for-profit entity – and proclaimed that “community source is dead”. Some updates:

  • The Kuali Student, Kuali Coeus, Kuali Financial System, and Kuali Ready projects have all voted to shift to the new model and run through KualiCo.
  • For Kuali Student, the University of Maryland has signed on as the first institutional partner.
  • Boston College has decided to not go with KualiCo and has issued an RFP with the following purpose:

Boston College was a Kuali Student (KS) partner until the KS Board decision on November 14, 2014 to stop the current development of Kuali Student 1.0 and move to KualiCo. Boston College would like to complete the current development of Kuali Student Enrollment 1.0, under the current ECL license and is seeking a development partner. This would involve taking the latest Kuali Student Enrollment release and building out the required functionality.

Now, on to 2015.

The post Year-end Updates on e-Literate News Posts appeared first on e-Literate.