Feed aggregator

The role of Coherence in Batch

Anthony Shorten - Wed, 2015-07-29 19:48

Lately I have been talking to partners and customers on older versions of the Oracle Utilities Application Framework and they are considering upgrading to the latest version. One of the major questions they ask is about the role of Oracle Coherence in our architecture. Here are some clarifications:

  • We supply, with the installation, the subset of the runtime Oracle Coherence libraries that we use in the batch architecture. This does not require a separate Oracle Coherence license (unless you intend to use Coherence for your own customizations, which would require a license). 
  • We only use a subset of the Oracle Coherence API, covering cluster management and load balancing within the batch architecture. Using the Oracle Coherence Pack within Oracle Enterprise Manager to monitor the batch component is not recommended at the present time: the Coherence Pack will report that components are missing and therefore give erroneous availability information. We have developed our own monitoring API within the framework, which is exposed via the Oracle Application Management Pack for Oracle Utilities.
  • The idea behind the use of Oracle Coherence is as follows:
    • The Batch Architecture uses a Coherence based Cluster. This can be configured to use uni-cast or multi-cast to communicate across the cluster.
    • A cluster has a number of members (also known as nodes to some people). In our case members are threadpools and job threads.
    • A threadpool is basically a running Java Virtual Machine, preloaded with the Oracle Utilities Application Framework, ready to accept work. The reason we use threadpools is that when you execute java processes in Oracle Utilities Application Framework, there is an overhead in memory of loading the framework cache and objects, as well as java itself, before a job can execute. By creating a threadpool, this overhead is minimized and the threadpool can be used across lots of job threads.
    • Threadpools are named (this is important) and have a thread limit. This is a batch limit, not a Java limit, because batch threads are heavier than online threads: batch threads are long running, whereas online threads are typically short running.
    • When a threadpool is started, locally or remotely, it is added to the cluster. A named threadpool can have multiple instances (even on the same machine). The threadpool limit is the sum of the limits across all its instances.
    • When a batch job thread is executed (jobs can be single threaded or multi-threaded) it is submitted to the cluster. Oracle Coherence then load balances those threads across the instances of the threadpool named in the job thread parameters.
    • Oracle Coherence tracks the threadpools and batch job threads so that if any failure occurs then the thread and threadpool are aware. For example, if a threadpool crashes the cluster is made aware and the appropriate action can be taken. This keeps the architecture in synchronization at all times.
  • We have built a wizard (bedit) to help build the properties files that drive the architecture. This covers clusters, threadpools and even individual batch jobs.
  • When building a cluster we tend to recommend the following:
    • Create a cache threadpool per machine to minimize member-to-member network traffic. A cache threadpool does not run jobs; it just acts as a co-ordination point for Oracle Coherence. Without a cache threadpool, each member communicates with every other member, which can generate a lot of network traffic when you have a complex network of members (including lots of active batch job threads).
    • Create an administration threadpool with no threads to execute. This is just a configuration concept where you can connect to the JMX via this member. The JMX API is available from any active threadpool but it is a good practice to isolate JMX traffic from other traffic.
    • Create a pool of threadpools to cover key jobs and other pools for the remaining jobs. The advantage is finer-grained monitoring and control of resources within each JVM.

For more information about this topic and other advice on batch refer to the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

Upgrade your SES Database From 11.2.0.3 to 11.2.0.4 for the PeopleSoft Search Framework

PeopleSoft Technology Blog - Wed, 2015-07-29 17:33
An Oracle database Upgrade from 11.2.0.3 to 11.2.0.4 is available for Secure Enterprise Search (SES) with PeopleSoft.  This document on My Oracle Support provides step by step instructions for performing the upgrade.  Note that this upgrade is available for PeopleTools 8.53 or higher on Unix/Linux environments.

I Wish I Sold More

Cary Millsap - Wed, 2015-07-29 17:26
I flew home yesterday from Karen’s memorial service in Jacksonville, on a connecting flight through Charlotte. When I landed in Charlotte, I walked with all my stuff from my JAX arrival gate (D7) to my DFW departure gate (B15). The walk was more stressful than usual because the airport was so crowded.

The moment I set my stuff down at B15, a passenger with expensive clothes and one of those permanent grins established eye contact, pointed his finger at me, and said, “Are you in First?”

Wai... Wha...?

I said, “No, platinum.” My first instinct was to explain that I had a right to occupy the space in which I was standing. It bothers me that this was my first instinct.

He dropped his pointing finger, and his eyes were no longer interested in me. The big grin diminished slightly.

Soon another guy walked up. Same story: the I’m-your-buddy-because-I’m-pointing-my-finger-at-you thing, and then, “First Class?” This time the answer was yes. “ALRIGHT! WHAT ROW ARE YOU IN?” Row two. “AGH,” like he’d been shot in the shoulder. He holstered his pointer finger, the cheery grin became vaguely menacing, and he resumed his stalking.

One guy who got the “First Class?” question just stared back. So, big-grin guy asked him again, “Are you in First Class?” No answer. Big-grin guy leaned in a little bit and looked him square in the eye. Still no answer. So he leaned back out, laughed uncomfortably, and said half under his breath, “Really?...”

I pieced it together watching this big, loud guy explain to his traveling companions so everybody could hear him, he just wanted to sit in Row 1 with his wife, but he had a seat in Row 2. And of course it will be so much easier to take care of it now than to wait and take care of it when everybody gets on the plane.

Of course.

This is the kind of guy who sells things to people. He has probably sold a lot of things to a lot of people. That’s probably why he and his wife have First Class tickets.

I’ll tell you, though, I had to battle against hoping he’d hit his head and fall down on the jet bridge (I battled coz it’s not nice to hope stuff like that). I would never have said something to him; I didn’t want to be Other Jackass to his Jackass. (Although people might have clapped if I had.)

So there’s this surge of emotions, none of them good, going on in my brain over stupid guy in the airport. Sales reps...

This is why Method R Corporation never had sales reps.

But that’s like saying I’ve seen bad aircraft engines before and so now in my airline, I never use aircraft engines. Alrighty then. In that case, I hope you like gliders. And, hey: gliders are fine if that makes you happy. But a glider can’t get me home from Florida. Or even take off by itself.

I wish I sold more Method R software. But never at the expense of being like the guy at the airport. It seems I’d rather perish than be that guy. This raises an interesting question: is my attitude on this topic just a luxury for me that cheats my family and my employees out of the financial rewards they really deserve? Or do I need to become that guy?

I think the answer is not A or B; it’s C.

There are also good sales people, people who sell a lot of things to a lot of people, who are nothing like the guy at the airport. People like Paul Kenny and the honorable, decent, considerate people I work with now at Accenture Enkitec Group who sell through serving others. There were good people selling software at Hotsos, too, but the circumstances of my departure in 2008 prevented me from working with them. (Yes, I do realize: my circumstances would not have prevented me from working with them if I had been more like the guy at the airport.)

This need for duality—needing both the person who makes the creations and the person who connects those creations to people who will pay for them—is probably the most essential of the founder’s dilemmas. These two people usually have to be two different people. And both need to be Good.

In both senses of the word.

Three Steps to Get Big Data Ready for HR

Linda Fishman Hoyle - Wed, 2015-07-29 13:02

A Guest Post by Melanie Hache-Barrois, Oracle HCM Strategy Director, Southern Europe

Big data will revolutionize HR practices―here is how to hit the ground running with your implementation.

Big data analytics promises to deliver new insights into the workforce; these insights can help HR better predict trends and policy outcomes, and thereby make the right decisions. It has the power to help HR predict and plan organizational performance, minimize the cost, time, and risk of taking on new HR initiatives, and understand, develop, and maintain a productive workforce on a single technology platform, and much more.

Big data analytics has a huge role to play in the future of HR, but it is important that HR teams get prepared in the right way. Here are our tips to make sure that your data is ready for the big data revolution.


1.  Remove data ‘islands’

The first step is to identify what kind of data you need for a truly successful HR strategy. Too often, HR teams experience data organized in silos, cut off from the rest of the organization. The migration to big data provides the perfect opportunity to identify data islands within your HR systems and define a strategy to integrate and reorganize them.

2.  Use a single interface

Understanding how data is collected within your organization is fundamental to a successful big data strategy. You have to avoid ‘copy and paste’ practices, and instead, make sure that data is collected automatically, and seamlessly integrated in one interface. The less you have to manually record information and integrate it into your HR systems, the better. It is therefore crucial to choose a single simple interface that will collect all your data and make it easily accessible to your team.

3.  Start simple

Once you have chosen the type of data you need and how and where you will collect it, you can decide on the kind of analytics you need. To be efficient and keep it simple, you can start with simple correlations to understand how big data analytics works and what kind of results you can get. You can then gradually increase the analytical complexity, heading toward predictive analytics.

These three steps will ensure that Oracle's big data solution helps you deliver an enhanced HR strategy that meets your corporate goals.

My Friend Karen

Cary Millsap - Wed, 2015-07-29 11:54
My friend Karen Morton passed away on July 23, 2015 after a four-month battle against cancer. You can hear her voice here.

I met Karen Morton in February 2002. The day I met her, I knew she was awesome. She told me the story that, as a consultant, she had been doing something that was unheard-of. She guaranteed her clients that if she couldn’t make things on their systems go at least X much faster on her very first day, then they wouldn’t have to pay. She was a Give First person, even in her business. That is really hard to do. After she told me this story, I asked the obvious question. She smiled her big smile and told me that her clients had always paid her—cheerfully.

It was an honor when Karen joined my company just a little while later. She was the best teammate ever, and she delighted every customer she ever met. The times I got to work with Karen were bright spots in my life, during many of the most difficult years of my career. For me, she was a continual source of knowledge, inspiration, and courage.

This next part is for Karen’s family and friends outside of work. You know that she was smart, and you know she was successful. What you may not realize is how successful she was. Your girl was famous all over the world. She was literally one of the top experts on Earth at making computing systems run faster. She used her brilliant gift for explaining things through stories to become one of the most interesting and fun presenters in the Oracle world to go watch, and her attendance numbers proved it. Thousands of people all over the world know the name, the voice, and the face of your friend, your daughter, your sister, your spouse, your mom.

Everyone loved Karen’s stories. She and I told stories and talked about stories, it seems like, all the time we were together. Stories about how Oracle works, stories about helping people, stories about her college basketball career, stories about our kids and their sports, ...

My favorite stories of all—and my family’s too—were the stories about her younger brother Ted. These stories always started out with some middle-of-the-night phone call that Karen would describe in her most somber voice, with the Tennessee accent turned on full-bore: “Kar’n: This is your brother, Theodore LeROY.” Ted was Karen’s brother Teddy Lee when he wasn’t in trouble, so of course he was always Theodore LeROY in her stories. Every story Karen told was funny and kind.

We all wanted to have more time with Karen than we got, but she touched and warmed the lives of literally thousands of people. Karen Morton used her half-century here on Earth with us as well as anyone I’ve ever met. She did it right.

God bless you, Karen. I love you.

August 12: Atradius Collections Oracle Sales Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2015-07-29 11:06

Join us for another Oracle Customer Reference Forum on August 12th, 2015 at 8:00 a.m. PT / 11:00 a.m. ET / 5:00 p.m. CEST.

Sonja van Haasteren, Global Customer Experience Manager of Atradius Collections, will talk about the company’s journey with Oracle CX products focused on Oracle Sales Cloud with Oracle Marketing Cloud and its path to expand with Oracle Data Cloud.

Atradius Collections is a global leader in trade-invoice-collection services. It provides solutions to recover domestic and international trade invoices. Atradius Collections handles more than 100,000 cases a year for more than 14,500 customers, covering over 200 countries.

Register now to confirm your attendance for this informative event on August 12.

TekStream Reduces Project Admin Costs by 30% with Oracle Documents Cloud

WebCenter Team - Wed, 2015-07-29 07:54

Read this latest announcement from Oracle to find out more about how TekStream Solutions, a solution services company in North America, streamlined project management and administration and improved client project delivery with Oracle Documents Cloud Service, an enterprise-grade cloud collaboration and file sync and share solution. Learn how, within the first month of its use, TekStream was able to cut project administration costs by 30% and reduce complexity, not only driving client results faster but also providing a superior project experience to both its consultants and its clients.

And here's a brief video with Judd Robins, executive vice president of Consulting Services at TekStream Solutions, as he discusses the specific areas where they were looking to make improvements and how Oracle Documents Cloud enabled easy yet secure cloud collaboration, not only among its consultants who are always on the go, but also with its clients.

To learn more about Oracle Documents Cloud Service and how it can help your enterprise visit us at cloud.oracle.com/documents.

Existence

Jonathan Lewis - Wed, 2015-07-29 06:05

A recent question on the OTN Database Forum asked:

I need to check if at least one record present in table before processing rest of the statements in my PL/SQL procedure. Is there an efficient way to achieve that considering that the table is having huge number of records like 10K.

I don't think many readers of the forum would consider 10K to be a huge number of records; nevertheless it is a question that could reasonably be asked, and should prompt a little discussion.

First question to ask, of course, is: how often do you do this, and how important is it to be as efficient as possible? We don't want to waste a couple of days of coding and testing to save five seconds every 24 hours. Some context is needed before charging into high-tech geek solution mode.

Next question is: what’s wrong with writing code that just does the job, and if it finds that the job is complete after zero rows then you haven’t wasted any effort. This seems reasonable in (say) a PL/SQL environment where we might discuss the following pair of strategies:


Option 1:
=========
-- execute a select statement to see if any rows exist

if (flag is set to show rows) then
    for r in (select all the rows) loop
        do something for each row
    end loop;
end if;

Option 2:
=========
for r in (select all the rows) loop
    do something for each row;
end loop;

If this is the type of activity you have to do then it does seem reasonable to question the sense of putting in an extra statement to see if there are any rows to process before processing them. But there is a possible justification for doing this. The query to find just one row may produce a very efficient execution plan, while the query to find all the rows may have to do something much less efficient even when (eventually) it finds that there is no data. Think of the differences you often see between a first_rows_1 plan and an all_rows plan; think about how Oracle can use index-only access paths and table elimination – if you're only checking for existence you may be able to produce a MUCH faster plan than you can for selecting the whole of the first row.

Next question, if you think that there is a performance benefit from the two-stage approach: is the performance gain worth the cost (and risk) of adding a near-duplicate statement to the code – that's two statements that have to be maintained every time you make a change. Maybe it's worth “wasting” a few seconds on every execution to avoid getting the wrong results (or an odd extra hour of programmer time) once every few months. Bear in mind, also, that the optimizer now has to optimize two statements instead of one – you may not notice the extra CPU usage in testing but perhaps in the live environment the execution benefit will be eroded by the optimization cost.

Next question, if you still think that the two-stage process is a good idea: will it result in an inconsistent database state?! If you select and find a row, then run the main query and find that there are no rows to process because something modified and "hid" the row you found on the first pass – what are you going to do? Will this make the program crash? Will it produce an erroneous result on this run, or will a silent side effect be that the next run produces the wrong results? (See Billy Verreynne's comment on the original post.) Should you set the session to "serializable" before you start the program, or maybe lock a critical table to make sure it can't change?
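
If you do decide that the existence check and the main processing have to see exactly the same data, the two obvious mechanisms look something like the sketch below (the table name is just a placeholder, and both options come with their own costs and concurrency side effects):


-- one option: run the whole job as a serializable transaction
-- (this must be the first statement of the transaction)
set transaction isolation level serializable;

-- another option: stop other sessions changing the critical table until you commit
-- ("critical_table" is a placeholder name)
lock table critical_table in share mode;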

So, assuming you've decided that some form of "check for existence then do the job" is both desirable and safe, what's the most efficient strategy? Here's one of the smarter solutions that minimises risk and effort (in this case using a PL/SQL environment).


select  count(*)
into    m_counter
from    dual
where   exists ({your original driving select statement})
;

if m_counter = 0 then
    null;
else
    for c1 in {your original driving select statement} loop
        -- do whatever
    end loop;
end if;

The reason I describe this solution as smarter, with minimum risk and effort, is that (a) you use EXACTLY the same SQL statement in both locations so there should be no need to worry about making the same effective changes twice to two slightly different bits of SQL and (b) the optimizer will recognise the significance of the existence test and run in first_rows_1 mode with maximum join elimination and avoidance of redundant table visits. Here’s a little data set I can use to demonstrate the principle:


create table t1
as
select
        mod(rownum,200)         n1,     -- scattered data
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        dual
connect by
        level <= 10000
;

delete from t1 where n1 = 100;
commit;

create index t1_i1 on t1(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

It's just a simple table with an index, but the index isn't very good for finding the data – it's repetitive data widely scattered through the table: 10,000 rows with only 200 distinct values. But check what happens when you do the dual existence test – first we run our "driving" query to show the plan that the optimizer would choose for it, then we run with the existence test to show the different strategy the optimizer takes when the driving query is embedded:


alter session set statistics_level = all;

select  *
from    t1
where   n1 = 100
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

select  count(*)
from    dual
where   exists (
                select * from t1 where n1 = 100
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

Notice how I’ve enabled rowsource execution statistics and pulled the execution plans from memory with their execution statistics. Here they are:


select * from t1 where n1 = 100

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    38 (100)|      0 |00:00:00.01 |     274 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     50 |    38   (3)|      0 |00:00:00.01 |     274 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=100)

select count(*) from dual where exists (   select * from t1 where n1 = 100  )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE    |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |
|*  2 |   FILTER           |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|   3 |    FAST DUAL       |       |      0 |      1 |     2   (0)|      0 |00:00:00.01 |       0 |
|*  4 |    INDEX RANGE SCAN| T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   4 - access("N1"=100)

For the original query the optimizer did a full tablescan – that was the most efficient path. For the existence test the optimizer decided it didn’t need to visit the table for “*” and it would be quicker to use an index range scan to access the data and stop after one row. Note, in particular, that the scan of the dual table didn’t even start – in effect we’ve got all the benefits of a “select {minimum set of columns} where rownum = 1” query, without having to work out what that minimum set of columns was.
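
For comparison, the hand-crafted version of that "minimum set of columns" query would be something like the sketch below, where you would have had to work out for yourself that n1 was the only column needed to allow the index-only path:


select  n1
from    t1
where   n1 = 100
and     rownum = 1
;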

But there's an even more cunning option – remember that we didn't scan dual when there were no matching rows:


for c1 in (

        with driving as (
                select  /*+ inline */
                        *
                from    t1
        )
        select  /*+ track this */
                *
        from
                driving d1
        where
                n1 = 100
        and     exists (
                        select
                                *
                        from    driving d2
                        where   n1 = 100
                )
) loop

    -- do your thing

end loop;

In this specific case the subquery would automatically go inline, so the hint here is actually redundant; in general you’re likely to find the optimizer materializing your subquery and bypassing the cunning strategy if you don’t use the hint. (One of the cases where subquery factoring doesn’t automatically materialize is when you have no WHERE clause in the subquery.)

Here’s the execution plan pulled from memory (after running this SQL through an anonymous PL/SQL block):


SQL_ID  7cvfcv3zarbyg, child number 0
-------------------------------------
WITH DRIVING AS ( SELECT /*+ inline */ * FROM T1 ) SELECT /*+ track
this */ * FROM DRIVING D1 WHERE N1 = 100 AND EXISTS ( SELECT * FROM
DRIVING D2 WHERE N1 = 100 )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |    39 (100)|      0 |00:00:00.01 |       2 |
|*  1 |  FILTER            |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|*  2 |   TABLE ACCESS FULL| T1    |      0 |     50 |    38   (3)|      0 |00:00:00.01 |       0 |
|*  3 |   INDEX RANGE SCAN | T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NOT NULL)
   2 - filter("T1"."N1"=100)
   3 - access("T1"."N1"=100)

You’ve got just one statement – and you’ve only got one version of the complicated text because you put it into a factored subquery; but the optimizer manages to use one access path for one instantiation of the text and a different one for the other. You get an efficient test for existence and only run the main query if some suitable data exists, and the whole thing is entirely read-consistent.

I have to say, though, I can’t quite make myself 100% enthusiastic about this code strategy – there’s just a nagging little doubt that the optimizer might come up with some insanely clever trick to try and transform the existence test into something that’s supposed to be faster but does a lot more work; but maybe that’s only likely to happen on an upgrade, which is when you’d be testing everything very carefully anyway (wouldn’t you) and you’ve got the “dual/exists” fallback position if necessary.

Footnote:

Does anyone remember the thing about reading execution plans "first child first"? This existence test is one of the interesting cases where it's not the first child of a parent operation that runs first: it's the case I call the "constant subquery".


Better Ways to Play and Try

Oracle AppsLab - Tue, 2015-07-28 20:50
  • Fact 1: Dazzling animated displays (sprites, shaders, parallax, 3D) are more plentiful and easier to make than ever before.
  • Fact 2: More natural and expressive forms of input (swiping, pinching, gesturing, talking) are being implemented and enhanced every day.
  • Fact 3: Put these two together and the possible new forms of human computer interaction are endless. The only limit is our imagination.

That’s the problem: our imagination. We can’t build new interactions until A) someone imagines them, and B) the idea is conveyed to other people. As a designer in the Emerging Interactions subgroup of the AppsLab, this is my job – and I’m finding that both parts of it are getting harder to do.

If designers can’t find better ways of imagining – and by imagining I mean the whole design process from blank slate to prototype – progress will slow and our customers will be unable to unleash their full potential.

So what does it mean to imagine and how can we do it better?

Imagination starts with a daydream or an idle thought. “Those animations of colliding galaxies are cool. I wonder if we could show a corporate acquisition as colliding org charts. What would that look like?”

What separates a mere daydreamer from an actual designer is the next step: playing. To really imagine a new thing in any meaningful way you have to roll up your sleeves and actually start playing with it in detail. At first you can do this entirely in your mind – what Einstein called a “thought experiment.”  This can take weeks of staring into space while your loved ones look on with increasing concern.

Playing is best done in your mind because your mind is so fluid. You can suspend the laws of physics whenever they get in the way. You can turn structures inside out in the blink of an eye, changing the rules of the game as you go. This fluidity, this fuzziness, is the mind’s greatest strength, but also its greatest weakness.

So sooner or later you have to move from playing to trying. Trying means translating the idea into a visible, tangible form and manipulating it with the laws of physics (or at least the laws of computing) re-enabled. This is where things get interesting. What was vaguely described must now be spelled out. The inflexible properties of time and space will expose inconvenient details that your mind overlooked; dealing with even the smallest of these details can derail your entire scheme – or take it wild, new directions. Trying is a collaboration with Reality.

Until recently, trying was fairly easy to do. If the thing you were inventing was a screen layout or a process flow, you could sketch it on paper or use a drawing program to make sure all the pieces fit together. But what if the thing you are inventing is moving? What if it has hundreds of parts each sliding and changing in a precise way? How do you sketch that?

My first step in the journey to a better way was to move from drawing tools like Photoshop and OmniGraffle to animation tools like Hype and Edge – or to Keynote (which can do simple animations). Some years ago I even proposed a standard “animation spec” so that developers could get precise frame-by-frame descriptions.

The problem with these tools is that you have to place everything by hand, one element at a time. I often begin by doing just that, but when your interface is composed of hundreds of shifting, spinning, morphing shapes, this soon becomes untenable. And when even the simplest user input can alter the course and speed of everything on the screen, and when that interaction is the very thing you need to explore, hand drawn animation becomes impossible.

To try out new designs involving this kind of interaction, you need data-driven animation – which means writing code. This is a significant barrier for many designers. Design is about form, color, balance, layout, typography, movement, sound, rhythm, harmony. Coding requires an entirely different skill set: installing development environments, converting file formats, constructing database queries, parsing syntaxes, debugging code, forking githubs.

A software designer needs a partial grasp of these things in order to work with developers. But most designers are not themselves coders, and do not want to become one. I was a coder in a past life, and even enjoy coding up to a point. But code-wrangling, and in particular debugging, distracts from the design process. It breaks my concentration, disrupts my flow; I get so caught up in tracking down a bug that I forget what I was trying to design in the first place.

The next stage of my journey, then, was to find relatively easy high-level programming languages that would let me keep my eye on the ball. I did several projects using Processing (actually Processing.js), a language developed specifically for artists. I did another project using Python – with all coding done on the iPad so that I could directly experience interactions on the tablet with every iteration of the code.

These projects were successful but time-consuming and painful to create. Traditional coding is like solving a Rubik’s Cube: twist and twist and twist until order suddenly emerges from chaos. This is not the way I want to play or try. I want a more organic process, something more like throwing a pot: I want to grab a clump of clay and just continuously shape it with my hands until I am satisfied.

I am not the only one looking for better ways to code. We are in the midst of an open source renaissance, an explosion of literally thousands of new languages, libraries, and tools. In my last blog post I wrote about people creating radically new and different languages as an art form, pushing the boundaries in all directions.

In “The Future of Programming,” Eric Elliott argues for reactive programming, visual IDEs, even genetic and AI-assisted coding. In “Are Prototyping Tools Becoming Essential?,” Mark Wilcox argues that exploring ideas in the Animation Era requires a whole range of new tools. But if you are only going to follow one of these links, see Bret Victor’s “Learnable Programming.”

After months of web surfing I stumbled upon an interesting open source tool originally designed for generative artists that I’ve gotten somewhat hooked on. It combines reactive programming and a visual IDE with some of Bret Victor’s elegant scrubbing interactions.

More about that tool in my next blog post.

Top 5 Ways to Personalize My Oracle Support

Joshua Solomin - Tue, 2015-07-28 17:47
Top 5 Ways to Personalize My Oracle Support

It doesn't take long using My Oracle Support (MOS) to realize just how massive the pool of data underlying it all is—knowledge articles, patches and updates, advisories and security alerts, for every version of every Oracle product line.

My first week using MOS my jaw dropped at the sheer scale of available info. Not surprisingly, one of the first tips we get as Oracle employees is how to personalize My Oracle Support to target just the areas we're interested in.

Take a minute and follow our Top 5 Ways to personalize My Oracle Support to better suit your workflow. These easy-to-follow tips can help you get the most from the application, and avoid drowning in the MOS "Sea of Information."

1. Customizing the Screen Panels

One of the easiest personalization features is to adjust the panels displayed on a given page or tab. Nearly every activity tab allows you to reorganize, move, or even hide displayed panels on the screen using the Customize Page.... link in the top right area of the screen.

When you click the link, the page will update and display a series of widgets on each panel, allowing you to customize the content.

The wrench icon lets you customize the panel name, while the circular gear icon lets you move the panel within the column. The Add Content action displays a context-sensitive panel of new content areas that can be added to the column.

2. Enable PowerView

The PowerView applet is one of the fastest ways to limit the information displayed in MOS. PowerView filters the information presented to you based on the products, support identifiers, or other custom filters you select. Once you've set up a PowerView filter set, any activities going forward—searches, patches and bugs, service requests (SRs)—will only appear if they are tied to your selected filters.

To build a PowerView, click the PowerView icon in the upper-left area of the screen.

To create the view, first select the primary filter criteria. "Support Identifier", "Product", and "Product Line" are common primary filters.

Remember, the goal is to use PowerView to filter everything you see in MOS against the relevant contexts you establish.

3. Set Up SR Profiles

This one's a bit trickier than the first two, but can be an enormous time-saver if you regularly enter service requests into MOS.

Go to the Settings tab in MOS, and look for the Service Request Profiles link on the left. In some cases you may need to click the More... dropdown to find the Settings tab.

In the profiles view you'll see any existing profiles and an action button to create a new profile. The goal of an SR profile is to streamline the process of creating an SR for a specific hardware or software product that you're responsible for managing. When creating an SR you select the pre-generated profile you created earlier, and MOS fills in the relevant details for you.

4. Enable Hot Topics Email

Hot Topics Email is a second option available in the main MOS Settings tab.

Hot Topics is an automated notification system that will alert you any time specified SRs, Knowledge Documents, or security notices are published or updated.

There are dozens of options to choose from in setting up your alerts, based on product, Support Identifier (SI), content you've marked as "Favorite", and more. See the video training "How to Use Hot Topics Email Notifications" (Document 793436.2) to get a better understanding of how to use this feature.

5. Enable Service Request Email Updates

Back in the main MOS Settings tab, click the link for My Account on the left. This will take you to a general profile view of your MOS account. What we're looking for is a table cell in the Support Identifiers table at the top that reads SR Details.

By checking this box, you are indicating that you want to be automatically notified via email any time a service request tied to the support identifier gets updated.

The goal behind this is to stay abreast of any changes to SRs for the chosen support identifier. You don't have to keep "checking in" or wait for an Oracle Support engineer to reach out to you when progress is made on SRs. If a Support engineer requests additional information on a particular configuration, for example, that would be conveyed in the SR Email Update sent to you.

The trick is to be judicious using this setting. My Oracle Support could quickly inundate you with SR details notices if there are lots of active SRs tied to the support identifier(s), so this may not be desirable in some cases.

Conclusion

With these five options enabled, you've started tailoring your My Oracle Support experience to better streamline your workflow, and keep the most relevant, up-to-date information in front of you.

Give them a whirl, and let us know how it goes!

Reuters: Blackboard up for sale, seeking up to $3 billion in auction

Michael Feldstein - Tue, 2015-07-28 13:48

By Phil HillMore Posts (350)

As I was writing a post about Blackboard’s key challenges, I get notice from Reuters (anonymous sources, so interpret accordingly) that the company is on the market, seeking up to $3 billion. From Reuters:

Blackboard Inc, a U.S. software company that provides learning tools for high school and university classrooms, is exploring a sale that it hopes could value it at as much as $3 billion, including debt, according to people familiar with the matter.

Blackboard’s majority owner, private equity firm Providence Equity Partners LLC, has hired Deutsche Bank AG and Bank of America Corp to run an auction for the company, the people said this week. [snip]

Providence took Blackboard private in 2011 for $1.64 billion and also assumed $130 million in net debt.

A pioneer in education management software, Blackboard has seen its growth slow in recent years as cheaper and faster software upstarts such as Instructure Inc have tried to encroach on its turf. Since its launch in 2011, Instructure has signed up 1,200 colleges and school districts, according to its website.

This news makes the messaging from BbWorld as well as their ability to execute on strategy, particularly delivering the new Ultra user experience across all product lines – including the core LMS – much more important. I’ll get to that subject in the next post.

This news should not be all that unexpected, as one common private equity strategy is to reorganize and clean up a company (headcount, rationalize management structures, reorient the strategy) and then sell within 3 – 7 years. As we have covered here at e-Literate, Blackboard has gone through several rounds of layoffs, and many key employees have already left the company due to new management and restructuring plans. CEO Jay Bhatt has been consistent in his message about moving from a conglomeration of silo’d mini-companies based on past M&A to a unified company. We have also described the significant changes in strategy – both adopting open source solutions and planning to rework the entire user experience.

Also keep in mind that there is massive investment in ed tech lately, not only from venture capital but also from M&A.

Update 1: I should point out that the part of this news that is somewhat surprising is the potential sale while the Ultra strategy is incomplete. As Michael pointed out over the weekend:

Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.

Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.

Update 2: I want to highlight (again) the nature of this news story. It’s from Reuters using multiple anonymous sources. While Reuters should be trustworthy, please note that the story has not yet been confirmed.

Update 3: In contact with Blackboard, I received the following statement (which does not answer any questions, but I am sharing nonetheless).

Blackboard, like many successful players in the technology industry, has become subject of sale rumors. Although we are transparent in our communications about the Blackboard business and direction when appropriate, it is our policy not to comment on rumors or speculation.

Blackboard is in an exciting industry that is generating substantial investor interest. Coming off a very successful BbWorld 2015 and a significant amount of positive customer and market momentum, potential investor interest in our company is not surprising.

We’ll update as we learn more, including if someone confirms the news outside of Reuters and their sources.

The post Reuters: Blackboard up for sale, seeking up to $3 billion in auction appeared first on e-Literate.

Power BI General Availability

Pythian Group - Tue, 2015-07-28 09:18

Over the last few months I've been keeping an eye on the Power BI Preview, the new version of the Power BI cloud solution that was completely revamped compared to the original Power BI v1.
This past Friday, July 24th, Power BI was finally released to the public and it is now available globally. The best part? There is now a FREE version, so you can start playing with it right now and get insights from your data.

Learn more about Power BI and how to use it. And if you have no idea what I'm talking about or what Power BI is, please take one minute of your time and watch a quick YouTube video to see it in action.

In the coming weeks I will start writing a series of posts explaining more about the BI suite and what we can use it for, so keep an eye on this blog. Also, if you are already using it, please comment below on what you think so far and whether you are facing any difficulties.

 

Discover more about our expertise in SQL Server and Cloud

The post Power BI General Availability appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Breakthrough Solution for A/R and A/P Automation

WebCenter Team - Tue, 2015-07-28 08:12


Breakthrough Solution for A/R and A/P Automation

Webcast Scheduled July 30th @ 2:00 PM EST | Register

  • Do you need to expedite your A/R Cash Receipt processing and reduce Days Sales Outstanding?
  • Are un-applied payments affecting closure of your accounting period and impacting your cash flow?
  • Are you paying through the nose to process check information through your bank?

===============================================================

Join this complimentary session from Oracle and KPIT to learn about an exciting new solution that is providing Complete Automation and Real Time Visibility to your A/R Cash Receipt process.

KPIT offers an integrated imaging and workflow solution for your Oracle ERP for A/R Cash Receipt Processing, called the Accounts Receivable Solution Accelerator.

You will Learn How to:

  • Improve your A/R Cash Receipt Processing through automated capture & entry of bank remittance information
  • Automate exception handling (e.g. short payments) through automated workflows
  • Maximize user productivity by minimizing manual data entry
  • Speed payment application through configurable payment application rules
  • Improve visibility into your un-applied cash and process bottlenecks
  • Use business rules to generate appropriate notifications to customers
  • Reduce steep bank charges by avoiding lockbox processing services
  • Avoid heartburn by achieving timely closure of your accounting period
  • Reduce Days Sales Outstanding (DSO) and improve Cash Flow

Do not miss this opportunity. This is the first time a solution like this has been offered on Oracle Technology! Register now!

PeopleSoft-Taleo Out-Of-The-Box Integration Deprecated

Duncan Davies - Tue, 2015-07-28 06:47

Oracle have made the interesting announcement that the delivered integration between PeopleSoft (HCM 9.1 or 9.2) and Taleo is being deprecated. This is a positive move from Oracle and simplifies things greatly from a client and partner perspective.

Previously Oracle delivered two different integration methods: prebuilt integration using web services and PeopleSoft Integration Broker, or the Taleo Connect Client (TCC) tool – which is a highly flexible tool for any kind of Taleo integration. It's only the former that is being deprecated.

TCC remains the tool to use for integrating Taleo and PeopleSoft (or indeed, Taleo and any system) and it’s mature, robust, widely used and much liked.

Integration between Taleo and PeopleSoft rarely fits the ‘standard’ requirements as each client will have different processes around whether someone leaving in PeopleSoft automatically creates a requisition in Taleo or not, and what the approval chains are. Likewise, the new hires coming back in from Taleo can go to more than one place in PeopleSoft, and the amount and types of data brought over can vary wildly – especially if Taleo Onboarding is included. It’s preferable to have a TCC-built integration that fits the requirements than a bare-bones delivered process that can’t be improved.

The removal of the Integration Broker/Web Services integration as an option is a positive move as clients would regularly spend precious effort evaluating it, before concluding that it was too restrictive for their specific requirements and then using TCC.

The official announcement is here:
http://www.oracle.com/partners/secure/campaign/eblasts/peoplesoft-taleo-out-of-the-box-2613641.html


Multiple invisible indexes on the same column in #Oracle 12c

The Oracle Instructor - Tue, 2015-07-28 00:44

After invisible indexes got introduced in 11g, they have now been enhanced in 12c: You can have multiple indexes on the same set of columns with that feature. Why would you want to use that? Actually, this is always the first question I ask when I see a new feature – sometimes it’s really hard to answer :-)

Here, a plausible use case could be that you expect a new index on the same column to be an improvement over the existing old index, but you are not 100% sure. So instead of just dropping the old index, you make it invisible first to see the outcome:

 

[oracle@uhesse ~]$ sqlplus adam/adam

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 28 08:11:16 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jul 28 2015 08:00:34 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> col index_name for a10
SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BSTAR	   NORMAL		       VISIBLE

SQL> col segment_name for a10
SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.18
SQL> set timing off
SQL> @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 2525234362

---------------------------------------------------------------------------
| Id  | Operation	  | Name  | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	  |	  |	  |  3872 (100)|	  |
|   1 |  SORT AGGREGATE   |	  |	1 |	3 |	       |	  |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN| BSTAR |  2000K|  5859K|  3872   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CHANNEL_ID"=3)


19 rows selected.

So I have an ordinary B* index here that supports my query, but I suspect that it would work better with a bitmap index. In older versions, you would get this if you try to create it with the old index still existing:

SQL> create bitmap index bmap on sales(channel_id) nologging;
create bitmap index bmap on sales(channel_id) nologging
                                  *
ERROR at line 1:
ORA-01408: such column list already indexed
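
Before 12c, the only way forward was the riskier route sketched below: drop the old index first, and accept that it would have to be rebuilt from scratch if the experiment failed.

-- Pre-12c approach (sketch only): the old index has to go before the bitmap index can exist
drop index bstar;
create bitmap index bmap on sales(channel_id) nologging;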

Enter the 12c New Feature:

SQL> alter index bstar invisible;

Index altered.

SQL> create bitmap index bmap on sales(channel_id) nologging;

Index created.

Now I can check if the new index is really an improvement while the old index remains in place and is still being maintained by the system. So in case the new index turns out to be a bad idea – no problem to fall back on the old one!
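
Falling back would then be just a matter of flipping the visibility; a minimal sketch of what that could look like:

-- If the bitmap index turns out to be a bad idea, swap the visibility back
alter index bmap invisible;
alter index bstar visible;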

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       VISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BMAP			 9
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.01
SQL> @lastplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

Looks like everything is better with the new index, right? Let’s see what the optimizer thinks about it:

SQL> alter index bmap invisible;

Index altered.

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       INVISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> alter session set optimizer_use_invisible_indexes=true;

Session altered.

SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

SQL> @lastplan

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

The optimizer agrees that the new index is better. I could keep both indexes here in place, but remember that the old index still consumes space and requires internal maintenance. Therefore, I decide to drop the old index:

SQL> drop index bstar;

Index dropped.

SQL> alter index bmap visible;

Index altered.

Hope that helped to answer the question why you would want to use that 12c New Feature. As always: Don’t believe it, test it! :-)


Tagged: 12c New Features, Performance Tuning
Categories: DBA Blogs

Secure By Default Clarifications

Anthony Shorten - Mon, 2015-07-27 22:50

In the Oracle Utilities Application Framework, the software is now installed in a mode called "Secure By Default". This has a number of implications for new and existing installations:

  • HTTPS is now the default protocol for accessing the product. The installer supplies a demonstration trust store and a demonstration identity store that can be used for default installations.
  • The permissions on the files and directories are now using common Oracle standards.

Now there are a few clarifications about these features:

  • Customers upgrading from older versions will retain the same file permissions and access protocols used in past releases, for backward compatibility.
  • Customers on past releases can convert to the new file and directory permissions using the "setpermissions" utility shipped with the product. The Administration and Security guides outline the new permissions.
  • Customers on past releases can convert to HTTPS in the same way as they could in those releases; the new keystore is simply provided as a way of adopting it quickly.
  • We supply a basic certificate to be used for HTTPS. This demonstration certificate is limited in strength and scope (much the same scope and strength as the demonstration one supplied with Oracle WebLogic) and is not supported for use in production systems. Customers who want to use HTTPS should obtain a certificate from a valid certificate authority or build a self-signed certificate (see the keytool sketch after this list). Note that if you use a self-signed certificate, some browsers may issue a warning upon login. Additionally, customers using native mode installations can use the Oracle WebLogic demonstration certificates as well.
  • HTTPS was always supported in the Oracle Utilities Application Framework. In past releases it was what is termed an opt-in decision (you were opting in to use HTTPS): we installed using HTTP by default, and you then configured HTTPS separately with additional configuration on your domain. In this release we have shifted to an opt-out decision: we install HTTPS with a demonstration certificate as the default, and you must take additional steps to disable it (basically, you do not specify an HTTPS port and only supply an HTTP port). The decision whether to use HTTPS or HTTP remains an implementation one; the default is simply HTTPS.
  • Customers using native mode (or IBM WebSphere) can manage certificates from the console or command lines supplied by that product.
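
As a side note, if you choose the self-signed route mentioned above, a certificate can be generated with the standard Java keytool. This is only a sketch with hypothetical aliases, passwords and file names; the exact identity and trust store configuration for your domain is described in the Administration and Security guides.

# Generate a self-signed key pair in a new identity store (names and passwords are placeholders)
keytool -genkeypair -alias ouafdemo -keyalg RSA -keysize 2048 -validity 365 \
        -dname "CN=myhost.example.com, OU=IT, O=Example, C=US" \
        -keystore identity.jks -storepass changeit -keypass changeit

# Export the certificate so it can be imported into a trust store
keytool -exportcert -alias ouafdemo -keystore identity.jks \
        -storepass changeit -file ouafdemo.cer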

Secure by default ensures that Oracle Utilities Application Framework products are now consistent with the installation standards employed by other Oracle products.

UC Davis: A look inside attempts to make large lecture classes active and personal

Michael Feldstein - Mon, 2015-07-27 13:16

By Phil Hill

In my recent keynote for the Online Teaching Conference, the core argument was as follows:

While there will be (significant) unbundling around the edges, the bigger potential impact [of ed innovation] is how existing colleges and universities allow technology-enabled change to enter the mainstream of the academic mission.

Let’s look at one example. Back in December the New York Times published an article highlighting work done at the University of California at Davis to transform large lecture classes into active learning formats.

Hundreds of students fill the seats, but the lecture hall stays quiet enough for everyone to hear each cough and crumpling piece of paper. The instructor speaks from a podium for nearly the entire 80 minutes. Most students take notes. Some scan the Internet. A few doze.

In a nearby hall, an instructor, Catherine Uvarov, peppers students with questions and presses them to explain and expand on their answers. Every few minutes, she has them solve problems in small groups. Running up and down the aisles, she sticks a microphone in front of a startled face, looking for an answer. Students dare not nod off or show up without doing the reading.

Both are introductory chemistry classes at the University of California campus here in Davis, but they present a sharp contrast — the traditional and orderly but dull versus the experimental and engaging but noisy. Breaking from practices that many educators say have proved ineffectual, Dr. Uvarov’s class is part of an effort at a small but growing number of colleges to transform the way science is taught.

This article follows the same argument laid out in the Washington Post nearly three years earlier.

Science, math and engineering departments at many universities are abandoning or retooling the lecture as a style of teaching, worried that it’s driving students away. [snip]

Lecture classrooms are the big-box retailers of academia, paragons of efficiency. One professor can teach hundreds of students in a single room, trailed by a retinue of teaching assistants.

But higher-education leaders increasingly blame the format for high attrition in science and math classes. They say the lecture is a turn-off, higher education at its most passive, leading to frustration and bad grades in highly challenging disciplines.

What do these large lecture transformations look like? Our recent e-Literate TV case study gave us an inside look at the work done at UC Davis (episode 1, episode 2, episode 3), including first-person accounts from faculty members and students.

The organizing idea is to apply active learning principles such as the flipped classroom to large introductory science classes.

Phil Hill: It sounds to me like you have common learning design principles that are being implemented, but they get implemented in different ways. So, you have common things of making students accountable, having the classes much more interactive where students have to react and try to apply what they’re learning.

Chris Pagliarulo: Yeah, the main general principle here is we’re trying to get—if you want to learn something complex, which is what we try to do at an R1 university, that takes a lot of practice and feedback. Until recently, much of that was supposed to be going on at home with homework or whatnot, but it’s difficult to get feedback at home when the smart people aren’t there that would help you—either your peers or your professor.

So, that’s the whole idea of the flipped classroom: you come prepared with some basic understanding and take that time where you’re all together to do the high-quality practice and get the feedback while we’re all together. Everything that we’re doing is focused on that sort of principle—getting that principle into the classroom.

Professor Mitch Singer then describes his involvement in the redesign.

Phil Hill: Several years ago, the iAMSTEM group started working with the biology and chemistry departments to apply some of these learning concepts in an iterative fashion.

Mitch Singer: My (hopefully) permanent assignment now, at least for the next five years, will be what we call “BIS 2A,” which is the first introductory course of biology here at UC Davis. It’s part of a series, and its primary goal is to teach fundamentals of cellular and molecular biology going from origins up to the formation of a cell. We teach all the fundamentals in this class: the stuff that’s used for future ones.

About three to four years ago, I got involved in this class to sort of help redesign it, come up with a stronger curriculum, and primarily bring in sort of hands-on, interactive learning techniques, and we’ve done a bunch of experiments and changed the course in a variety of ways. It’s still evolving over the last several years. The biggest thing that we did was add a discussion section, which is two hours long where we’ve done a lot of our piloting for this interactive, online, personalized learning (as the new way of saying things, I guess). This year (last quarter in the fall) was the first time we really tried to quote, flip part of the classroom.

That is, make the students take a little bit more responsibility for their own reading and learning, and then the classic lecture is more asking questions, trying to get them to put a and b together to come up with c. It’s sort of that process that we’d like to emphasize and get them to actually learn, and that’s what we want to test them on, not so much the facts, and that’s the biggest challenge.

If you want to see the potential transformation of this academic core, it is crucial to look at large lecture classes and how to make them more effective. The UC Davis case study highlights what is actually happening in the field, with input from real educators and students.

The post UC Davis: A look inside attempts to make large lecture classes active and personal appeared first on e-Literate.

Database validation failed while applying EBS R12.2 application patch : Oracle Apps DBA Training : Patches

Online Apps DBA - Mon, 2015-07-27 13:12

This post is from our Oracle Apps DBA – R12 Training (next batch starts on 8th Aug, 2015), where we cover installation, patching, cloning and other maintenance tasks. We also cover the upgrade of Oracle E-Business Suite 12.2.0 to 12.2.4 to show how to apply patches and the new patching process in Oracle Apps.

One of the trainees from a previous batch encountered an issue while applying EBS R12.2 application patches 19330775 and 20677045; the command used is shown below. The new utility to apply patches is adop (prior to 12.2 you applied patches using adpatch):

adop phase=apply patches=19330775,20677045 hotpatch=yes merge=yes

But the patch was failing with an error message like:

adop is not able to detect any valid application tier nodes in ADOP_VALID_NODES table. Ensure autoconfig is run on all nodes.

So we ran AutoConfig (there is a dedicated topic on AutoConfig in our Oracle Apps/R12 DBA Training) on both the apps tier and the DB tier using the commands below (a condensed sketch follows the list):

  1. Stop the application tier and the database tier
  2. Run AutoConfig on the database tier
    a) Source the Oracle Database environment
    b) cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
    c) sh adautocfg.sh
    (Check the AutoConfig log files for any errors at $ORACLE_HOME/appsutil/log/$CONTEXT_NAME)
  3. Run AutoConfig on the apps tier
    a) Source the APPS environment
    b) cd $ADMIN_SCRIPTS_HOME
    c) sh adautocfg.sh
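
A condensed shell sketch of the same sequence, assuming the database and APPS environments have been sourced on their respective tiers so that $CONTEXT_NAME and $ADMIN_SCRIPTS_HOME resolve correctly:

# Database tier: run AutoConfig
cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
sh adautocfg.sh
# check $ORACLE_HOME/appsutil/log/$CONTEXT_NAME for errors

# Apps tier: run AutoConfig
cd $ADMIN_SCRIPTS_HOME
sh adautocfg.sh

# Once AutoConfig completes cleanly on both tiers, retry the patch
adop phase=apply patches=19330775,20677045 hotpatch=yes merge=yes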

After running AutoConfig on both tiers, we re-ran the command to apply the patch. This time the patch failed to execute a SQL statement, showing the error messages below:

[Error]  Failed to execute SQL statement :

declare
    l_msg varchar2(4000);
begin
    ad_zd_adop.adop_database_validations(l_msg);
    dbms_output.put_line(l_msg);
end;

[Error]  Error Message :

[UNEXPECTED] Error occurred while performing database validations


Solution:

To fix this issue, apply database patch 17693770 as per the instructions in its README.

After applying patch 17693770 on the database tier, the issue was resolved and we were able to apply the application patches successfully.
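
After the database patch is in place, the same validation routine that adop runs can be re-executed manually from SQL*Plus as the APPS user to confirm the fix. This is simply the anonymous block from the error output above, with server output enabled so that any returned message is visible:

set serveroutput on

declare
  l_msg varchar2(4000);
begin
  -- the same call adop makes during its database validations
  ad_zd_adop.adop_database_validations(l_msg);
  dbms_output.put_line(l_msg);
end;
/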

You can learn more from our expert team by registering for the Oracle Apps DBA Training (next batch starts 8th Aug 2015; register before 30th July to get 100 USD off).

K21 Technologies provides a full money-back guarantee (if you are not happy after two sessions, you can ask for a full refund).

For further details and registration check

http://k21technologies.com/oracle-apps-dba-training

The post Database validation failed while applying EBS R12.2 application patch : Oracle Apps DBA Training : Patches appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

An inspiration to an entire generation, Dr APJ Abdul Kalam - Rest in Peace. The real hero behind Indian Space Research Organisation (ISRO)

Senthil Rajendran - Mon, 2015-07-27 10:10



Oracle Data Integrator to load Oracle BICS and Oracle Storage Cloud Service

Rittman Mead Consulting - Mon, 2015-07-27 08:54

Oracle has focused its sales on the cloud this year, and in the BI world we have seen the introduction of Oracle Business Intelligence Cloud Service – BICS – which we have already covered on the blog. Another great product, used to store files and unstructured data, is Oracle Storage Cloud Service, which can also serve as a staging area before loading a BICS schema. To provision our BI system in the cloud or to back up our files, it is important to find a reliable and maintainable way to move on-premise data to the cloud. And when we speak about a reliable and maintainable way to move data, I can't help but think about Oracle Data Integrator. Luckily, Ayush Ganeriwal wrote two excellent blog posts explaining how to use ODI to load these two cloud services! He even gave away the Knowledge Modules, ODI Open Tools, jar files and other components he developed to make it work.

Load data to Oracle Storage Cloud Service with a package.

Oracle Storage Cloud Service is used to store files and unstructured data in the cloud, very much like Amazon S3. A company that has moved to the cloud needs somewhere to store the files that will be used by its other cloud services, and Oracle Storage Cloud Service is the place to store them. If you want to use it as a cloud backup of your local filesystem, you might even choose Oracle Storage Cloud Archive Service, which has cheaper storage but costs more when retrieving or editing data, similarly to Amazon Glacier.

Oracle Storage Cloud Service offers a RESTful web service API to load data, as well as a Java library that directly wraps this API and supports client-side encryption. When it comes to loading data, what could be better than using the tool we already use for everything else? In this blog post, Ayush details how he created an ODI Open Tool that can load data to Oracle Storage Cloud Service using the Java library.

We first need to download the Java library and place it in our ODI Studio or agent classpaths. Below are the default locations for these classpaths, but we can also specify additional classpaths if we want. Don't forget to close ODI Studio and shut down the agent(s) before adding the files.

ODI Studio on Unix : ~/.odi/oracledi/userlib
ODI Studio on Windows : %APPDATA%\Roaming\odi\oracledi\userlib
12c Standalone Agent : <ODI_AGENT_HOME>/odi/agent/lib
11g Standalone Agent : <ODI_AGENT_HOME>/oracledi/agent/drivers/

For a JEE agent, we need to create a server template as described in the documentation: https://docs.oracle.com/middleware/1212/odi/ODIDG/setup_topology.htm#CHDHGIGI.

In the same folder, we can also drop the jar file for the Open Tool that Ayush created and posted on java.net (the last link in the folder at the moment of writing). Remember that all the drivers, libraries and Open Tools have to be present in every ODI Studio or agent install.
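
As an illustration, copying the downloaded files into the ODI Studio and 12c standalone agent classpaths might look like the sketch below. The jar file names are placeholders (use whatever names the downloads actually carry), and ODI_AGENT_HOME is assumed to point to your agent install:

# ODI Studio (Unix) user library folder
cp storage-cloud-sdk.jar storage-cloud-opentool.jar ~/.odi/oracledi/userlib/

# 12c standalone agent library folder
cp storage-cloud-sdk.jar storage-cloud-opentool.jar $ODI_AGENT_HOME/odi/agent/lib/

# Restart ODI Studio and the agent(s) afterwards so the new classes are picked up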

We can now reopen ODI Studio, as we need to register the new Open Tool in the master repository so everyone can use it. In the ODI menu, click on Add Remove Open Tools and then on the goggles icon to search for the Open Tools. In our case, the class is under the package oracle.odi.rest.

[Image: Adding an ODI Open Tool]

Select the OracleStorageCloudGet tool and click the Plus icon to add it. There we can see the description of the tool, the command, the parameters and the icon used. Let’s add the OracleStorageCloudPut as well and click OK.

[Image: Open Tool description]

That’s it! We can now see two new ODI Open Tools under the Plugins category in the Package Toolbox. We can add them to a package and fill in all the parameters, or copy/paste the command to use it in an ODI Procedure.

[Image: Open Tools in a package]

Load data to Oracle Business Intelligence Cloud Service with a mapping.

BICS offers a Data Loader, which is actually an APEX application taking Excel, CSV, text or zip files as input. But this is more of a manual process, used when working with sample data or really small datasets. If we plan to use BICS in a production environment, we need something more resilient that can be scheduled and that handles errors. We could use Data Sync, but maybe you don't want to introduce a new tool. Luckily, BICS also provides a RESTful web service API we can use to load data. Again, ODI is the perfect tool to create our own reusable components to integrate with new sources or targets. And once again, Ayush did all the work for us, as described in this article.

The first step is similar to what we did previously: we need to make all the drivers and libraries available to ODI Studio and the agent. The steps in the two Knowledge Modules contain Jython code calling some Java methods from the odi-bics.jar file. These methods – just like the Oracle Storage Cloud Service Java library – use the Jersey libraries to make the RESTful web service calls, so we will need them as well. Basically, just take all the jar files from the archive you can find on java.net under the name "RKM and IKM for Oracle BI Cloud Service". As mentioned above, we need to add them to the classpaths of the ODI Studio and agent installs and restart them.

The article mentions that we need to create a new Technology for BI Cloud Service, but an XML export is already included in the archive. Go to the Topology, right-click on Technology and import the XML file.

[Image: Adding a Technology]

We can also import the two Knowledge Modules, either globally or under a single project. Ayush wrote one RKM to reverse-engineer the tables from our BICS schema and one Multi-Connections IKM to integrate the data from the on-premise database into the BICS schema in the cloud. I had to tweak the IKM a little bit, but that might be linked to my particular setup: the steps were not displayed in the order set in the XML file, and I had to reorder them to place the drop before the create and truncate steps. Probably a bug; I will investigate that later.

[Image: Reordering the steps in the KM]

We are done with the install of new components and can now set up the topology. First create a Data Server with the newly imported technology, providing a login/password and the URL to the BICS instance as the Service URL. From my testing, and unlike Ayush's screenshot, it should start with https and should not include the port number. Of course, we need to create a Physical Schema, where we provide our Identity Domain in both Tenant fields; we can leave the Database fields empty. The last step in the Topology is to create a Logical Schema and associate it with the Physical Schema through a context.


We can then switch to the Designer tab and create a new Model with the BI Cloud Service technology and that Logical Schema. On the Reverse Engineer tab we select Customized and the RKM BI Cloud Service. We can also specify a mask to restrict the metadata import to specific table names before hitting the Reverse Engineer button at the top. I set the mask to DIM_LOC%, and only my DIM_LOCATION table was reverse-engineered from the BICS schema.

[Image: Model and reverse engineering]

Finally, we can create a mapping. In this example I load two target tables. DIM_LOCATIONS (plural) sits on an on-premise Oracle Database for users accessing OBIEE from our HQ, while DIM_LOCATION (singular) is in the schema linked to our BICS instance used by remote users. There is nothing special here, except that I unselected the insert and update checkboxes for my surrogate key (LOCATION_SK) because I asked BICS to populate it automatically when I created the table – behind the scenes, a sequence is created and a trigger is added to the table to populate the field with the next value of the sequence on each insert.
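
For the curious, the mechanism BICS sets up behind the scenes is the classic sequence-plus-trigger surrogate key pattern. A minimal SQL sketch with hypothetical object names (the actual sequence and trigger names are generated by BICS) might look like this:

create sequence dim_location_seq;

create or replace trigger dim_location_sk_trg
before insert on dim_location
for each row
begin
  -- only fill the surrogate key when the load does not supply one
  if :new.location_sk is null then
    :new.location_sk := dim_location_seq.nextval;
  end if;
end;
/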


On the Physical tab, the LKM should be set to LKM SQL Multi-Connect because we delegate the data transfer to the Multi-Connections IKM. This IKM is the one we imported: IKM SQL to BI Cloud Service. I chose to enable the truncate option here.


Let’s hit the execute button and watch the result in the operator and in our BICS schema.


The two articles demonstrate one more time how easy it is to plug a new technology into ODI. This tool is a wonderful framework that can be used to move data from and to almost any technology, even Big Data or cloud services as we have seen here. By having every integration job in the same place, we get easier maintenance and better monitoring, and we can schedule all the jobs together. It also makes it much easier to see the big picture in our projects.

If you have any questions about this post, feel free to reach out to me; we love this kind of interesting challenge at Rittman Mead.

Thanks a lot to Ayush for the great components and libraries he provided. I'm glad I will share the stage with him and a fellow ODI expert, Holger Friedrich, at Oracle OpenWorld, where we will speak about Best Practices for Development Lifecycle Management. Come see us and say hi!

Categories: BI & Warehousing