
Feed aggregator

Dammit, the LMS

Michael Feldstein - Mon, 2014-11-10 16:07

Count De Monet: I have come on the most urgent of business. It is said that the people are revolting!

King Louis: You said it; they stink on ice.

- History of the World, Part I

Jonathan Rees discovered a post I wrote about the LMS in 2006 and, in doing so, discovered that I was writing about LMSs in 2006. I used to write about the future of the LMS quite a bit. I hardly ever do anymore, mostly because I find the topic to be equal parts boring and depressing. My views on the LMS haven’t really changed in the last decade. And sadly, LMSs themselves haven’t changed all that much either. At least not in the ways that I care about most. At first I thought the problem was that the technology wasn’t there to do what I wanted to do gracefully and cost-effectively. That excuse doesn’t exist anymore. Then, once the technology arrived as Web 2.0 blossomed[1], I thought the problem was that there was little competition in the LMS market and therefore little reason for LMS providers to change their platforms. That’s not true anymore either. And yet the pace of change is still glacial. I have reluctantly come to the conclusion that the LMS is the way it is because a critical mass of faculty want it to be that way.

Jonathan seems to think that the LMS will go away soon because faculty can find everything they need on the naked internet. I don’t see that happening any time soon. But the reasons why seem to get lost in the perennial conversations about how the LMS is going to die any day now. As near as I can remember, the LMS has been about to die any day now since at least 2004, which was roughly when I started paying attention to such things.

And so it comes to pass that, with great reluctance, I take up my pen once more to write about the most dismal of topics: the future of the LMS.

In an Ideal World…

I have been complaining about the LMS on the internet for almost as long as there have been people complaining about the LMS on the internet. Here’s something I wrote in 2004:

The analogy I often make with Blackboard is to a classroom where all the seats are bolted to the floor. How the room is arranged matters. If students are going to be having a class discussion, maybe you put the chairs in a circle. If they will be doing groupwork, maybe you put them in groups. If they are doing lab work, you put them around lab tables. A good room set-up can’t make a class succeed by itself, but a bad room set-up can make it fail. If there’s a loud fan drowning out conversation or if the room is so hot that it’s hard to concentrate, you will lose students.

I am a first- or, at most, second-generation internet LMS whiner. And that early post captures an important aspect of my philosophy on all things LMS and LMS-like. I believe that the spaces we create for fostering learning experiences matter, and that one size cannot fit all. Therefore, teachers and students should have a great deal of control in shaping their learning environments. To the degree that it is possible, technology platforms should get out of the way and avoid dictating choices. This is a really hard thing to do well in software, but it is a critical guiding principle for virtual learning environments. It’s also the thread that ran through the 2006 blog post that Jonathan quoted:

Teaching is about trust. If you want your students to take risks, you have to create an environment that is safe for them to do so. A student may be willing to share a poem or a controversial position or an off-the-wall hypothesis with a small group of trusted classmates that s/he wouldn’t feel comfortable sharing with the entire internet-browsing population and having indexed by Google. Forever. Are there times when encouraging students to take risks out in the open is good? Of course! But the tools shouldn’t dictate the choice. The teacher should decide. It’s about academic freedom to choose best practices. A good learning environment should enable faculty to password-protect course content but not require it. Further, it should not favor password-protection, encouraging teachers to explore the spectrum between public and private learning experiences.

Jonathan seems to think that I was supporting the notion of a “walled garden” in that post—probably because the title of the post is “In Defense of Walled Gardens”—but actually I was advocating for the opposite at the platform level. A platform that is a walled garden is one that forces particular settings related to access and privacy on faculty and students. Saying that faculty and students have a right to have private educational conversations when they think those are best for the situation is not at all the same as saying that it’s OK for the platform to dictate decisions about privacy (or, for that matter, that educational conversations should always be private). What I have been trying to say, there and everywhere, is that our technology needs to support and enable the choices that humans need to make for themselves regarding the best conditions for their personal educational needs and contexts.

Regarding the question of whether this end should be accomplished through an “LMS,” I am both agnostic and utilitarian on this front. I can imagine a platform we might call an “LMS” that would have quite a bit of educational value in a broad range of circumstances. It would bear no resemblance to the LMS of 2004 and only passing resemblance to the LMS of 2014. In the Twitfight between Jonathan and Instructure co-founder Brian Whitmer that followed Jonathan’s post, Brian talked about the idea of an LMS as a “hub” or an “aggregator.” These terms are compatible with what my former SUNY colleagues and I were imagining in 2005 and 2006, although we didn’t think of it in those terms. We thought of the heart of it as a “service broker” and referred to the whole thing in which it would live as a “Learning Management Operating System (LMOS).” You can think of the broker as the aggregator and the user-facing portions of the LMOS as the hub that organized the aggregated content and activity for ease-of-use purposes.

By the way, if you leave off requirements that such a thing should be “institution-hosted” and “enterprise,” the notion that an aggregator or hub would be useful in virtual learning environments is not remotely contentious. Jim Groom’s ds106 uses a WordPress-based aggregation system, the current generation of which was built by Alan Levine. Stephen Downes built gRSShopper ages ago. Both of these systems are RSS aggregators at heart. That second post of mine on the LMOS service broker, which gives a concrete example of how such a thing would work, mainly focuses on how much you could do by fully exploiting the rich metadata in an RSS feed and how much more you could do with it if you just added a couple of simple supplemental APIs. And maybe a couple of specialized record types (like iCal, for example) that could be syndicated in feeds similarly to RSS. While my colleagues and I were thinking about the LMOS as an institution-hosted enterprise application, there’s nothing about the service broker that requires it to be so. In fact, if you add some extra bits to support federation, it could just as easily form the backbone of a distributed network of personal learning environments. And that, in fact, is a pretty good description of the IMS standard in development called Caliper, which is why I am so interested in it. In my recent post about walled gardens from the series that Jonathan mentions in his own post, I tried to spell out how Caliper could enable either a better LMS, a better world without an LMS, or both simultaneously.

Setting aside all the technical gobbledygook, here’s what all this hub/aggregator/broker stuff amounts to:

  • Jonathan wants to “have it all,” by which he means full access to the wide world of resources on the internet. Great! Easily done.
  • The internet has lots of great stuff but is not organized to make that stuff easy to find or reduce the number of clicks it takes you to see a whole bunch of related stuff. So it would be nice to have the option of organizing the subset of stuff that I need to look at for a class in ways that are convenient for me and make minimal demands on me in terms of forcing me to go out and proactively look to see what has changed in the various places where there might be activity for my class.
  • Sometimes the stuff happening in one place on the internet is related to stuff happening in another place in ways that are relevant to my class. For example, if students are writing assignments on their blogs, I might want to see who has gotten the assignment done by the due date and collect all those assignments in one place that’s convenient for me to comment on them and grade them. It would be nice if I had options of not only aggregating but also integrating and correlating course-related information.
  • Sometimes I may need special capabilities for teaching my class that are not available on the general internet. For example, I might want to model molecules for chemistry or have a special image viewer with social commenting capabilities for art history. It would be nice if there were easy but relatively rich ways to add custom “apps” that can feed into my aggregator.
  • Sometimes it may be appropriate and useful (or even essential) to have private educational conversations and activities. It would be nice to be able to do that when it’s called for and still have access to the whole public internet, including the option to hold classes mostly “in public.”

In an ideal world, every class would have its own unique mix of these capabilities based on what’s appropriate for the students, teacher, and subject. Not every class needs all of these capabilities. In fact, there are plenty of teachers who find that their classes don’t need any of them. They do just fine with WordPress. Or a wiki. Or a listserv. Or a rock and a stick. And these are precisely the folks who complain the loudest about what a useless waste the LMS is. It’s a little like an English professor walking into a chemistry lab and grousing, “Who the hell designed this place? You have these giant tables which are bolted to the floor in the middle of the room, making it impossible to have a decent class conversation. And for goodness sake, the tables have gas jets on them. Gas jets! Of all the pointless, useless, preposterous, dangerous things to have in a classroom…! And I don’t even want to know how much money the college wasted on installing this garbage.”

Of course, today’s LMS doesn’t look much like what I described in the bullet points above (although I do think the science lab analogy is a reasonable one even for today’s LMS). It’s fair to ask why that is the case. Some of us have been talking about this alternative vision for something that may or may not be called an “LMS” for a decade or longer now. And there are folks like Brian Whitmer at LMS companies (and LMS open source projects) saying that they buy into this idea. Why don’t our mainstream platforms look like this yet?

Why We Can’t Have Nice Things

Let’s imagine another world for a moment. Let’s imagine a world in which universities, not vendors, designed and built our online learning environments. Where students and teachers put their heads together to design the perfect system. What wonders would they come up with? What would they build?

Why, they would build an LMS. They did build an LMS. Blackboard started as a system designed by a professor and a TA at Cornell University. Desire2Learn (a.k.a. Brightspace) was designed by a student at the University of Waterloo. Moodle was the project of a graduate student at Curtin University in Australia. Sakai was built by a consortium of universities. WebCT was started at the University of British Columbia. ANGEL at Indiana University.

OK, those are all ancient history. Suppose that now, after the consumer web revolution, you were to get a couple of super-bright young graduate students who hate their school’s LMS to go on a road trip, talk to a whole bunch of teachers and students at different schools, and design a modern learning platform from the ground up using Agile and Lean methodologies. What would they build?

They would build Instructure Canvas. They did build Instructure Canvas. Presumably because that’s what the people they spoke to asked them to build.

In fairness, Canvas isn’t only a traditional LMS with a better user experience. It has a few twists. For example, from the very beginning, you could make your course 100% open in Canvas. If you want to teach out on the internet, undisguised and naked, making your Canvas course site just one class resource of many on the open web, you can. And we all know what happened because of that. Faculty everywhere began opening up their classes. It was sunlight and fresh air for everyone! No more walled gardens for us, no sirree Bob.

That is how it went, isn’t it?

Isn’t it?

I asked Brian Whitmer the percentage of courses on Canvas that faculty have made completely open. He didn’t have an exact number handy but said that it’s “really low.” Apparently, lots of faculty still like their gardens walled. Today, in 2014.

Canvas was a runaway hit from the start, but not because of its openness. Do you know what did it? Do you know what single set of capabilities, more than any other, catapulted it to the top of the charts, enabling it to surpass D2L in market share in just a few years? Do you know what the feature set was that had faculty from Albany to Anaheim falling to their knees, tears of joy streaming down their faces, and proclaiming with cracking, emotion-laden voices, “Finally, an LMS company that understands me!”?

It was Speed Grader. Ask anyone who has been involved in an LMS selection process, particularly during those first few years of Canvas sales.

Here’s the hard truth: While Jonathan wants to think of the LMS as “training wheels” for the internet (like AOL was), there is overwhelming evidence that lots of faculty want those training wheels. They ask for them. And when given a chance to take the training wheels off, they usually don’t.

Let’s take another example: roles and permissions.[2] Audrey Watters recently called out inflexible roles in educational software (including but not limited to LMSs) as problematic:

Ed-tech works like this: you sign up for a service and you’re flagged as either “teacher” or “student” or “admin.” Depending on that role, you have different “privileges” — that’s an important word, because it doesn’t simply imply what you can and cannot do with the software. It’s a nod to political power, social power as well.

Access privileges in software are designed to enforce particular ways of working together, which can be good if and only if everybody agrees that the ways of working together that the access privileges are enforcing are the best and most productive for the tasks at hand. There is no such thing as “everybody agrees” on something like the one single best way for people to work together in all classes. If the access privileges (a.k.a. “roles and permissions”) are not adaptable to the local needs, if there is no rational and self-evident reason for them to be structured the way they are, then they end up just reinforcing the crudest caricatures of classroom power relationships rather than facilitating productive cooperation. Therefore, standard roles and permissions often do more harm than good in educational software. I complained about this problem in 2005 when writing about the LMOS and again in 2006 when reviewing an open source LMS from the UK called Bodington. (At the time, Stephen Downes mocked me for thinking that this was an important aspect of LMS design to consider.)

Bodington had radically open permissions structures. You could attach any permissions (read, write, etc.) to any object in the system, making individual documents, discussions, folders, and what have you totally public, totally private, or somewhere in between. You could collect sets of permissions and define them as any roles that you wanted. Bodington also, by the way, had no notion of a “course.” It used a geographical metaphor. You would have a “building” or a “floor” that could house a course, a club, a working group, or anything else. In this way, it was significantly more flexible than any LMS I had seen before.

Of course, I’m sure you’ve all heard of Bodington, its enormous success in the market, and how influential it’s been on LMS design.[3]

What’s that? You haven’t?

Huh.

OK, but surely you’re aware of D2L’s major improvements in the same area. If you recall your LMS patent infringement history, then you’ll remember that roles and permissions were exactly the thing that Blackboard sued D2L over. The essence of the patent was this: Blackboard claimed to have invented a system where the same person could be given the role of “instructor” in one course site and the role of “student” in another. That’s it. And while Blackboard eventually lost that fight, there was a court ruling in the middle in which D2L was found to have infringed on the patent. In order to get around it, the company ripped out its predefined roles, making it possible (and necessary) for every school to create its own. As many as they want. Defined however they want. I remember Ken Chapman telling me that, even though it was the patent suit that pushed him to think this way, in the end he felt that the new way was a significant improvement over the old way of doing things.

And the rest, as you know, was history. The Chronicle and Inside Higher Ed wrote pieces describing the revolution on campuses as masses of faculty demanded flexible roles and permissions. Soon it caught the attention of Thomas Friedman, who proclaimed it to be more evidence that the world is indeed flat. And the LMS market has never been the same since.

That is what happened…right?

No?

Do you want to know why the LMS has barely evolved at all over the last twenty years and will probably barely evolve at all over the next twenty years? It’s not because the terrible, horrible, no-good LMS vendors are trying to suck the blood out of the poor universities. It’s not because the terrible, horrible, no-good university administrators are trying to build a panopticon in which they can oppress the faculty. The reason that we get more of the same year after year is that, year after year, when faculty are given an opportunity to ask for what they want, they ask for more of the same. It’s because every LMS review process I have ever seen goes something like this:

  • Professor John proclaims that he spent the last five years figuring out how to get his Blackboard course the way he likes it and, dammit, he is not moving to another LMS unless it works exactly the same as Blackboard.
  • Professor Jane says that she hates Blackboard, would never use it, runs her own Moodle installation for her classes off her computer at home, and will not move to another LMS unless it works exactly the same as Moodle.
  • Professor Pat doesn’t have strong opinions about any one LMS over the others except that there are three features in Canvas that must be in whatever platform they choose.
  • The selection committee declares that whatever LMS the university chooses next must work exactly like Blackboard and exactly like Moodle while having all the features of Canvas. Oh, and it must be “innovative” and “next-generation” too, because we’re sick of LMSs that all look and work the same.

Nobody comes to the table with an affirmative vision of what an online learning environment should look like or how it should work. Instead, they come with this year’s checklists, which are derived from last year’s checklists. Rather than coming with ideas of what they could have, they come with their fears of what they might lose. When LMS vendors or open source projects invent some innovative new feature, that feature gets added to next year’s checklist if it avoids disrupting the rest of the way the system works and mostly gets ignored or rejected to the degree that it enables (or, heaven forbid, requires) substantial change in current classroom practices.

This is why we can’t have nice things. I understand that it is more emotionally satisfying to rail against the Powers That Be and ascribe the things that we don’t like about ed tech to capitalism and authoritarianism and other nasty isms. And in some cases there is merit to those accusations. But if we were really honest with ourselves and looked at the details of what’s actually happening, we’d be forced to admit that the “ism” most immediately responsible for crappy, harmful ed tech products is consumerism. It’s what we ask for and how we ask for it. As with our democracy, we get the ed tech that we deserve.

In fairness to faculty, they don’t always get an opportunity to ask good questions. For example, at Colorado State University, where Jonathan works, the administrators, in their infinite wisdom, have decided that the best course of action is to choose their next LMS for their faculty by joining the Unizin coalition. But that is not the norm. In most places, faculty do have input but don’t insist on a process that leads to a more thoughtful discussion than compiling a long list of feature demands. If you want to agitate for better ed tech, then changing the process by which your campus evaluates educational technology is the best place to start.

There. I did it. I wrote the damned “future of the LMS” post. And I did it mostly by copying and pasting from posts I wrote 10 years ago. I am now going to go pour myself a drink. Somebody please wake me again in another decade.

  1. Remember that term?
  2. Actually, it’s more of an extension of the previous example. Roles and permissions are what make a garden walled or not, which is another reason why they are so important.
  3. The Bodington project community migrated to Sakai, where some, but not all, of its innovations were transplanted to the Sakai platform.

The post Dammit, the LMS appeared first on e-Literate.

Selecting a Data Warehouse Appliance [VIDEO]

Chris Foot - Mon, 2014-11-10 11:57

Transcript

Hi, welcome to RDX! Selecting a data warehouse appliance is a very important decision to make. The amount of data that companies store is continuously increasing, and DBAs now have many data storage technologies available to them. Uninformed decisions may cause a number of problems including limited functionality, poor performance, lack of scalability, and complex administration.

Oracle, Microsoft, and IBM understand the common data warehousing challenges DBAs face and offer data warehouse appliances that help simplify administration and help DBAs effectively manage large amounts of data.

Need help determining which data warehouse technology is best for your business? Be sure to check out RDX VP of Technology, Chris Foot’s, recent blog post, Data Warehouse Appliance Offerings, where he provides more details about each vendor’s architecture and the benefits of each.

Thanks for watching. See you next time!
 

The post Selecting a Data Warehouse Appliance [VIDEO] appeared first on Remote DBA Experts.

Presentations to go to at #DOAG2014

The Oracle Instructor - Mon, 2014-11-10 11:26

As every year, there’s a long list of great speakers with interesting talks to attend at the DOAG (German Oracle User Group) annual conference. Sadly I cannot attend them all, so I’ve got to make a choice:

First day

Datenbank-Upgrade nach Oracle 12.1.0.2 – Aufwand, Vorgehen, Kunden by Mike Dietrich, Oracle

Die unheimliche Begegnung der dritten Art: XML DB für den DBA by Carsten Czarski, Oracle

Advanced RAC Programming Features by Martin Bach, Enkitec

Automatische Daten Optimierung, Heatmap und Compression 12c live by Ulrike Schwinn, Oracle

Second day

Understanding Oracle RAC Internals – The Cache Fusion Edition by Markus Michalewicz, Oracle

Die Recovery Area: Warum ihre Verwendung empfohlen ist – I have to go to that one because I present it myself :-)

Geodistributed Oracle GoldenGate and Oracle Active Data Guard: Global Data Services by Larry Carpenter, Oracle

Oracle Database In-Memory – a game changer for data warehousing? by Hermann Baer & Maria Colgan, Oracle

Oracle Distributed Transactions by Joel Goodman, Oracle

Third day

High Noon – Bessere Überlebenschancen beim Datenbank Security Shoot Out by Heinz-Wilhelm Fabry, Oracle

Tuning Tools für echte Männer und Sparfüchse – vom Leben ohne EM12c by Björn Rost, portrix Systems

Best Practices in Managing Oracle RAC Performance in Real Time by Mark Scardina, Oracle

Maximum Availability with Oracle Multitenant: Seeing Is Believing by Larry Carpenter, Oracle


Tagged: #DOAG2014
Categories: DBA Blogs

Penguins and Conferences

Floyd Teter - Mon, 2014-11-10 10:13
I just came back from the East Coast Oracle User Group conference.  Good conference.  Lots of solid, technical knowledge being shared.  Being there got me to thinking...

Over the past few years, a big concern for people attending conferences is the need to justify their attendance.  It's a big deal.  And, in my own mind, the only real justification is what you bring back, share and apply post-conference.  Let me tell you a story (can you hear all of my children groaning in the background?).

All the penguins in my neighborhood get together for a little meeting every month.  They talk about the happenings around the neighborhood, complain about the weather, catch up with each other, share info on where the fish are, and all sorts of things.  It's just a little social gathering.  At least, it was until last month.

Last month, a new penguin stopped by.  He was on his way north, looking for better penguin weather.  And he was flying!  The local penguin crew was stunned because, as everybody knows, penguins can't fly.  But the new bird promised to teach them all to fly.  And, after about four hours of instruction and practice, all those penguins were flying.  Soaring.  Barrel rolls.  Loops.  Bomber dives.  Spins.  What a bunch of happy penguins, high-fiving each other and laughing about the new knowledge and skills they acquired.

After another four hours, those penguins were exhausted.  Huffing and puffing.  Soreness from muscles they didn't even know they had.  But they were exhilarated. They all agreed it was a spectacular day.

And then they all walked home...

You want to justify your attendance at a conference?  Be smarter than my local penguins.

The Cloud UX Lab

Oracle AppsLab - Mon, 2014-11-10 09:57

There’s a post over on VoX about a new OAUX lab at Oracle HQ, the Cloud UX Lab.


Jeremy Ashley, VP, in the new lab, image used with permission.

Finished just before OOW in September, this lab is a showcase for OAUX projects, including a few of ours.

The lab reminds me of a spacecraft from the distant future, the medical bay or the flight deck. It’s a very cool place, directly inspired and executed by our fearless leader, Jeremy Ashley (@jrwashley), an industrial designer by trade.

I actually got to observe the metamorphosis of this space from something that felt like a doctor’s office waiting room into the new hotness. Looking back on those first meetings, I never expected it would turn out so very awesome.

Anyway, the reason why I got to tag along on this project is because our team will be filling the control room for this lab with our demos. Noel (@noelportugal) and Jeremy have a shared vision for that space, which will be a great companion piece to the lab and equally awesome.

So, if you’re at Oracle HQ, book a tour and stop by the new Cloud UX Lab, experience the new hotness and speculate on what Noel is cooking up behind the glass.

Seeing slow startup of SOA, OSB and other Java-based applications? Then verify Entropy

Arun Bavera - Mon, 2014-11-10 09:20
We faced slow domain creation and slow domain startup, and resolved both by using proper entropy settings.
You should be able to select the faster-but-slightly-less-secure /dev/urandom on Linux via $JAVA_HOME/jre/lib/security/java.security.
There, /dev/urandom is configured by default, but as mentioned this is ignored by Java, so you have to pass it explicitly:
-Djava.security.egd=file:/dev/urandom
However, this doesn’t work with Java 5 and later (Java Bug 6202721). The suggested work-around is to use:
-Djava.security.egd=file:/dev/./urandom (note the extra ‘/./’)
 
You can also set it in your environment, for example in setDomainEnv.sh:
if [ "${USER_MEM_ARGS}" != "" ] ; then
  MEM_ARGS="${USER_MEM_ARGS}"
  export MEM_ARGS
fi
MEM_ARGS="${MEM_ARGS} -Djava.security.egd=file:/dev/./urandom"
 
 
Or at runtime:
export CONFIG_JVM_ARGS="-Djava.security.egd=file:/dev/./urandom"
/u01/app/oracle/product/fmw/wlserver_12.1/common/bin/config.sh
 
References:
http://theheat.dk/blog/?p=1539
http://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom


Categories: Development

Pass summit 2014: My favorite sessions of this great event

Yann Neuhaus - Mon, 2014-11-10 08:19

The Pass Summit 2014 is now over and it’s time for us to go back home. I want to share my favorite sessions with you in this post.


 

Pass Summit was really an amazing event regardless of the expertise area (BI or SQL Server engine). This was also a good opportunity to meet SQL Server guys from other countries as well as the French SQL Server community. We attended a lot of interesting sessions and I admit it was often difficult to choose between two or more sessions at the same time. Nevertheless, here is my list of favorite sessions I was able to participate in:

 


SQL Server 2014 AlwaysOn (high availability and disaster recovery) and troubleshooting – Kevin Farlee & Trayce Jordan

These sessions were a good reminder of what SQL Server AlwaysOn exactly means. Indeed, AlwaysOn is only a label that covers two main technologies: SQL Server FCI and availability groups. Kevin Farlee described some improvements provided by SQL Server 2014 for availability groups, like diagnostics, availability of read-only replicas, network resilience that reduces node eviction, usage of cluster shared volumes with SQL Server 2014 FCI, and others. I still have some blog posts on these topics in my todo list. We also had interesting feedback from the support team concerning different availability group issues and how they resolved them. It was very funny when Trayce Jordan asked us for our own experiences with availability group issues. In my case, I know I have already had to face some, and it seems some other people in the room have too :-)

 

Latches, locks, free structures and SQL Server IO – Bob Ward & Klaus Aschenbrenner

I have to admit my brain was sometimes burning during both sessions … which is what we can expect from a 500-level session :-). To summarise what I took away from these two sessions: on the one hand we learned about latches and spinlocks, how they can be problematic for critical workloads, and how we can analyse and resolve them. On the other hand, we learned about the low-level Windows APIs and structures used by SQL Server for IO. We saw interesting demos with DMVs, extended events and the Windows debugger. Personally, I’m a fan of these topics and I hope to give some sessions on this subject in the future.


 


Advanced data recovery techniques – Paul Randal

A very interesting session by Paul Randal in which he described different corruption cases and how to resolve them. Most of the session was dedicated to demos, and I was able to see that some corruptions are relatively simple to repair while others are not, as they require strong skills regarding the internal structure of SQL Server pages. A powerful command that Paul used during the session is DBCC WRITEPAGE, which corrupts database pages. I have occasionally used this command on my blog. Be careful not to use it in production! Probably the most important lesson to learn from this session: Practice! Practice! Practice!

 

Query Tuning Mastery: Manhandling Parallelism, 2014 Edition – Adam Machanic

A very impressive session on how to deal with parallelism in SQL Server. Adam already gave a session on this subject in 2012, and the new edition is on the same level as the previous one. This is the type of session you have to listen to more than once to hope to understand all of the information. The same applies to the sessions by Bob Ward and Paul Randal. A bit of work in sight...


 


High Performance Infrastructure for SQL Server 2014 – Michael Frandson

My last session of this summit and a good surprise for me! I didn’t know Michael before; he specializes in storage, scalability, and virtualization and has contributed to a lot of projects with Microsoft and SQLCat. Michael discussed new storage features in Windows Server 2012 / R2 and how they relate to SQL Server. We had an overview of InfiniBand, SMB, multi-path IO, RDMA, RoCE and NAND flash at a steady pace. Then we discussed SQL Server features related to storage, like buffer pool extension, in-memory tables, and the usage of local disks with SQL Server FCI. Finally, Michael finished his session with a pretty smart reminder: SQL Server performance is achieved both at the application level (developers) and at the infrastructure level (DBAs / architects).

 

Before leaving Seattle, we had our ceremonial breakfast at the Seattle marketplace and our traditional burger party at Johnny Rockets ...

 


 

See you at the next Pass summit!

Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types

Michael Feldstein - Mon, 2014-11-10 08:13

With the annual Kuali conference – Kuali Days – starting today in Indianapolis, the big topic should be the August decision to move from a community source to a professional open source model, moving key development to a commercial entity, the newly-formed KualiCo. Now there will be two new announcements for the community to discuss, both centering on an esoteric license choice that could have far-reaching implications. Both the announcement of the Ariah Group as a new organization to support Kuali products and the statement from the Apereo Foundation center on the difference between Apache-style and AGPL licenses.

AGPL and Vendor Protection

Kuali previously licensed its open source code under the Educational Community License (ECL), a derivative of the standard Apache license that is designed to be permissive in terms of allowing organizations to contribute modified open source code while mixing it with code under different licenses – including proprietary ones. This license is ‘permissive’ in the sense that the derived, remixed code may be licensed in different manners. It is generally thought that this license type gives the most flexibility for developing a community of contributors.

With the August pivot to Kuali 2.0 / KualiCo, the decision was made to fork and relicense any Kuali code that moves to KualiCo to use the Affero General Public License (AGPL), a derivative of the original GPL license and a form of “copyleft” licensing that allows derivative works but requires the derivatives to use the same license. The idea is to ensure that open source code remains open. No commercial entity can create derivative works and license them under different terms.

The problem is when you have asymmetric AGPL licenses – where the copyright holder such as KualiCo does not have the same restrictions as all other users or developers of the code. Kuali has already announced that the multi-tenant cloud-hosting code to be developed by KualiCo will be proprietary and not open source. As the copyright holder, this is their right. Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source. If you want to understand how this choice might create vendor lock-in, even using an open source license, go read Charles Severance’s post. Update: fixed wording about sharing requirements.

To their credit, the Kuali Foundation and KualiCo are very open about the intention of this license change, as described at Inside Higher Ed from a month ago.

[Barry] Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

The short answer: everything — from the student information system to library management software — but the one thing institutions that download the software for free won’t be able to do is provide multi-tenant support (in other words, one instance of the software accessed by multiple groups of users, a feature large university systems may find attractive). To unlock that feature, colleges and universities need to pay KualiCo to host the software in the cloud, which is one way the company intends to make money.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

My post clarifying this interaction can be found here.

Enter Ariah Group

On Friday of last week, the newly formed Ariah Group sent out an email announcing a new support option for Kuali products.

Strong interest has been expressed in continuing to provide open source support for Kuali® products, therefore The Ariah Group, a new nonprofit entity, has been established for those who wish to continue and enhance that original open source vision.

We invite you to join us. The community is open to participants of all kinds with a focus on making open source more accessible. The goal will be to deliver quality open source products for Finance, Human Resources, Student, Library, Research, and Continuity Planning. The Ariah Group will collaborate to offer innovative new products to enhance the suite and support the community. All products will remain open source and use the Apache License, Version 2.0 (http://opensource.org/licenses/Apache-2.0) for new contributions. A number of institutions and commercial vendors will be announcing their support in the coming days and weeks.

To join or learn more visit The Ariah Group at http://ariahgroup.org/

Who is the Ariah Group? While details are scarce, this new organization seems to be based on 2 – 3 current and former Kuali vendors. As can be seen from their incomplete website, the details have not been worked out. The group has identified an Executive Director, based on an email exchange I had with the company.

The only vendor that I can confirm is part of Ariah is Moderas, the former Kuali Commercial Affiliate that was removed as an official vendor in September (left or kicked out, depending on which side you believe; I’d say it was a mutual decision). I talked to Chris Thompson, co-founder of Moderas, who said that he understood the business rationale for the move to the Professional Open Source model but had a problem with the community aspects. The Kuali Foundation made a business decision to adopt AGPL and shift development to KualiCo, which makes sense in his telling, but the decision did not include real involvement from the Kuali Community. Chris sees that the situation has changed Kuali from a collaborative to a competitive environment, with KualiCo holding most of the cards.

This is the type of thinking behind the Ariah Group announcement – going back to the future. As described on the website:

We’ve been asked if we’re “branching the code” as we’ve discussed founding Ariah and our response has been that we feel that in fact the Kuali Foundation is branching with their new structure that includes a commercial entity who will set the development priorities and code standards that may deviate from the current Java technology stack in use. At Ariah our members will set the priorities as it was and as it should be in any truly open source environment. Java will always be our technology stack as we understand the burden that changing could cause a massive impact to our members.

This is an attempt to maintain some of the previous Kuali model including an Apache license (very close to ECL) and the same technology stack. But this approach raises two questions: How serious is this group (including whether they are planning to raise investment capital)? And why would Ariah expect to succeed when Kuali was unable to deliver on this model?

While this move by Ariah would have to be considered high risk, at least in its current form without funding secured or details worked out, it adds a new set of risks for Kuali itself as the Kuali Days conference begins. Kuali is in a critical period where the Foundation is seeking to get partner institutions to sign agreements to support KualiCo, contributing both cash and project staff. Based on input from multiple sources, only the University of Maryland has already signed a Memo of Understanding and agreed to this move for the Kuali Student project. Will the Ariah Group announcement cause schools to either reconsider upcoming decisions or even just delay decisions? Will the Kuali project functional councils be influenced by this announcement when deciding whether to move to the AGPL license?

I contacted Brad Wheeler, chair of the Kuali Foundation board, who added this comment:

Unlike many proprietary software models, Kuali was established with and continues with a software model that has always enabled institutional prerogative. Nothing new here.

Apereo Statement

In a separate but related announcement, this morning the Apereo Foundation (parent organization for Sakai, uPortal and other educational open source projects) released a statement on open source licenses.

Apereo supports the general ideas behind “copyleft” and believes that free software should stay free. However, Apereo is more interested in promoting widespread adoption and collaboration around its projects, and copyleft licenses can be a barrier to this. Specifically, the required reciprocity of copyleft licenses (like the GPL and AGPL) is viewed negatively by many potential adopters and contributors. Apereo also has a number of symbiotic relationships with other open source communities and projects with Apache-style licensing that would be hurt by copyleft licensing.

Apereo strongly encourages anyone who improves upon an Apereo project to contribute those changes back to the community. Contributing is mutually beneficial since the community gets a better project and the contributor does not have to maintain a diverging codebase. Apereo project governance bodies that feel licensing under the GPL or AGPL is necessary in their context can request permission from the Licensing & Intellectual Property Committee and the Apereo Foundation Board of Directors to select this copyleft approach to outbound licensing.

Apereo believes that the reciprocity in a copyleft open source software project should be symmetrical for everyone, specifically that all individuals and organizations involved should share any derivative works as defined in the selected outbound license. Apereo sponsored projects that adopt a copyleft approach to outbound licensing will be required to maintain fully symmetric reciprocity for all parties, including Apereo itself.

Those seeking further information on copyleft licensing, including potential pitfalls of asymmetric application, should read chapter 11 of the “Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide – Integrating the GPL into Business Practices”. This can be found at –

http://www.copyleft.org/guide/comprehensive-gpl-guidech12.html#x15-10400011.2

While Kuali would appear to be one of the triggers for this statement, there are other educational changes to consider such as the Open edX change from AGPL to Apache (reverse of Kuali) for its XBlock code. From the edX blog post describing this change:

The XBlock API will only succeed to the extent that it is widely adopted, and we are committed to encouraging broad adoption by anyone interested in using it. For that reason, we’re changing the license on the XBlock API from AGPL to Apache 2.0.

The Apache license is permissive: it lets adopters and extenders do what they want with their changes. They can release them under a copyleft license like AGPL, or a permissive license like Apache, or even keep them closed-source.

Methods Matter

I’ll be interested to see any news or outcomes from the Kuali Days conference, and these two announcements should affect the license discussions at the conference. What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned seem to think the KualiCo creation makes some sense. The real frustration and pushback has been on how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

It’s too early to tell if the Ariah Group will have any significant impact on the Kuali community or not, but the issue of license types should have a growing importance in educational technology discussions moving forward.

The post Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types appeared first on e-Literate.

poor man ActiveDirectory password checker

Laurent Schneider - Mon, 2014-11-10 07:45

To have the same users in multiple databases and no single sign on is quite a nightmare for password expiration, synchronisation and validation.

You probably were discouraged by the long long route to kerberos, where the 11.2.0.2 bugs are fixed in 11.2.0.4, the 12.1 bugs are fixed in 12.2. And lots of system changes that won’t be welcomed by your sysadmins / winadmins.

Okay, to partly cover the password expiration issue, you could check in a profile function that the password is the one from AD.

Firstly, without SSL


CREATE OR REPLACE FUNCTION pw_function_AD
(username varchar2,
 password varchar2,
 old_password varchar2)
RETURN boolean IS
  sess raw(32);
  rc number;
BEGIN
  sess := DBMS_LDAP.init(
    'example.com',dbms_ldap.PORT);
  rc := DBMS_LDAP.simple_bind_s(
    sess, username||'@example.com', 
    password);
  rc := DBMS_LDAP.unbind_s(sess);
  RETURN(TRUE);
EXCEPTION
  WHEN OTHERS THEN
    rc := DBMS_LDAP.unbind_s(sess);
    raise;
END;
/
GRANT EXECUTE ON pw_function_ad TO PUBLIC;
CREATE PROFILE AD LIMIT 
  PASSWORD_VERIFY_FUNCTION pw_function_AD;
ALTER PROFILE AD LIMIT 
  PASSWORD_LIFE_TIME 30;
ALTER PROFILE AD LIMIT 
  PASSWORD_REUSE_MAX UNLIMITED;

alter user lsc profile AD;

When the password expires, the user must change it to its AD Password.

If I try with a dummy password, the profile will reject this


SQL> conn lsc/pw1
ERROR:
ORA-28001: the password has expired

Changing password for lsc
New password:anypassword
Retype new password:anypassword
ERROR:
ORA-28003: password verification for 
  the specified password failed
ORA-31202: DBMS_LDAP: LDAP client/server 
  error: Invalid credentials. 
  80090308: LdapErr: DSID-0C0903A9, 
  comment: AcceptSecurityContext error, 
    data 52e, v1db1
Password unchanged
Warning: You are no longer connected to ORACLE.

I need to enter my Windows password


SQL> conn lsc/pw1
ERROR:
ORA-28001: the password has expired

Changing password for lsc
New password: mywindowspassword
Retype new password: mywindowspassword
Password changed
Connected.

Secondly, with SSL.

Maybe simple bind without SSL is not possible (check http://support.microsoft.com/kb/935834). And for sure it is better not to send an unencrypted plain-text password over the network.

Create a wallet, protected by a password, containing the ROOT Certification Authority that signed your AD certificate. You can probably download this from the trusted root certification authorities in Internet Explorer.

Internet Explorer – Tools – Internet Options – Content – Certificates – Trusted root.

Then you create an ewallet.p12 with orapki. No need for a user certificate and no need for single sign-on. Only import the trusted root (and intermediate certificates if applicable).

Here is the modified code


CREATE OR REPLACE FUNCTION pw_function_AD
(username varchar2,
 password varchar2,
 old_password varchar2)
RETURN boolean IS
  sess raw(32);
  rc number;
BEGIN
  sess := DBMS_LDAP.init(
    'example.com',dbms_ldap.SSL_PORT);
  rc := DBMS_LDAP.open_ssl(
    sess, 'file:/etc/wallet/MSAD', 
    'welcome1', 2);
  rc := DBMS_LDAP.simple_bind_s(
    sess, username||'@example.com', 
    password);
  rc := DBMS_LDAP.unbind_s(sess);
  RETURN(TRUE);
EXCEPTION
  WHEN OTHERS THEN
    rc := DBMS_LDAP.unbind_s(sess);
    raise;
END;
/

If you get an SSL handshake error, be prepared, it could be anything! Check your wallet, your certificate, your permissions, your wallet password.

One step further could be to expire users as soon as they change their password in AD or when they expire there.

For instance with powershell goodies for active directory


PS> (Get-ADuser lsc -properties PasswordLastSet).PasswordLastSet

Montag, 6. Oktober 2014 08:18:23

PS> (Get-ADuser king -properties AccountExpirationDate).AccountExpirationDate

Mittwoch, 16. Juli 2014 06:00:00

And in the database


SQL> SELECT ptime FROM sys.user$ 
  WHERE name ='LSC';

PTIME
-------------------
2014-11-10_10:33:08

If PTIME is less than PasswordLastSet or if AccountExpirationDate is not null, expire the account.
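
For instance, a minimal sketch of that last step in plain SQL (assuming the comparison itself is done outside the database, for example in the PowerShell snippet above):

-- force a password change at next login; the profile function then re-validates against AD
ALTER USER lsc PASSWORD EXPIRE;
-- lock the account if it has expired in AD
ALTER USER king ACCOUNT LOCK;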

In conclusion: if you do not want to use Kerberos, Oracle “OctetString” Virtual Directory (OVD) or Oracle Internet Directory (OID), this workaround may help to increase your security by addressing the “shared” and “expired” accounts problem.

There is an additional hidden benefit: you could set up a self-service password reset function and send a generated, already-expired password by mail, which the user won’t be able to change without knowing his AD password.
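
As a rough sketch only (the procedure name is made up for illustration; it assumes the AD profile and pw_function_AD from above, that UTL_MAIL is installed and SMTP_OUT_SERVER is configured, and error handling is omitted):

CREATE OR REPLACE PROCEDURE reset_to_expired_pw(
  p_user varchar2,
  p_mail varchar2) IS
  l_usr varchar2(130) := dbms_assert.enquote_name(p_user);
  l_pw  varchar2(12);
BEGIN
  -- generate a random temporary password
  l_pw := DBMS_RANDOM.string('X', 12);
  -- temporarily detach the AD profile, otherwise pw_function_AD would
  -- reject a password that is not the AD password
  EXECUTE IMMEDIATE 'ALTER USER ' || l_usr || ' PROFILE DEFAULT';
  -- set the generated password and expire it immediately
  EXECUTE IMMEDIATE 'ALTER USER ' || l_usr ||
    ' IDENTIFIED BY "' || l_pw || '" PASSWORD EXPIRE';
  -- re-attach the AD profile so the forced change only accepts the AD password
  EXECUTE IMMEDIATE 'ALTER USER ' || l_usr || ' PROFILE AD';
  -- mail the temporary password to the user
  UTL_MAIL.send(
    sender     => 'dba@example.com',
    recipients => p_mail,
    subject    => 'Your temporary database password',
    message    => 'Temporary password: ' || l_pw);
END;
/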

Past Sacrifice

Pete Scott - Mon, 2014-11-10 07:39
This year is the 100th anniversary of the beginning of World War 1. This was not the war to end wars it was said to be, instead it was the transition from the direct man-on-man combat of earlier conflicts to a scientific, engineered, calculated way of killing people efficiently, and not just military personnel. Explosives […]

ECO 2014 and Slides

DBASolved - Mon, 2014-11-10 07:30

Last week I attended the East Coast Oracle User Group conference, also known as ECO, in Raleigh, NC.  This being my first time at ECO, it was a good event for being a two day conference.  The low-key environment provided a nice, comfortable environment for interaction between the speakers and those in attendance.  If you ever have the chance to catch this conference, it would be a good one to attend.

What you can expect from ECO is to see great speakers, both local to Raleigh and from around the country. There are also opportunities to see speakers that we all hear about and would like to see at some point.  As one of the speakers at this year’s conference, I have to say it was nice to have great attendance for my session on Oracle GoldenGate 12c Conflict Detection and Resolution.  My session was scheduled for 45 minutes; due to discussions throughout the session it lasted about 65 minutes.  Although the session ran over, it was exciting to see so many people wanting to know more about Oracle GoldenGate and what benefits it provides to an organization.

If you would like to see the slides from my ECO session, they can be found here.

Lastly, I would like to say that ECO is one of the smaller user group conferences which seem to draw some great speakers.  Check it out next year!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Oracle Database Last Logins with Oracle 12c

Tracking when database users last logged in is a common security and compliance requirement – for example to reconcile users and identify stale users. With Oracle 12c this analysis can now be done through standard functionality. New with Oracle 12c, the SYS.DBA_USERS view has a new column: last_login.

select username, account_status, common, last_login
from sys.dba_users
order by last_login asc;

 

Username              Account_Status     Common   Last_Login
C##INTEGRIGY          OPEN               YES      05-AUG-14 12.46.52.000000000 PM AMERICA/NEW_YORK
C##INTEGRIGY_TEST_2   OPEN               YES      02-SEP-14 12.29.04.000000000 PM AMERICA/NEW_YORK
XS$NULL               EXPIRED & LOCKED   YES      02-SEP-14 12.35.56.000000000 PM AMERICA/NEW_YORK
SYSTEM                OPEN               YES      04-SEP-14 05.03.53.000000000 PM AMERICA/NEW_YORK
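
As a simple example of the kind of reconciliation this enables, the following query lists accounts that have never logged in or have not logged in for more than 90 days (the 90-day threshold is just an illustrative assumption):

select username, account_status, last_login
from sys.dba_users
where last_login is null
   or last_login < systimestamp - interval '90' day
order by last_login;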

If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

Auditing OBIEE Presentation Catalog Activity with Custom Log Filters

Rittman Mead Consulting - Mon, 2014-11-10 01:49

A question that I’ve noticed coming up a few times on the OBIEE OTN forums goes along the lines of “How can I find out who deleted a report from the Presentation Catalog?”. And whilst the BI Server’s Usage Tracking is superb for auditing who ran what report, we don’t by default have a way of seeing who deleted a report.

The Presentation Catalog (or “Web Catalog” as it was called in 10g) records who created an object and when it was last modified, accessible through both OBIEE’s Catalog view, and the dedicated Catalog Manager tool itself:


But if we want to find out who deleted an object, or maybe who modified it before the most recent person (that is, build up an audit trail of who modified an object) we have to dig a bit deeper.

Presentation Services Log Sources

Perusing the OBIEE product manuals, one will find documented additional Logging in Oracle BI Presentation Services options. This is more than just turning up the log level en masse, because it also includes additional log writers and filters. What this means is that you can have your standard Presentation Services logging, but then configure a separate file to capture more detailed information about just specific goings on within Presentation Services.

Looking at a normal Presentation Services log (in $FMW_HOME/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1/) you’ll see various messages by default – greater or fewer depending on the health of your system – but they all use the Location stack trace, such as this one here:

[2014-11-10T06:33:19.000-00:00] [OBIPS] [WARNING:16] [] [saw.soap.soaphelpers.writeiteminfocontents] [ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000001ede,0:1] [tid: 2569512704] Resolving and writing full ACL for path /shared/Important stuff/Sales by brand[[
File:soaphelpers.cpp
Line:609
Location:
        saw.soap.soaphelpers.writeiteminfocontents
        saw.soap.catalogservice
        saw.SOAP
        saw.httpserver.request.soaprequest
        saw.rpc.server.responder
        saw.rpc.server
        saw.rpc.server.handleConnection
        saw.rpc.server.dispatch
        saw.threadpool.socketrpcserver
        saw.threads
Path: /shared/Important stuff/Sales by brand
AuthProps: AuthSchema=UidPwd-soap|PWD=******|UID=weblogic|User=weblogic
ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000001ede,0:1
ThreadID: 2569512704

And it is the Location that is of interest to us here, because it’s what gives hints about the types of log messages that can be emitted and that we may want to filter. For example, the one quoted above is evidently something to do with the Presentation Catalog and SOAP, which I’d guess is a result of Catalog Manager (which uses web services/SOAP to access OBIEE).

To get a full listing of all the possible log sources, first set up the BI command line environment with bi-init:

source $FMW_HOME/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh

and then run:

sawserver -logsources

(If you get an error, almost certainly you didn’t set up the command line environment properly with bi-init). You’ll get an list of over a thousand lines (which gives you an idea of quite how powerful this granular logging is). Assuming you’ll want to peruse it at your leisure, it makes sense to write it to disk which if you’re running this on *nix you can simply do thus:

sawserver -logsources > sawserver.logsources.txt

To find what you want on the list, you can just search through it. Looking for anything related to “catalog” and narrowing it down further, I came up with these interesting sources:

[oracle@demo ~]$ sawserver -logsources|grep catalog|grep local
saw.catalog.item.getlocalized
saw.catalog.local
saw.catalog.local.checkforcatalogupgrade
saw.catalog.local.copyItem
saw.catalog.local.createFolder
saw.catalog.local.createLink
saw.catalog.local.deleteItem
saw.catalog.local.getItemACL
saw.catalog.local.getItemInfo
saw.catalog.local.loadCatalog
saw.catalog.local.moveItem
saw.catalog.local.openObject
saw.catalog.local.readObject
saw.catalog.local.search
saw.catalog.local.setItemACL
saw.catalog.local.setItemInfo
saw.catalog.local.setMaintenanceMode
saw.catalog.local.setOwnership
saw.catalog.local.writeObject

Configuring granular Presentation Services logging

Let’s see how to set up this additional logging. Remember, this is not the same as just going to Enterprise Manager and bumping the log level up to 11 globally – we’re going to retain the default logging level, but capture greater information about just the specific actions that occur within the tool. The documentation for this is here.

The configuration is found in the instanceconfig.xml file, so like all good sysadmins let’s take a backup first:

cd $FMW_HOME/instances/instance1/config/OracleBIPresentationServicesComponent/coreapplication_obips1/
cp instanceconfig.xml instanceconfig.xml.20141110

Now, depending on your poison, either open instanceconfig.xml directly in a text editor from the command line, or copy it to a desktop environment where you can open it in your favourite text editor. Either way, these are the changes we’re going to make:

  1. Locate the <Logging> section. Note that within it there are three child entities – <Writers>, <WriterClassGroups> and <Filters>. We’re going to add an entry to each.

  2. Under <Writers>, add:

    <Writer implementation="FileLogWriter" name="RM Presentation Catalog Audit" disableCentralControl="true" writerClassId="6" dir="{%ORACLE_BIPS_INSTANCE_LOGDIR%}" filePrefix="rm_pres_cat_audit" maxFileSizeKb="10240" filesN="10" fmtName="ODL-Text"/>

    This defines a new writer that will write logs to disk (FileLogWriter), in 10MB files of which it’ll keep ten. If you’re defining additional Writers, make sure each has a unique writerClassId. See the docs for detailed syntax.

  3. Under <WriterClassGroups> add:

    <WriterClassGroup name="RMLog">6</WriterClassGroup>

    This defines the RMLog class group as being associated with writerClassId 6 (as defined above), and it is used in the Filters section to direct log entries. If you wanted, you could send entries to multiple logs (e.g. both file and console) this way.

  4. Under <Filters> add:

    <FilterRecord writerClassGroup="RMLog" disableCentralControl="true" path="saw.catalog.local.moveItem" information="32" warning="32" error="32" trace="32" incident_error="32"/>
    <FilterRecord writerClassGroup="RMLog" disableCentralControl="true" path="saw.catalog.local.deleteItem" information="32" warning="32" error="32" trace="32" incident_error="32"/>

    Here we’re defining two event filters, with levels turned up to the maximum (32), directing the capture of any occurrences to the RMLog writerClassGroup.

After making the changes to instanceconfig.xml, restart Presentation Services:

$FMW_HOME/instances/instance1/bin/opmnctl restartproc ias-component=coreapplication_obips1

Here’s the completed instanceconfig.xml from the top of the file through to the end of the <Logging> section, with my changes overlaid on the defaults:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Oracle Business Intelligence Presentation Services Configuration File -->
<WebConfig xmlns="oracle.bi.presentation.services/config/v1.1">
   <ServerInstance>

      <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><CatalogPath>/app/oracle/biee/instances/instance1/SampleAppWebcat</CatalogPath>

      <DSN>AnalyticsWeb</DSN>

      <Logging>

         <Writers>
            <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><Writer implementation="FileLogWriter" name="Global File Logger" writerClassId="1" dir="{%ORACLE_BIPS_INSTANCE_LOGDIR%}" filePrefix="sawlog" maxFileSizeKb="10240" filesN="10" fmtName="ODL-Text"/>
            <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><Writer implementation="CoutWriter" name="Console Logger" writerClassId="2" maxFileSizeKb="10240"/>
            <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><Writer implementation="EventLogWriter" name="System Event Logger" writerClassId="3" maxFileSizeKb="10240"/>
            <!--  The following writer is not centrally controlled -->
            <Writer implementation="FileLogWriter" name="Webcat Upgrade Logger" disableCentralControl="true" writerClassId="5" dir="{%ORACLE_BIPS_INSTANCE_LOGDIR%}" filePrefix="webcatupgrade" maxFileSizeKb="2147483647" filesN="1" fmtName="ODL-Text"/>
            <Writer implementation="FileLogWriter" name="RM Presentation Catalog Audit" disableCentralControl="true" writerClassId="6" dir="{%ORACLE_BIPS_INSTANCE_LOGDIR%}" filePrefix="rm_pres_cat_audit" maxFileSizeKb="10240" filesN="10" fmtName="ODL-Text"/>
         </Writers>

         <WriterClassGroups>
            <WriterClassGroup name="All">1,2,3,5,6</WriterClassGroup>
            <WriterClassGroup name="File">1</WriterClassGroup>
            <WriterClassGroup name="Console">2</WriterClassGroup>
            <WriterClassGroup name="EventLog">3</WriterClassGroup>
            <WriterClassGroup name="UpgradeLogFile">5</WriterClassGroup>
            <WriterClassGroup name="RMLog">6</WriterClassGroup>
         </WriterClassGroups>

         <Filters>
            <!--  These FilterRecords are updated by centrally controlled configuration -->
            <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><FilterRecord writerClassGroup="File" path="saw" information="1" warning="31" error="31" trace="0" incident_error="1"/>
            <!--This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control--><FilterRecord writerClassGroup="File" path="saw.mktgsqlsubsystem.joblog" information="1" warning="31" error="31" trace="0" incident_error="1"/>

            <!--  The following FilterRecords are not centrally controlled -->
            <FilterRecord writerClassGroup="UpgradeLogFile" disableCentralControl="true" path="saw.subsystem.catalog.initialize.upgrade" information="1" warning="32" error="32" trace="1" incident_error="32"/>
            <FilterRecord writerClassGroup="UpgradeLogFile" disableCentralControl="true" path="saw.subsystem.catalog.upgrade" information="1" warning="32" error="32" trace="1" incident_error="32"/>
            <FilterRecord writerClassGroup="RMLog" disableCentralControl="true" path="saw.catalog.local.moveItem" information="32" warning="32" error="32" trace="32" incident_error="32"/>
            <FilterRecord writerClassGroup="RMLog" disableCentralControl="true" path="saw.catalog.local.deleteItem" information="32" warning="32" error="32" trace="32" incident_error="32"/>
         </Filters>

      </Logging>

[...]

Granular logging in action

Having restarted Presentation Services after making the above change, I can see in my new log file whenever an item from the Presentation Catalog is deleted, by whom, and from what IP address:

[2014-11-10T07:13:36.000-00:00] [OBIPS] [TRACE:1] [] [saw.catalog.local.deleteItem] [ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000002cf1,0:1] [tid: 2458068736] Succeeded with '/shared/Important stuff/Sales by brand 2'[[
File:localwebcatalog.cpp
Line:626
Location:
        saw.catalog.local.deleteItem
        saw.httpserver.processrequest
        saw.rpc.server.responder
        saw.rpc.server
        saw.rpc.server.handleConnection
        saw.rpc.server.dispatch
        saw.threadpool.socketrpcserver
        saw.threads
Path: /shared/Important stuff/Sales by brand 2
SessionID: p8n6ojs0vkh7tou0mkstmlc9me381hadm9o1fui
AuthProps: AuthSchema=UidPwd|PWD=******|UID=r.mellie|User=r.mellie
ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000002cf1,0:1
ThreadID: 2458068736
HttpCommand: CatalogTreeModel
RemoteIP: 192.168.57.1
HttpArgs: action='rm',_scid='QR5zMdHIL3JsW1b67P9p',icharset='utf-8',urlGenerator='qualified',paths='["/shared/Important stuff/Sales by brand 2"]'
]]

And the same for when a file is moved/renamed:

[2014-11-10T07:28:17.000-00:00] [OBIPS] [TRACE:1] [] [saw.catalog.local.moveItem] [ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000003265,0:1] [tid: 637863680] Source '/shared/Important stuff/copy of Sales by brand', Destination '/shared/Important stuff/Sales by brand 2': Succeeded with '/shared/Important stuff/copy of Sales by brand'[[
File:localwebcatalog.cpp
Line:1186
Location:
        saw.catalog.local.moveItem
        saw.httpserver.processrequest
        saw.rpc.server.responder
        saw.rpc.server
        saw.rpc.server.handleConnection
        saw.rpc.server.dispatch
        saw.threadpool.socketrpcserver
        saw.threads
Path: /shared/Important stuff/copy of Sales by brand
SessionID: ddt6eo7llcm0ohs5e2oivddj7rtrhn8i41a7f32
AuthProps: AuthSchema=UidPwd|PWD=******|UID=f.saunders|User=f.saunders
ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000003265,0:1
ThreadID: 637863680
HttpCommand: CatalogTreeModel
RemoteIP: 192.168.57.1
HttpArgs: path='/shared/Important stuff/copy of Sales by brand',action='ren',_scid='84mO8SRViXlwJ*180HV7',name='Sales by brand 2',keepLink='f',icharset='utf-8',urlGenerator='qualified'
]]

Be careful with your logging

Just because you can log everything, don’t be tempted to actually log everything. Bear in mind that we’re crossing over from simple end-user logging here into the very depths of the sawserver (Presentation Services) code, accessing logging that is extremely diagnostic in nature – which is handy for our specific purpose of tracking when someone deletes an object from the Presentation Catalog. But as an example, if you enable saw.catalog.local.writeObject event logging, you might think it will record who changed a report and when, which sounds useful. Look, though, at what gets logged every time someone saves a report:

[2014-11-10T07:19:32.000-00:00] [OBIPS] [TRACE:1] [] [saw.catalog.local.writeObject] [ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000002efb,0:1] [tid: 2454759168] Succeeded with '/shared/Important stuff/Sales 01'[[
File:localwebcatalog.cpp
Line:1476
Location:
        saw.catalog.local.writeObject
        saw.httpserver.processrequest
        saw.rpc.server.responder
        saw.rpc.server
        saw.rpc.server.handleConnection
        saw.rpc.server.dispatch
        saw.threadpool.socketrpcserver
        saw.threads
Path: /shared/Important stuff/Sales 01
SessionID: p8n6ojs0vkh7tou0mkstmlc9me381hadm9o1fui
AuthProps: AuthSchema=UidPwd|PWD=******|UID=r.mellie|User=r.mellie
ecid: 11d1def534ea1be0:15826b4a:14996b86fbb:-8000-0000000000002efb,0:1
ThreadID: 2454759168
HttpCommand: CatalogTreeModel
RemoteIP: 192.168.57.1
HttpArgs: path='/shared/Important stuff/Sales 01',action='wr',_scid='QR5zMdHIL3JsW1b67P9p',repl='t',followLinks='t',icharset='utf-8',modifiedTime='1415600931000',data='<saw:report xmlns:saw="com.siebel.analytics.web/report/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:sawx="com.siebel.analytics.web/expression/v1.1" xmlVersion="201201160"><saw:criteria xsi:type="saw:simpleCriteria" subjectArea="&quot;A - Sample Sales&quot;" withinHierarchy="true"><saw:columns><saw:column xsi:type="saw:regularColumn" columnID="c1dff1637cbc77948"><saw:columnFormula><sawx:expr xsi:type="sawx:sqlExpression">"Time"."T05 Per Name Year"</sawx:expr></saw:columnFormula></saw:column></saw:columns></saw:criteria><saw:views currentView="0"><saw:view xsi:type="saw:compoundView" name="compoundView!1"><saw:cvTable><saw:cvRow><saw:cvCell viewName="titleView!1"><saw:displayFormat><saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow><saw:cvRow><saw:cvCell viewName="tableView!1"><saw:displayFormat><saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow></saw:cvTable></saw:view><saw:view xsi:type="saw:titleView" name="titleView!1"/><saw:view xsi:type="saw:tableView" name="tableView!1" scrollingEnabled="false"><saw:edges><saw:edge axis="page" showColumnHeader="true"/><saw:edge axis="section"/><saw:edge axis="row" showColumnHeader="true"><saw:edgeLayers><saw:edgeLayer type="column" columnID="c1dff1637cbc77948"/></saw:edgeLayers></saw:edge><saw:edge axis="column" showColumnHeader="rollover"/></saw:edges></saw:view></saw:views></saw:report>',sig='queryitem1'
]]

It’s the whole report definition! And this is a very, very small report – real-life reports can be page after page of XML. That is not a good level at which to be recording this information. If you want to retain this kind of control over who is saving what report, you should perhaps be looking at authorisation groups that control where your users can save reports, and at having trusted ‘gatekeepers’ for important areas.

As well as the verbose report capture with the writeObject event, you also get this background chatter:

[2014-11-10T07:20:27.000-00:00] [OBIPS] [TRACE:1] [] [saw.catalog.local.writeObject] [ecid: 0051rj7FmC3Fw000jzwkno0007PK000000,0:200] [tid: 3034580736] Succeeded with '/users/r.mellie/_prefs/volatileuserdata'[[
File:localwebcatalog.cpp
Line:1476
Location:
        saw.catalog.local.writeObject
        saw.subsystem.security.cleanup
        saw.Sessions.cache.cleanup
        saw.taskScheduler.processJob
        taskscheduler
        saw.threads
Path: /users/r.mellie/_prefs/volatileuserdata
ecid: 0051rj7FmC3Fw000jzwkno0007PK000000,0:200
ThreadID: 3034580736
task: Cache/Sessions
]]

volatileuserdata is presumably just that (user data that is volatile, constantly changing) and not something that would be of interest to anyone to log – but you can’t capture actual report writes without capturing this too. On a busy system you’re going to be unnecessarily thrashing the log files if you capture this event as a matter of routine – so don’t!

Summary

The detailed information is there for the taking in Presentation Services’ excellent granular log sources – just be careful what you capture lest you bite off more than you can chew.

Categories: BI & Warehousing

Oracle assists professionals looking for recovery programs

Chris Foot - Mon, 2014-11-10 01:40

Between natural disasters, cybercrime and basic human error, organizations are looking for tools that support disaster recovery endeavors, as well as the professionals capable of using them.

A fair number of administrators use Oracle's advanced recovery features to help them ensure business continuity for database-driven applications. The database vendor recently unveiled a couple of new offerings that tightly integrate with the Oracle Database architecture.

Restoration and recovery
IT-Online highlighted Oracle's Zero Data Loss Recovery Appliance, the first of its kind in its ability to ensure that critical Oracle databases retain their information even if the worst should occur. The source maintained that Oracle's new architecture can protect thousands of databases using a cloud-based, centralized recovery appliance as the target.

In other words, the Recovery Appliance isn't simply built to treat databases as information repositories that need to be backed up every so often.  The appliance's architecture replicates changes in real time to ensure that the recovery databases are constantly in sync with their production counterparts.  Listed below are several features that make the architecture stand out among conventional recovery offerings:

  • Live "Redo" data is continuously transported from the databases to the cloud-based appliance protecting the most recent transactions so that servers don't sustain data loss in the event of a catastrophic failure
  • To reduce the impact on the production environment, the Recovery Appliance architecture only delivers data that has been changed, which reduces server loads and network impact
  • The appliance's automatic archiving feature allows backups to be automatically stored on low cost tape storage
  • Data stored on the appliance can be used to recreate a historical version of the database

Simplifying database management and availability 
The second application, which Oracle hopes will enhance entire database infrastructures, is Oracle Database Appliance Management. The appliance manager application allows administrators to create rapid snapshots of both databases and virtual machines, enabling them to quickly create and allocate development and test environments.

"With this update of Oracle Database Appliance software, customers can now reap the benefits of Oracle Database 12c, the latest release of the world's most popular database right out of the box," said Oracle Vice President of Product Strategy and Business Development Sohan DeMel. "We added support for rapid and space-efficient snapshots for creating test and development environments, organizations can further capitalize on the simplicity of Oracle engineered systems with speed and efficiency." 

The post Oracle assists professionals looking for recovery programs appeared first on Remote DBA Experts.

SQL Developer and Big Data Appliance (sort of)

DBASolved - Sun, 2014-11-09 20:55

Recently, Enkitec received an Oracle Big Data Appliance (BDA) for our server farm in Dallas (thanks, Accenture!).  With this new addition to the server farm, I’m excited to see what the BDA can do and how to use it.  Since I use Oracle SQL Developer for a lot of things, I figured I’d better see if I can connect to it…. wait, I don’t have access yet, darn!  Simple solution: I’ll just use the Oracle Virtual Box VM (Big Data Lite) to make sure that my SQL Developer can connect when I eventually get access.

The first thing I needed to do was download the Big Data Lite VM, which is available from the Oracle Technology Network (here). The second thing was to download the connectors for HIVE from Cloudera, using the version for the platform you need (here).

After downloading the Cloudera connectors for HIVE, they need to be unzipped to a location that SQL Developer can access. Since I’m on a MacBook Pro, I unzipped them in this location:


$ cd ~/Downloads
$ unzip ./Cloudera_HiveJDBC_2.5.4.1006.zip -d /Users/Bobby/Oracle/connectors
$ cd /Users/Bobby/Oracle/connectors
$ ls -ltr
total 21176
-rw-r--r--@ 1 Bobby  staff  5521341 Sep 10 15:16 Cloudera_HiveJDBC4_2.5.4.1006.zip
-rw-r--r--@ 1 Bobby  staff  5317239 Sep 10 15:16 Cloudera_HiveJDBC3_2.5.4.1006.zip
$ unzip ./Cloudera_HiveJDBC4_2.5.4.1006.zip -d ./Hive
$ cd ./Hive
$ ls -ltr
-r--r--r--@ 1 Bobby  staff  1083758 Sep  8 17:28 Cloudera - Simba JDBC Driver for Hive Install Guide.pdf
-rw-r--r--@ 1 Bobby  staff     9679 Sep  8 23:28 slf4j-log4j12-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff    23445 Sep  8 23:28 slf4j-api-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff   367444 Sep  8 23:28 log4j-1.2.14.jar
-rw-r--r--@ 1 Bobby  staff   347531 Sep  8 23:28 libthrift-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   275186 Sep  8 23:28 libfb303-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   294796 Sep  8 23:28 ql.jar
-rw-r--r--@ 1 Bobby  staff   596600 Sep  8 23:28 hive_service.jar
-rw-r--r--@ 1 Bobby  staff  7670596 Sep  8 23:28 hive_metastore.jar
-rw-r--r--@ 1 Bobby  staff  2972229 Sep  8 23:28 TCLIServiceClient.jar
-rw-r--r--@ 1 Bobby  staff  1656683 Sep  8 23:29 HiveJDBC4.jar

 
Once the connectors are extracted, SQL Developer needs to know which HIVE connector to use.  In this case the JDBC4 connector is required, so I unzipped the JDBC4 set of files into a directory of their own – in my case a directory called Hive.

In order to tell SQL Developer which connector to use, it needs to be specified in the interface by doing the following:

  1. Start SQL Developer
  2. Oracle SQL Developer -> Preferences
  3. Database -> Third Party JDBC Drivers -> Add Entry, then add each of the JAR files extracted into the Hive directory
  4. Restart SQL Developer

After restarting SQL Developer, we now see an option on the connection screen for Hive:

(screenshot: SQL Developer connection dialog showing the new Hive tab)

Now SQL Developer is ready to connect to a Big Data Appliance – oh, I mean to my Big Data Lite VM :) – so let’s set up a connection and see if we can connect.  Since I’m connecting to a Virtual Box VM, I need to set up some port forwarding between my MacBook and the VM.  In this case, I have set up a SQL port on 15211, which maps to the standard database port of 1521.  For the Hive connection I’ve set up 10001, which maps to port 10000.

(screenshot: Virtual Box port-forwarding rules for the database and Hive ports)

With the port forwarding in place, I can now set up SQL Developer to connect to the Hive on the Big Data Lite VM.  You will notice that a username, password, server name and port are needed.  The database parameter is optional when connecting to a Big Data Hive.

(screenshot: Hive connection settings in SQL Developer)

Once the connection is configured, I can log in to the Hive and review which tables are listed in the Big Data Lite VM.

(screenshot: Hive tables listed in the SQL Developer connection tree)

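If you’d rather check from a SQL worksheet than from the connection tree, a couple of quick HiveQL statements against the new connection will confirm it’s working (a sketch only – "my_table" below is a placeholder, so substitute one of the tables listed in your own VM):

-- List the tables visible through the Hive connection
SHOW TABLES;

-- Peek at a few rows from one of them ("my_table" is a placeholder)
SELECT *
FROM   my_table
LIMIT  10;
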
The end result is that now I can visualize the data that is in a Big Data Appliance/Lite VM and begin to interact with objects defined within.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: BigData
Categories: DBA Blogs

rmoug 2015 presentation ready to go ... makes me ( not ) grumpy

Grumpy old DBA - Sun, 2014-11-09 19:04
All ready to roll with RMOUG 2015 Training days presentation "OS Truth, little white lies, and the Oracle Wait Interface".

Best of course to come to RMOUG 2015 Training Days ... but here is a link to the PDF version of the presentation: John Hurley RMOUG 2015

If you are out there ( very excited personally my first time at this conference ) please kick me and say hello.  Ok maybe skip the kicking part ...


Categories: DBA Blogs

handy ash queries to look at blocked sessions ( how many when for what event ) ...

Grumpy old DBA - Sun, 2014-11-09 15:13
Licensing (the Oracle Diagnostics Pack, for the DBA_HIST/ASH views used below) may be required ... please check if applicable.

A query like this can be used to check how much blocking there was and which sessions were doing the blocking (so that you can then drill into those blocker sessions). It could probably be tidied up with some kind of rollup query – a sketch of that idea follows the query below.

Adjust the SAMPLE_TIMEs in the where clause below.

select ash_data.*, substr(sqlinfo.sql_text,1,70)
from
(SELECT to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS') what_time,  count(*) sessions_blocked, ash.event, ash.blocking_session, ash.blocking_session_serial#, ash.sql_id, ash.sql_opname
FROM DBA_HIST_ACTIVE_SESS_HISTORY ash
WHERE ash.SAMPLE_TIME >= TO_DATE('01-NOV-2014 13:00', 'DD-MON-YYYY HH24:MI')
  and ash.sample_time <= to_date('08-NOV-2014 17:00', 'DD-MON-YYYY HH24:MI')
-- and ash.event like 'enq: TX - row%'
and blocking_session is not null
group by to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS'), ash.event, ash.sql_id, ash.sql_opname, ash.blocking_session, ash.blocking_session_serial#
order by 1) ash_data,
v$sqlarea sqlinfo
where ash_data.sql_id = sqlinfo.sql_id
and sessions_blocked >= 1
order by what_time
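
As a sketch of the rollup idea mentioned above (untested here, and the same licensing caveat applies), GROUP BY ROLLUP gives per-sample detail plus subtotals per blocker and a grand total in a single pass:

-- Blocked-session counts per sample time and blocker; ROLLUP adds
-- subtotal rows per blocking session and a grand total row.
select to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS') what_time,
       ash.blocking_session,
       ash.blocking_session_serial#,
       count(*) sessions_blocked
from   dba_hist_active_sess_history ash
where  ash.sample_time >= to_date('01-NOV-2014 13:00', 'DD-MON-YYYY HH24:MI')
and    ash.sample_time <= to_date('08-NOV-2014 17:00', 'DD-MON-YYYY HH24:MI')
and    ash.blocking_session is not null
group  by rollup (
         (ash.blocking_session, ash.blocking_session_serial#),
         to_char(ash.sample_time,'MM/DD/YYYY HH24:MI:SS')
       )
order  by 2, 3, 1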

...

For example, once you have narrowed it down to something interesting-looking (who is blocked / what sql_id / what event), you can use something like this.  Here I am looking at the active session history for what the blockers themselves were doing or waiting on.


select * from DBA_HIST_ACTIVE_SESS_HISTORY where ( session_id, session_serial# ) in (
SELECT blocking_session, blocking_session_serial# FROM DBA_HIST_ACTIVE_SESS_HISTORY
WHERE SAMPLE_TIME > TO_DATE('06-NOV-2014 14:00', 'DD-MON-YYYY HH24:MI')
AND SAMPLE_TIME < TO_DATE('06-NOV-2014 15:00', 'DD-MON-YYYY HH24:MI')
and event like 'enq: TX - row%'
and sql_id = '0kbzgn17vbfc5' )
and SAMPLE_TIME > TO_DATE('06-NOV-2014 14:30', 'DD-MON-YYYY HH24:MI')
AND SAMPLE_TIME < TO_DATE('06-NOV-2014 15:00', 'DD-MON-YYYY HH24:MI')
order by sample_time
Categories: DBA Blogs

Elevated Task Manager Shortcut on Windows 7

Mike Rothouse - Sun, 2014-11-09 07:05
Received a replacement laptop so now I have to perform some software installs and configure it the way I had my old laptop.  To get Task Manager to display All Processes without having to select this option every time, it must run as Administrator.  Searched Google and found this link which describes how to setup […]

Oracle SQL Profile: why multiple OPT_ESTIMATE?

Yann Neuhaus - Sat, 2014-11-08 15:48

In a previous blog post I've shared my script to retrieve the OPT_ESTIMATE hints from a SQL Profile. In the example I gave, I had two lines for each table:

--- PROFILE HINTS from dbiInSite (1) statement 4fz1vtn0w8aak:
/*+
OPT_ESTIMATE(@"SEL$2CBA5DDD", TABLE, "EMPLOYEES"@"SEL$1", SCALE_ROWS=2)
OPT_ESTIMATE(@"SEL$58A6D7F6", TABLE, "EMPLOYEES"@"SEL$1", SCALE_ROWS=2)
OPT_ESTIMATE(@"SEL$6AE97DF7", TABLE, "DEPARTMENTS"@"SEL$1", SCALE_ROWS=5.185185185)
OPT_ESTIMATE(@"SEL$58A6D7F6", TABLE, "DEPARTMENTS"@"SEL$1", SCALE_ROWS=5.185185185)
*/
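
(If you don't have that script to hand, a query along the following lines is a widely used way of pulling the hints out of a profile's stored data. This is a sketch, not necessarily the same as the script from the earlier post; it reads SYS-owned tables, so it needs appropriate privileges, and the profile name is taken from the example above.)

SELECT extractvalue(value(h), '.') AS hint
FROM   sys.sqlobj$data od,
       sys.sqlobj$ so,
       table(xmlsequence(extract(xmltype(od.comp_data), '/outline_data/hint'))) h
WHERE  so.name      = 'dbiInSite'   -- profile name from the example above
AND    so.signature = od.signature
AND    so.category  = od.category
AND    so.obj_type  = od.obj_type
AND    so.plan_id   = od.plan_id;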

The reason is that when the optimizer applies transformations to the query, the query block identifiers can change. And when you adjust a cardinality estimate, you must do it for all transformations, or you will completely mess up the optimizer's choice.

When I do an explain plan that shows the query blocks, I see only the SEL$58A6D7F6 one:

SQL> explain plan for
  2  select distinct DEPARTMENT_NAME  from DEPARTMENTS join EMPLOYEES
  3  using(DEPARTMENT_ID)  where DEPARTMENT_NAME like '%ing' and SALARY>20000 ;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'basic +alias'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------
Plan hash value: 3041748347
-------------------------------------------------------------------
| Id  | Operation                             | Name              |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                   |
|   1 |  HASH UNIQUE                          |                   |
|   2 |   NESTED LOOPS SEMI                   |                   |
|   3 |    TABLE ACCESS FULL                  | DEPARTMENTS       |
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES         |
|   5 |     INDEX RANGE SCAN                  | EMP_DEPARTMENT_IX |
-------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$58A6D7F6
   3 - SEL$58A6D7F6 / DEPARTMENTS@SEL$1
   4 - SEL$58A6D7F6 / EMPLOYEES@SEL$1
   5 - SEL$58A6D7F6 / EMPLOYEES@SEL$1

In order to confirm that the duplicate OPT_ESTIMATE hints are coming from different transformations, I generated a 10053 trace and searched for SEL$6AE97DF7:

Registered qb: SEL$6AE97DF7 0x851d8eb8 (DISTINCT PLACEMENT SEL$58A6D7F6; SEL$58A6D7F6; "EMPLOYEES"@"SEL$1")
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$6AE97DF7 nbfros=2 flg=0
    fro(0): flg=0 objn=92595 hint_alias="DEPARTMENTS"@"SEL$1"
    fro(1): flg=1 objn=0 hint_alias="VW_DTP_43B5398E"@"SEL$43B5398E"

That's the Distinct Placement transformation.
Let's try the PLACE_DISTINCT hint:

SQL> explain plan for
  2  select /*+ PLACE_DISTINCT(EMPLOYEES) */ distinct DEPARTMENT_NAME  from DEPARTMENTS join EMPLOYEES
  3  using(DEPARTMENT_ID)  where DEPARTMENT_NAME like '%ing' and SALARY>20000 ;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'basic +alias'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
Plan hash value: 2901355344

--------------------------------------------------------------------
| Id  | Operation                              | Name              |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                   |
|   1 |  HASH UNIQUE                           |                   |
|   2 |   NESTED LOOPS SEMI                    |                   |
|   3 |    TABLE ACCESS FULL                   | DEPARTMENTS       |
|   4 |    VIEW PUSHED PREDICATE               | VW_DTP_43B5398E   |
|   5 |     TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES         |
|   6 |      INDEX RANGE SCAN                  | EMP_DEPARTMENT_IX |
--------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$6AE97DF7
   3 - SEL$6AE97DF7 / DEPARTMENTS@SEL$1
   4 - SEL$9B757045 / VW_DTP_43B5398E@SEL$43B5398E
   5 - SEL$9B757045 / EMPLOYEES@SEL$1
   6 - SEL$9B757045 / EMPLOYEES@SEL$1

Here is where the following hint makes sense:

OPT_ESTIMATE(@"SEL$6AE97DF7", TABLE, "DEPARTMENTS"@"SEL$1", SCALE_ROWS=5.185185185)

The same cardinality adjustment must be done for each transformation that the optimizer is evaluating.

That observation brings me to the following question: what happens to your profiles when you upgrade to a version that brings new optimizer transformations? The optimizer will be comparing some plans with adjusted cardinalities against some plans with non-adjusted cardinalities. And that is probably not a good idea.

In my opinion, SQL Profiles are just like hints: a short term workaround that must be documented and re-evaluated at each upgrade.

PeopleSoft's paths to the Cloud - Part III

Javier Delgado - Sat, 2014-11-08 14:53
In my previous posts in this series, I have covered how cloud computing could be used to reduce costs and maximize the flexibility of PeopleSoft Development and Production environments. In both cases, I focused on one specific area of cloud computing: Infrastructure as a Service (IaaS).

Today I will explain what kind of benefits can be expected from another important area: Database as a Service (DBaaS). Instead of you installing and maintaining the database on an IaaS-provisioned server, DBaaS providers take responsibility for installing and maintaining the database.

There are many players in this market, including Amazon, Microsoft and Oracle. The service features may differ, but in a nutshell, they normally offer these capabilities:

  • Backups: database backups are automated, and you can restore a point-in-time backup at any moment. You can also decide when to take a snapshot of your database, which may eventually be used to create another database instance (for example, to copy your Production database into the User Acceptance environment).
  • High Availability: while some IaaS providers do not support high-availability database solutions such as Oracle RAC (for instance, it is not supported by Amazon EC2), many DBaaS providers include high availability by default.
  • Contingency: some providers maintain a standby copy of your database in another data center. This allows you to quickly restore your system in case the original data center's services are lost.
  • Patching: although you can decide when to apply a database patch, the DBaaS provider applies it for you. In many cases, you can turn on automatic patching to make sure your database engine is always up to date.
  • Monitoring: providers give the system administrators access to a management console, in which they can monitor the database behavior and add or remove resources as needed.
  • Notifications: in order to simplify the monitoring effort, you normally have the possibility of setting up notifications to be received by email and/or SMS upon a list of events, which may include CPU usage, storage availability, etc.

From my point of view, these services offer significant advantages for PeopleSoft customers, particularly if your current architecture does not support all the previously mentioned services or you do not have the right DBA skills in-house. Even if your organization does not fall into these categories, the scalability and elasticity of DBaaS providers are very difficult for most internal IT organizations to match.

In any case, if you are interested in using Database as a Service for your PeopleSoft installation, make sure you correctly evaluate what each provider can give you.